Wednesday, December 17, 2025

Open-source software

From Wikipedia, the free encyclopedia
A screenshot of Debian Linux running the Cinnamon desktop environment, featuring Firefox (accessing Wikipedia which uses MediaWiki), LibreOffice Writer, Vim, VLC and Nemo file manager, all of which are open-source software

Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose. Open-source software may be developed in a collaborative, public manner. Open-source software is a prominent example of open collaboration, meaning any capable user is able to participate online in development, making the number of possible contributors indefinite. The ability to examine the code facilitates public trust in the software.

Open-source software development can bring in diverse perspectives beyond those of a single company. A 2024 study estimated the value of open-source software to firms at $8.8 trillion, since firms would need to spend 3.5 times what they currently do if open-source software did not exist.

Open-source code can be studied, and it allows capable end users to adapt software to their personal needs, much as user scripts and custom style sheets allow for web sites. Users can eventually publish a modification as a fork for users with similar preferences, or submit possible improvements directly as pull requests.

Definitions

The logo of the Open Source Initiative

The Open Source Initiative's (OSI) definition is recognized by several governments internationally as the standard or de facto definition. OSI uses The Open Source Definition to determine whether it considers a software license open source. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens. Perens did not base his writing on the "four freedoms" from the Free Software Foundation (FSF), which were only widely available later.

Under Perens' definition, open source is a broad software license that makes source code available to the general public with relaxed or non-existent restrictions on the use and modification of the code. It is an explicit "feature" of open source that it puts very few restrictions on the use or distribution by any organization or user, in order to enable the rapid evolution of the software.

According to Feller et al. (2005), the terms "free software" and "open-source software" should be applied to any "software products distributed under terms that allow users" to use, modify, and redistribute the software "in any manner they see fit, without requiring that they pay the author(s) of the software a royalty or fee for engaging in the listed activities."

Despite initially accepting it, Richard Stallman of the FSF now flatly opposes the term "Open Source" being applied to what they refer to as "free software". Although he agrees that the two terms describe "almost the same category of software", Stallman considers equating the terms incorrect and misleading. Stallman also opposes the professed pragmatism of the Open Source Initiative, as he fears that the free software ideals of freedom and community are threatened by compromising on the FSF's idealistic standards for software freedom. The FSF considers free software to be a subset of open-source software, and Richard Stallman explained that DRM software, for example, can be developed as open source, despite that it does not give its users freedom (it restricts them), and thus does not qualify as free software.

Open-source software development

Development model

In his 1997 essay The Cathedral and the Bazaar, influential open-source contributor Eric S. Raymond suggests a model for developing OSS known as the bazaar model. Raymond likens the development of software by traditional methodologies to building a cathedral, with careful isolated work by individuals or small groups. He suggests that all software should instead be developed using the bazaar style, openly and with differing agendas and approaches.

In the traditional model of development, which he called the cathedral model, development takes place in a centralized way and roles are clearly defined: people dedicated to designing (the architects), people responsible for managing the project, and people responsible for implementation. Traditional software engineering follows the cathedral model.

The bazaar model, however, is different. In this model, roles are not clearly defined. Raymond proposed that software developed using the bazaar model should exhibit the following patterns:

  • Users should be treated as co-developers: Users should have access to the source code of the software and are encouraged to submit additions, code fixes, bug reports, documentation, and so on. Having more co-developers increases the rate at which the software evolves. Linus's law states that, given enough eyeballs, all bugs are shallow: if many users view the source code, they will eventually find all bugs and suggest how to fix them. Some users have advanced programming skills, and each user's machine provides an additional testing environment in which new bugs can be found and fixed.
  • Early releases: The first version of the software should be released as early as possible so as to increase one's chances of finding co-developers early.
  • Frequent integration: Code changes should be integrated (merged into a shared code base) as often as possible so as to avoid the overhead of fixing a large number of bugs at the end of the project life cycle. Some open-source projects have nightly builds where integration is done automatically.
  • Several versions: There should be at least two versions of the software. There should be a buggier version with more features and a more stable version with fewer features. The buggy version (also called the development version) is for users who want the immediate use of the latest features and are willing to accept the risk of using code that is not yet thoroughly tested. The users can then act as co-developers, reporting bugs and providing bug fixes.
  • High modularization: The general structure of the software should be modular allowing for parallel development on independent components.
  • Dynamic decision-making structure: There is a need for a decision-making structure, whether formal or informal, that makes strategic decisions depending on changing user requirements and other factors. Compare this with extreme programming.

The process of open-source development begins with requirements elicitation, in which developers consider whether they should add new features or whether a bug needs to be fixed in their project. This is established by communicating with the OSS community through avenues such as bug reporting and tracking, or mailing lists and project pages. Next, OSS developers select or are assigned to a task and identify a solution. Because there are often many different possible routes for solutions in OSS, the best solution must be chosen with careful consideration and sometimes even peer feedback. The developer then begins to develop and commit the code. The code is then tested and reviewed by peers. Developers can edit and evolve their code through feedback from continuous integration. Once the leadership and community are satisfied with the whole project, it can be partially released and user instructions can be documented. If the project is ready to be released, it is frozen, with only serious bug fixes or security repairs occurring. Finally, the project is fully released and is then changed only through minor bug fixes.

Advantages

Open source implementation of a standard can increase the adoption and long-term viability of that standard. It often fosters developer loyalty, as contributors feel a greater sense of participation and ownership in the development process and the end product.

Moreover, OSS requires lower marketing and logistical costs. OSS can be a tool to promote a company's image, including its commercial products. The OSS development approach has helped produce reliable, high-quality software quickly and inexpensively.

Open-source development offers the potential to quicken innovation and the creation of social value. In France, for instance, a policy that incentivized the government to favor free and open-source software increased OSS contributions to nearly 600,000 per year, generating social value by increasing the quantity and quality of open-source software. This policy also led to an estimated increase of up to 18% in the number of tech startups and a 14% increase in the number of people employed in the IT sector.

OSS can be highly reliable when it has thousands of independent programmers testing and fixing bugs of the software. Open source is not dependent on the company or author that originally created it. Even if the company fails, the code continues to exist and be developed by its users.

OSS is flexible because its modular design allows programmers to build custom interfaces or add new capabilities, and it is innovative because open-source programs are the product of collaboration among a large number of different programmers. The mix of divergent perspectives, corporate objectives, and personal goals speeds up innovation.

Moreover, free software can be developed in accordance with purely technical requirements. It does not require thinking about commercial pressure that often degrades the quality of the software. Commercial pressures make traditional software developers pay more attention to customers' requirements than to security requirements, since such features are somewhat invisible to the customer.

Development tools

In open-source software development, tools are used to support the development of the product and the development process itself.

Version control systems, whether centralized (CVCS) or distributed (DVCS), are examples of tools, often themselves open source, that help manage the source code files and the changes to those files for a software project in order to foster collaboration. A CVCS keeps the project history in a single central repository, while a DVCS gives every user a complete local repository. Concurrent Versions System (CVS) and later Subversion (SVN) are examples of CVCS, whereas Git is a DVCS and the most widely used version control system. Repositories are hosted and published on source-code-hosting services such as GitHub or GitLab.
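To make the distinction concrete, here is a minimal toy sketch in Python (not the API of Git or any real tool; the Repository class and its methods are hypothetical): in the centralized model every commit lands in one shared history, while in the distributed model each contributor clones the full history, commits locally, and synchronizes later.

```python
# Illustrative sketch only, assuming a toy model of version control;
# not the behavior or API of any real CVCS or DVCS.

class Repository:
    """Holds an ordered history of commits (here just message strings)."""
    def __init__(self, history=None):
        self.history = list(history or [])

    def commit(self, message):
        self.history.append(message)

    def clone(self):
        # DVCS-style: a clone carries the entire history locally.
        return Repository(self.history)

    def push(self, other):
        # Send the commits that the other repository does not have yet.
        other.history.extend(self.history[len(other.history):])

# CVCS-style workflow: everyone commits straight to one central repository.
central = Repository()
central.commit("initial import")

# DVCS-style workflow: a contributor works against a full local copy
# and only later synchronizes with the shared repository.
local = central.clone()
local.commit("fix typo in README")
local.push(central)

print(central.history)   # ['initial import', 'fix typo in README']
```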

Open-source projects use utilities such as issue trackers to organize open-source software development. Commonly used bug trackers include Bugzilla and Redmine.

Tools such as mailing lists and IRC provide means of coordination and discussion of bugs among developers. Project web pages, wiki pages, roadmap lists and newsgroups allow for the distribution of project information that focuses on end users.

Opportunities for participation

Contributing

OSS participants fall into several basic roles, beginning with the leadership at the center of the project, who have control over its execution. Next are the core contributors, who have a great deal of experience and authority in the project and may guide the other contributors. Non-core contributors have less experience and authority, but contribute regularly and are vital to the project's development. New contributors are the least experienced, but with mentorship and guidance can become regular contributors.

Some possible ways of contributing to open-source software include such roles as programming, maintaining, user interface design and testing, web design, bug triage, accessibility design and testing, UX design, code testing, and security review and testing. However, there are several ways of contributing to OSS projects even without coding skills. For example, some less technical ways of participating are documentation writing and editing, translation, project management, event organization and coordination, marketing, release management, community management, and public relations and outreach.

Funding is another way that individuals and organizations choose to contribute to open-source projects. Groups like Open Collective provide a means for individuals to contribute monthly to support their favorite projects. Organizations like the Sovereign Tech Fund are able to contribute millions to supporting the tools the German government uses. The National Science Foundation established a Pathways to Enable Open-Source Ecosystems (POSE) program to support open-source innovation.

Industry participation

The adoption of open-source software by industry is increasing over time. OSS is popular in several industries such as telecommunications, aerospace, healthcare, and media & entertainment due to the benefits it provides. Adoption of OSS is more likely in larger organizations and is dependent on the company's IT usage, operating efficiencies, and the productivity of employees.

Industries are likely to use OSS for back-office functionality, sales support, research and development, software features, quick deployment, portability across platforms and avoidance of commercial license management. Additionally, lower hardware and ownership costs are also important benefits.

Prominent organizations

Organizations that contribute to the development and expansion of free and open-source software movements exist all over the world. These organizations are dedicated to goals such as teaching and spreading technology. As listed by a former vice president of the Open Source Initiative, some American organizations include the Free Software Foundation, Software Freedom Conservancy, the Open Source Initiative and Software in the Public Interest. Within Europe some notable organizations are Free Software Foundation Europe, open-source projects EU (OSP) and OpenForum Europe (OFE). One Australian organization is Linux Australia, while Asia has Open source Asia and FOSSAsia. Free and open source software for Africa (FOSSFA) and OpenAfrica are African organizations, and Central and South America have such organizations as FLISOL and GRUP de usuarios de software libre Peru. Outside of these, many more organizations dedicated to the advancement of open-source software exist.

Licensing

FOSS products are generally licensed under two types of licenses: permissive licensing and copyleft licensing. Both of these types differ from proprietary licensing in that they can allow more users access to the software and allow for the creation of derivative works as specified by the terms of the specific license, as each license has its own rules. Permissive licenses allow recipients of the software to exercise the rights granted under the author's copyright without having to use the same license for distribution. Examples of this type of license include the BSD, MIT, and Apache licenses. Copyleft licenses are different in that they require recipients to use the same license for at least some parts of the distribution of their works. Strong copyleft licenses require all derivative works to use the same license, while weak copyleft licenses require the use of the same license only under certain conditions. Examples of this type of license include the GNU family of licenses, and the MPL and EPL licenses. The similarities between these two categories of licensing are that they provide a broad grant of copyright rights, require that recipients preserve copyright notices, and require that a copy of the license be provided to recipients with the code.

One important legal precedent for open-source software was created in 2008, when the Jacobsen v. Katzer case enforced terms of the Artistic License, including attribution and identification of modifications. The ruling in this case cemented enforcement under copyright law when the conditions of the license were not followed. Because of the similarity of the Artistic License to other open-source software licenses, the ruling created a precedent that applied widely.

Examples of free-software/open-source licenses include the Apache licenses, BSD licenses, the GNU General Public License, the GNU Lesser General Public License, the MIT License, the Eclipse Public License and the Mozilla Public License.

Several gray areas exist within software regulation that have great impact on open-source software, such as whether software is a good or a service, what can be considered a modification, governance through contract versus license, ownership, and the right of use. While there have been developments on these issues, they often lead to even more questions. The existence of these uncertainties in regulation has a negative impact on industries involved in technology as a whole.

Within the legal history of software as a whole, there was much debate on whether to protect it as intellectual property under patent law, copyright law or establishing a unique regulation. Ultimately, copyright law became the standard with computer programs being considered a form of literary work, with some tweaks of unique regulation.

Software is generally considered source code and object code, with both being protectable, though there is legal variety in this definition. Some jurisdictions attempt to expand or reduce this conceptualization for their own purposes. For example, the European Court of Justice defines a computer program as not including the functionality of a program, the programming language, or the format of data files. By limiting protections of the different aspects of software, the law favors an open-source approach to software use. The US especially has an open approach to software, with most open-source licenses originating there. However, this has increased the focus on patent rights within these licenses, which has seen backlash from the OSS community, who prefer other forms of IP protection.

Another issue includes technological protection measures (TPM) and digital rights management (DRM) techniques which were internationally legally recognized and protected in the 1996 World Intellectual Property Organization (WIPO) Treaty. Open source software proponents disliked these technologies as they constrained end-users potentially beyond copyright law. Europe responded to such complaints by putting TPM under legal controls, representing a victory for OSS supporters.

Economic/business implications

Participants in the Free Knowledge Game Jam 2015, an open source and open data oriented game jam

In open-source communities, instead of owning the software produced, the producer owns the development of the evolving software. In this way, the future of the software is open, making ownership or intellectual property difficult within OSS. Licensing and branding can prevent others from stealing it, preserving its status as a public good. Open source software can be considered a public good as it is available to everyone and does not decrease in value for others when downloaded by one person. Open source software is unique in that it becomes more valuable as it is used and contributed to, instead of diminishing the resource. This is explained by concepts such as investment in reputation and network effects.

The economic model of open-source software can be summarized as follows: developers contribute work to projects, creating public benefits. Developers choose projects based on the perceived benefits or costs, such as improved reputation or the value of the project. Developers' motivations come from many different places and reasons, but the important takeaway is that money is not the only, or even the most important, incentive.

Because economic theory mainly focuses on the consumption of scarce resources, the OSS dynamic can be hard to understand. In OSS, producers become consumers by reaping the rewards of contributing to a project. For example, a developer becomes well regarded by their peers for a successful contribution to an OSS project. The social benefits and interactions of OSS are difficult to account for in economic models as well. Furthermore, the innovation of technology creates constantly changing value discussions and outlooks, making economic models unable to predict social behavior.

Although OSS is theoretically challenging in economic models, it is explainable as a sustainable social activity that requires resources. These resources include time, money, technology and contributions. Many developers have used technology funded by organizations such as universities and governments, though these same organizations benefit from the work done by OSS. As OSS grows, hybrid systems containing OSS and proprietary systems are becoming more common.

Since the mid-2000s, more and more tech companies have begun to use OSS; for example, Dell has sold computers with Linux already installed. Microsoft itself has launched a Linux-based operating system despite its previous animosity toward the OSS movement. Despite these developments, these companies tend to use OSS only for certain purposes, leading to worries that OSS is being taken advantage of by corporations without giving anything in return.

Government uses

Many governments are interested in implementing and promoting open-source software due to the many benefits provided: for example, the UK government issued a policy promoting open source and open standards in 2004, restating the policy in 2009: "the Government will actively and fairly consider open source solutions alongside proprietary ones". However, an issue to be considered is cybersecurity. While accidental vulnerabilities are possible, so are attacks by outside agents. Because of these fears, governmental interest in contributing to the governance of software has become more prominent. However, these are the broad strokes of the issue, with each country having their own specific politicized interactions with open-source software and their goals for its implementation. For example, the United States has focused on national security in regard to open-source software implementation due to the perceived threat of the increase of open-source software activity in countries like China and Russia, with the Department of Defense considering multiple criteria for using OSS. These criteria include whether it comes from and is maintained by trusted sources, whether it will continue to be maintained, if there are dependencies on sub-components in the software, component security and integrity, and foreign governmental influence.

Another issue for governments in regard to open source is their investment in technologies such as operating systems, semiconductors, cloud computing, and artificial intelligence. These technologies all have implications for global cooperation, again opening up security issues and political consequences. Many countries have to balance technological innovation against technological dependence in these partnerships. For example, after China's open-source-dependent company Huawei was prevented from using Google's Android system in 2019, it began to create its own alternative operating system, HarmonyOS.

Germany recently established the Sovereign Tech Fund to help support the governance and maintenance of the software that it uses.

Open software movement

History

In the early days of computing, particularly during the 1950s and 1960s, programmers and developers commonly shared software to learn from one another and advance the field. Early systems such as Unix even provided users with access to their source code, allowing collaboration and modification. However, with the rise of the commercial software industry in the 1970s and 1980s, this culture of open sharing began to decline as proprietary models became dominant. Despite this shift, academic and research institutions continued to promote collaborative software development practices.

In response, the open-source movement was born out of the work of skilled programmer enthusiasts, widely referred to as hackers or hacker culture. One of these enthusiasts, Richard Stallman, was a driving force behind the free software movement, which would later allow for the open-source movement. In 1984, he resigned from MIT to create a free operating system, GNU, after the programmer culture in his lab was stifled by proprietary software preventing source code from being shared and improved upon. GNU was UNIX compatible, meaning that the programmer enthusiasts would still be familiar with how it worked. However, it quickly became apparent that there was some confusion with the label Stallman had chosen of free software, which he described as "free as in free speech, not as in free beer", referring to the meaning of free as freedom rather than price. He later expanded this concept of freedom to the four essential freedoms. Through GNU, open-source norms of incorporating others' source code, community bug fixes and suggestions of code for new features appeared. In 1985, Stallman founded the Free Software Foundation (FSF) to promote changes in software and to help write GNU. In order to prevent his work from being used in proprietary software, Stallman created the concept of copyleft, which allowed the use of his work by anyone, but under specific terms. To do this, he created the GNU General Public License (GNU GPL) in 1989, which was updated in 1991. In 1991, GNU was combined with the Linux kernel written by Linus Torvalds, as a kernel was missing in GNU. The operating system is now usually referred to as Linux. Throughout this period there were many other free software projects and licenses, such as the Berkeley Software Distribution, TeX, and the X Window System, all with different ideas of what free software was and should be, as well as of the morality of proprietary software.

As free software developed, the Free Software Foundation began to look at how to bring free software ideas and perceived benefits to the commercial software industry. It was concluded that the FSF's social activism was not appealing to companies, and that a way was needed to rebrand the free software movement to emphasize the business potential of sharing and collaborating on software source code. The term open source was suggested by Christine Peterson in 1998 at a meeting of supporters of free software. Many in the group felt the name free software was confusing to newcomers and holding back industry interest, and they readily accepted the new designation of open source, creating the Open Source Initiative (OSI) and the OSI definition of what open-source software is. The OSI's definition is now recognized by several governments internationally as the standard or de facto definition. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens. The OSI definition differed from the free software definition in that it allows the inclusion of proprietary software and allows more liberties in its licensing. Some, such as Stallman, agree more with the original concept of free software as a result, because it takes a strong moral stance against proprietary software, though there is much overlap between the two movements in terms of the operation of the software.

While the Open Source Initiative sought to encourage the use of the new term and evangelize the principles it adhered to, commercial software vendors found themselves increasingly threatened by the concept of freely distributed software and universal access to an application's source code, with an executive of Microsoft calling open source an intellectual property destroyer in 2001. However, while free and open-source software (FOSS) has historically played a role outside of mainstream private software development, companies as large as Microsoft have begun to develop official open source presences on the Internet. IBM, Oracle, and State Farm are just a few of the companies with a serious public stake in today's competitive open source market, marking a significant shift in the corporate philosophy concerning the development of FOSS.

Future

The open-source software community, and the free software community by extension, has become successful, if somewhat confused about what it stands for. Android and Ubuntu, for example, are milestones in open-source software's rise to prominence from the sidelines of technological innovation, where it sat in the early 2000s. However, some in the community consider them failures in how they represent OSS, due to issues such as the downplaying of Android's open-source core by Google and its partners, the use of an Apache license that allowed forking and resulted in lost opportunities for collaboration within Android, the prioritization of convenience over freedom in Ubuntu, and features within Ubuntu that track users for marketing purposes.

The use of OSS has become more common in business, with 78% of companies reporting that they run all or part of their operations on FOSS. The popularity of OSS has risen to the point that Microsoft, once a detractor of OSS, now includes it in its own systems. However, this success has raised concerns that will shape the future of OSS, as the community must answer questions such as what OSS is, what it should be, and what, if anything, should be done to protect it. All in all, while the free and open-source revolution has slowed to a perceived equilibrium in the marketplace, that does not mean it is over, as many theoretical discussions must still take place to determine its future.

Comparisons with other software licensing/development models

Closed source / proprietary software

Open-source software differs from proprietary software in that it is publicly available, its license requires no fees, and modification and distribution are allowed within the license's terms. All of this works to prevent a monopoly on any OSS product, whereas a monopoly is a goal of proprietary software. Proprietary software limits customers' choices to committing to that software, upgrading it, or switching to other software, so that customers' software preferences are shaped by the monetary costs involved. The ideal scenario for the proprietary software vendor is lock-in, where the customer does not or cannot switch software because of these costs and continues to buy products from that vendor.

Within proprietary software, bug fixes can only be provided by the vendor, moving platforms requires another purchase and the existence of the product relies on the vendor, who can discontinue it at any point. Additionally, proprietary software does not provide its source code and cannot be altered by users. For businesses, this can pose a security risk and source of frustration, as they cannot specialize the product to their needs, and there may be hidden threats or information leaks within the software that they cannot access or change.

Free software

Under OSI's definition, open source is a broad software license that makes source code available to the general public with relaxed or non-existent restrictions on the use and modification of the code. It is an explicit feature of open source that it puts very few restrictions on the use or distribution by any organization or user, in order to enable the rapid evolution of the software.

Richard Stallman, leader of the free software movement and founder of the Free Software Foundation, opposes the term open source being applied to what the FSF refers to as free software. Although he agrees that the two terms describe almost the same category of software, Stallman considers equating them incorrect and misleading. He believes that the main difference is that choosing one term over the other signals one's goals: development (open source) or a social stance (free software). Nevertheless, there is significant overlap between open-source software and free software. Stallman also opposes the professed pragmatism of the Open Source Initiative, as he fears that the free software ideals of freedom and community are threatened by compromising on the FSF's idealistic standards for software freedom. The FSF considers free software to be a subset of open-source software, and Richard Stallman explained that DRM software, for example, can be developed as open source even though it restricts its users, and thus does not qualify as free software.

The FSF said that the term open source fosters an ambiguity of a different kind such that it confuses the mere availability of the source with the freedom to use, modify, and redistribute it. On the other hand, the term free software was criticized for the ambiguity of the word free, which was seen as discouraging for business adoption, and for the historical ambiguous usage of the term.

Consequently, developers have used the alternative terms free and open-source software (FOSS) or free/libre and open-source software (FLOSS) to describe open-source software that is also free software.

Source-available software

Software is source-available when its source code, the human-readable form of the program, can be inspected. To be source-available or FOSS, the source code does not need to be accessible to everyone, only to the users of that software. While all FOSS is source-available, because this is required by the Open Source Definition, not all source-available software is FOSS. For example, if the software does not meet other aspects of the Open Source Definition, such as permitting modification or redistribution, it is not FOSS even though its source code is available.

Open-sourcing

A recent trend among software companies is open-sourcing, that is, transitioning previously proprietary software into open-source software by releasing it under an open-source license. Examples of companies that have done this include Google, Microsoft and Apple. Additionally, open-sourcing can refer to programming open-source software or installing open-source software. Open-sourcing can be beneficial in multiple ways, such as attracting more external contributors who bring new perspectives and problem-solving capabilities. The downsides of open-sourcing include the work that has to be done to maintain the new community, such as making the code base easily understandable, setting up communication channels for new developers and creating documentation so that new developers can easily join. However, a review of several open-sourced projects found that although a newly open-sourced project attracts many newcomers, a great number of them are likely to leave the project soon, and their forks are also likely to have little impact.

Other

Other concepts that may share some similarities to open source are shareware, public domain software, freeware, and software viewers/readers that are freely available but do not provide source code. However, these differ from open source software in access to source code, licensing, copyright and fees.

Society and culture

Demographics

Despite being able to collaborate internationally, open-source software contributors have been found to be located mostly in large clusters, such as Silicon Valley, that largely collaborate within themselves. Possible reasons for this phenomenon are that the OSS contributor demographic largely works in software, meaning that the geographic distribution of OSS closely follows that of the software industry, and that collaborations may be encouraged through work and social networks. Code acceptance can be affected by status within these social network clusters, creating unfair predispositions in code acceptance based on location. Barriers to international collaboration also include linguistic and cultural differences. Furthermore, each country has been shown to have a higher acceptance rate for code from contributors within its own country, with the exception of India, indicating a bias toward culturally similar collaborators.

In 2021, the countries with the highest open-source software contributions were, in order, the United States, China, Germany, India, and the UK. A 2021 study found the countries with the most OSS developers per capita to be, in order, Iceland, Switzerland, Norway, Sweden, and Finland, while in 2008 the countries with the most estimated contributors on SourceForge were the United States, Germany, the United Kingdom, Canada and France. Though there have been several studies of the distribution and contributions of OSS developers, this is still an open field that can be measured in several different ways. For instance, information and communication technology participation, population, wealth and the proportion of the population with internet access have been shown to correlate with OSS contributions.

Although gender diversity has been found to enhance team productivity, women still face biases while contributing to open source software projects when their gender is identifiable. In 2002, only 1.5% of international open-source software developers were women, while women made up 28% of tech industry roles, demonstrating their low representation in the software field. Despite OSS contributions having no prerequisites, this gender bias may continue to exist due to the common belief of contributors that gender should not matter, and the quality of code should be the only consideration for code acceptance, preventing the community from addressing the systemic disparities in female representation. However, a more recent figure of female OSS participation internationally calculated across 2005 to 2021 is 9.8%, with most being recent contributors, indicating that female participation may be growing.

Motivations

There are many motivations for contributing to the OSS community. For one, it is an opportunity to learn and practice multiple skills such as coding and other technology related abilities, but also fundamental skills such as communication and collaboration and practical skills needed to excel in technology related fields such as issue tracking or version control. Instead of learning through a classroom or a job, learning through contributing to OSS allows participants to learn at their own pace and follow what interests them. When contributing to OSS, the contributor can learn the current industry best practices, technology and trends and even have the opportunity to contribute to the next big innovation as OSS grows increasingly popular within the tech field. Contributing to OSS without payment means there is no threat of being fired, though reputations can take a hit. On the other hand, a huge motivation to contribute to OSS is the reputation gained as one grows one's public portfolio.

Disparities

Even though programming was originally seen as a female profession, a large gender gap remains in computing. Social identity tends to be a large concern, as women in the tech industry face insecurity about attracting unwanted male attention and harassment, or about being seen as unfeminine in their technology knowledge, which has a large impact on confidence. Some male participants in tech make clear that they believe women cannot fit into the culture, furthering the insecurity women feel about their place in the industry. Additionally, even in a voluntary contribution environment like open-source software, women tend to end up doing the less technical aspects of projects, such as manual testing or documentation, despite women and men showing the same productivity in OSS contributions. Explicit biases include longer feedback times, more scrutiny of code and lower acceptance rates of code. Specifically in the open-source software community, women report that sexually offensive language is common and that their identity as women is given more attention than their identity as OSS contributors. Bias is hard to address due to the belief that gender should not matter, with most contributors feeling that special treatment for women would be unfair and that success should depend only on skill, which prevents changes to make the community more inclusive.

Adoption and application

Key projects

Open source software projects are built and maintained by a network of programmers, who may often be volunteers, and are widely used in free as well as commercial products.

  • Unix: Unix is an operating system created by AT&T that served as a precursor to open-source software, in that the free and open-source software revolution began when developers tried to create operating systems without Unix code. Unix was created in the 1960s, before the commercialization of software and before the concept of open-source software was needed; therefore it was not considered a true open-source software project. It started as a research project before being commercialized in the mid-1980s. Before its commercialization, it represented many of the ideals later held by the free and open-source software revolution, including the decentralized collaboration of global users, rolling releases and a community culture of distaste toward proprietary software.
  • BSD: Berkeley Software Distribution (BSD) is an operating system that began in 1978 as a variant of Unix that mixed Unix code with code from Berkeley labs to increase functionality. Because BSD was focused on increasing functionality, it would publicly share its greatest innovations with the main Unix operating system. This is an example of the free public code sharing that is a central characteristic of FOSS today. As Unix became commercialized in the 1980s, developers and community members who did not support proprietary software began to focus on BSD, turning it into an operating system that did not include any of Unix's code. The final version of BSD was released in 1995.
  • GNU: GNU is a free operating system created by Richard Stallman in 1984, its name standing for "GNU's Not Unix". The idea was to create an alternative to Unix that would be available for anyone to use and that would allow programmers to share code freely among themselves. However, the goal of GNU was not only to replace Unix but to make a superior version with more technological capabilities. It was released before the philosophical beliefs of the free and open-source software revolution were truly defined. Because of its creation by prominent FOSS programmer Richard Stallman, GNU was heavily involved in FOSS activism, and one of its greatest achievements was the creation of the GNU General Public License (GPL), which allowed developers to release software that could be legally shared and modified.
  • Linux: Linux is an operating system kernel introduced in 1991 by Linus Torvalds. Linux was inspired by the desire to make a better version of the operating system Minix, which was not freely licensed at the time. It was radically different from what other hackers were producing at the time because it was totally free of cost and its development was decentralized. Later, Linux was put under the GPL license, allowing people to make money with Linux and bringing Linux into the FOSS community.
  • Apache: Apache began in 1995 as a collaboration among a group of developers who released their own web server out of frustration with the NCSA HTTPd code base. The name Apache was chosen because of the several patches they applied to that code base. Within a year of its release, it became the world's leading web server. Soon, Apache came out with its own license, creating discord in the greater FOSS community, though it ultimately proved successful. The Apache license allowed permitted members to directly access source code, a marked difference from the approaches of GNU and Linux.

Extensions for non-software use

While the term open source applied originally only to the source code of software, it is now being applied to many other areas such as open-source ecology, a movement to decentralize technologies so that any human can use them. However, it is often misapplied to other areas that have different and competing principles, which overlap only partially.

The same principles that underlie open-source software can be found in many other ventures, such as open source, open content, and open collaboration.

This "culture" or ideology takes the view that the principles apply more generally to facilitate concurrent input of different agendas, approaches, and priorities, in contrast with more centralized models of development such as those typically used in commercial companies.

Value

More than 90 percent of companies use open-source software as a component of their proprietary software. The decision to use open-source software, or even engage with open-source projects to improve existing open-source software, is typically a pragmatic business decision. When proprietary software is in direct competition with an open-source alternative, research has found conflicting results on the effect of the competition on the proprietary product's price and quality.

For decades, some companies have made servicing of an open-source software product for enterprise users their business model. These companies control an open-source software product, and instead of charging for licensing or use, charge for improvements, integration, and other servicing. Software as a service (SaaS) products based on open-source components are increasingly common.

Open-source software is preferred for scientific applications, because it increases transparency and aids in the validation and acceptance of scientific results.

Group mind (science fiction)

From Wikipedia, the free encyclopedia
 

A group mind, group ego, hive mind, mind coalescence, or gestalt intelligence in science fiction is a plot device in which multiple minds, or consciousnesses, are linked into a single collective consciousness or intelligence.

Overview

"Hive mind" tends to describe a group mind in which the linked individuals have no identity or free will and are possessed or mind-controlled as extensions of the hive mind. It is frequently associated with the concept of an entity that spreads among individuals and suppresses or subsumes their consciousness in the process of integrating them into its own collective consciousness. The concept of the group or hive mind is an intelligent version of real-life superorganisms such as beehives or ant colonies.

The first alien hive society was depicted in H. G. Wells's The First Men in the Moon (1901) while the use of human hive minds in literature goes back at least as far as David H. Keller's The Human Termites (published in Wonder Stories in 1929) and Olaf Stapledon's science-fiction novel Last and First Men (1930), which is the first known use of the term "group mind" in science fiction. The phrase "hive mind" in science fiction has been traced to Edmond Hamilton's novel The Face of the Deep (published in Captain Future in 1942) referring to the hive mind of bees as a simile, then James H. Schmitz's Second Night of Summer (1950). A group mind might be formed by any fictional plot device that facilitates brain to brain communication, such as telepathy.

Some hive minds feature members that are controlled by a centralised "hive brain," "hive queen," or "overmind," but others feature a decentralised approach in which members interact equally or roughly equally to come to decisions. The packs of Tines in Vernor Vinge's A Fire Upon the Deep and The Children of the Sky are an example of such decentralized group minds.

Hive minds are typically viewed in a negative light, especially in earlier works, but some newer works portray them as neutral or positive.

As conceived in speculative fiction, hive minds often imply (almost) complete loss (or lack) of individuality, identity, and personhood. However, while the individual members of a group mind may not have such things, the group mind as a whole will have them, possibly even to a greater degree than individual people (just as a human has more personhood than a single neuron). The individuals forming the hive may specialize in different functions, similarly to social insects.

Examples

In literature
  • 1901 – The First Men in the Moon by H. G. Wells: The main characters travel to the Moon and encounter the society of the Selenites, which exhibits properties of a group mind.
  • 1929 – The Human Termites by David H. Keller: A scientist discovers that termites are all parts of a larger organism, controlled by central intelligences thousands of years old that plan to conquer the world.
  • 1951 – The Puppet Masters by Robert A. Heinlein: Slug-like parasitic aliens capable of controlling any organism they attach themselves to spread among people while humanity adopts measures to exterminate them.
  • 1960 – Meeting of the Minds by Robert Sheckley: A scorpion-like alien named Quedak has a mission to unify diverse sentient beings into a single collective consciousness.
  • 1987 – The Tommyknockers by Stephen King: An alien spacecraft gradually transforms the residents of a small Maine town into advanced but soulless beings who use alien technology powered by a form of collective mental or psychic energy.
 
In movies
  • 1994 – The Puppet Masters: Film adaptation of the novel The Puppet Masters.
  • 2009 – Eyeborgs: ODIN, a network of camera robots, films everything and attacks anyone it deems a threat to its surveillance state, then fabricates video evidence of something entirely different to cover its trail.
 
In television
  • 1989 – Star Trek: The Next Generation, episode "Q Who": Introduced the Borg, a hive-mind alien collective that became a recurring antagonist of the series.
  • 2015 – Rick and Morty, episode "Auto Erotic Assimilation": Introduced the hive-mind character Unity.
  • 2016 – Stranger Things, seasons 2–5: Introduced a hive mind that connects all "flayed" creatures and humans.
  • 2021 – Inside Job, episode "Reagan & Mychelle's Hive School Reunion": Myc, a sentient mushroom-like being, attends his "hive school" reunion, where the students were part of a hive mind.
  • 2025 – Pluribus: The plot of the series revolves around a hive mind that encompasses all people on Earth except for a few individuals.

Scattering

From Wikipedia, the free encyclopedia
A wine glass in an LCD projector's light beam makes the beam scatter.

In physics, scattering is a wide range of physical processes where moving particles or radiation of some form, such as light or sound, are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century). As more "ray"-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of "heat rays" (not then recognized as electromagnetic in nature) in 1800. John Tyndall, a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. Near the end of the 19th century, the scattering of cathode rays (electron beams) and X-rays was observed and discussed. With the discovery of subatomic particles (e.g. Ernest Rutherford in 1911) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena.

Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons, photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors.

The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers, are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory.

Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound, semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery. Particle-particle scattering theory is important in areas such as particle physics, atomic, molecular, and optical physics, nuclear physics and astrophysics. In particle physics the quantum interaction and scattering of fundamental particles is described by the Scattering Matrix or S-Matrix, introduced and developed by John Archibald Wheeler and Werner Heisenberg.

Scattering is quantified using many different concepts, including scattering cross section (σ), attenuation coefficients, the bidirectional scattering distribution function (BSDF), S-matrices, and mean free path.

Single and multiple scattering

Zodiacal light is a faint, diffuse glow visible in the night sky. The phenomenon stems from the scattering of sunlight by interplanetary dust spread throughout the plane of the Solar System.

When radiation is only scattered by one localized scattering center, this is called single scattering. It is more common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering. The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory.
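The averaging argument can be made concrete with a small simulation. The following is a minimal Monte Carlo sketch (not taken from the text): photons take exponentially distributed steps between isotropic scattering events inside a slab, and while any single photon's path is random, the transmitted fraction computed from many photons is reproducible, which is the sense in which multiple scattering behaves diffusively. The mean free path, slab thickness, and photon count are purely illustrative values.

```python
import random

def transmitted_fraction(mean_free_path, slab_thickness, n_photons=10000):
    """Monte Carlo sketch: photons enter a slab at x = 0 moving in +x,
    take exponentially distributed steps between isotropic scattering
    events, and are counted as transmitted once x exceeds the slab
    thickness, or lost once they back-scatter out at x < 0."""
    transmitted = 0
    for _ in range(n_photons):
        x = 0.0
        mu = 1.0  # cosine of the direction relative to +x (starts forward)
        while True:
            step = random.expovariate(1.0 / mean_free_path)
            x += mu * step
            if x >= slab_thickness:
                transmitted += 1
                break
            if x < 0.0:
                break  # back-scattered out of the slab
            mu = random.uniform(-1.0, 1.0)  # isotropic re-direction (1-D projection)
    return transmitted / n_photons

# Any single photon's fate is random, but the fraction over many photons
# is stable from run to run, so the overall intensity looks deterministic.
print(transmitted_fraction(mean_free_path=1.0, slab_thickness=5.0))
```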

Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions.

With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog. Multiple scattering is highly analogous to diffusion, and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers. Coherent backscattering, an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization.

Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft.

Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles. Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately.

The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality.

Theory

Scattering theory is a framework for studying and understanding the scattering of waves and particles. Wave scattering corresponds to the collision and scattering of a wave with some material object, for instance sunlight scattered by rain drops to form a rainbow. Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei, the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. More precisely, scattering consists of the study of how solutions of partial differential equations, propagating freely "in the distant past", come together and interact with one another or with a boundary condition, and then propagate away "to the distant future".

The direct scattering problem is the problem of determining the distribution of scattered radiation or particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object.

Attenuation due to scattering

Equivalent quantities used in the theory of scattering from composite specimens, but with a variety of units

When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident number of particles per unit area per unit time (I), i.e. that

dI/dx = −Q I

where Q is an interaction coefficient and x is the distance traveled in the target.

The above ordinary first-order differential equation has solutions of the form:

I = Io e^(−Q Δx) = Io e^(−Δx/λ) = Io e^(−ησ Δx) = Io e^(−ρ Δx/τ)

where Io is the initial flux, path length Δx ≡ x − xo, the second equality defines an interaction mean free path λ, the third uses the number of targets per unit volume η to define an area cross-section σ, and the last uses the target mass density ρ to define a density mean free path τ. Hence one converts between these quantities via Q = 1/λ = ησ = ρ/τ, as summarized in the figure above.
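
As a hedged numerical illustration of these relations (the target density and cross-section below are arbitrary assumed values, not data from the article), the exponential solution can be evaluated directly:

    import math

    def interaction_coefficient(eta_per_cm3, sigma_cm2):
        """Q = eta * sigma; the interaction mean free path is then lambda = 1/Q."""
        return eta_per_cm3 * sigma_cm2

    def unscattered_fraction(Q_per_cm, path_length_cm):
        """Fraction of the incident flux left in the unscattered beam: I/Io = exp(-Q*x)."""
        return math.exp(-Q_per_cm * path_length_cm)

    # Assumed illustrative values: 5e22 targets per cm^3, cross-section of 1 barn (1e-24 cm^2)
    Q = interaction_coefficient(5e22, 1e-24)            # -> 0.05 cm^-1
    print("mean free path lambda =", 1.0 / Q, "cm")     # -> 20 cm
    print("unscattered after 10 cm:", round(unscattered_fraction(Q, 10.0), 3))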

In electromagnetic absorption spectroscopy, for example, the interaction coefficient (e.g. Q in cm⁻¹) is variously called opacity, absorption coefficient, and attenuation coefficient. In nuclear physics, area cross-sections (e.g. σ in barns or units of 10⁻²⁴ cm²), density mean free path (e.g. τ in grams/cm²), and its reciprocal the mass attenuation coefficient (e.g. in cm²/gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path (e.g. λ in nanometers) is often discussed[15] instead.

Elastic and inelastic scattering

The term "elastic scattering" implies that the internal states of the scattering particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles.

The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential. The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized, representing an inelastic scattering process.

The term "deep inelastic scattering" refers to a special kind of scattering experiment in particle physics.

Mathematical framework

In mathematics, scattering theory deals with a more abstract formulation of the same set of concepts. For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time. One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future".

Solutions to differential equations are often posed on manifolds. Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space, and scattering is described by a certain map, the S matrix, on Hilbert spaces. Solutions with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together.
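
In standard notation (a sketch of the usual time-dependent formulation, not taken from this article; sign conventions vary between texts), the S-matrix is built from Møller wave operators that compare the full evolution generated by a Hamiltonian H with the free evolution generated by H₀:

    \Omega_{\pm} = \lim_{t \to \mp\infty} e^{iHt}\, e^{-iH_{0}t},
    \qquad
    S = \Omega_{+}^{\dagger}\, \Omega_{-}

Matrix elements of S between free "in" and "out" states encode the pairing between solutions in the distant past and the distant future described above.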

An important, notable development is the inverse scattering transform, central to the solution of many exactly solvable models.

Theoretical physics

Top: the real part of a plane wave travelling upwards. Bottom: The real part of the field after inserting in the path of the plane wave a small transparent disk of index of refraction higher than the index of the surrounding medium. This object scatters part of the wave field, although at any individual point, the wave's frequency and wavelength remain intact.

In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves in sea water coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of quantum electrodynamics, quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles.

In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations, are also widely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, possibly reacting, being destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis and the Born approximation.
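
As a worked illustration of the second technique (standard textbook form, not specific to this article), the first Born approximation expresses the scattering amplitude as a Fourier transform of the potential V with respect to the momentum transfer q = k − k′, with m the (reduced) mass:

    f(\mathbf{q}) \approx -\frac{m}{2\pi\hbar^{2}} \int e^{i\mathbf{q}\cdot\mathbf{r}}\, V(\mathbf{r})\, d^{3}r,
    \qquad
    \frac{d\sigma}{d\Omega} = \left| f(\mathbf{q}) \right|^{2}

The approximation is reliable when the potential is weak or the incident energy is high, so that the wave is only slightly distorted by the scatterer.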

Electromagnetics

A Feynman diagram of scattering between two electrons by emission of a virtual photon

Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering. Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering. Inelastic scattering includes Brillouin scattering, Raman scattering, inelastic X-ray scattering and Compton scattering.

Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone.

Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering. The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris, and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006).

Models of light scattering can be divided into three domains based on a dimensionless size parameter α, defined as

α = πDp / λ

where πDp is the circumference of a particle and λ is the wavelength of incident radiation in the medium. Based on the value of α, these domains are as follows (a short illustrative sketch appears after the list):

  • α ≪ 1: Rayleigh scattering (small particle compared to wavelength of light);
  • α ≈ 1: Mie scattering (particle about the same size as wavelength of light, valid only for spheres);
  • α ≫ 1: geometric scattering (particle much larger than wavelength of light).
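
A minimal Python sketch of this classification (the numeric cutoffs 0.1 and 10 are common rules of thumb and are assumptions here; the transitions between regimes are gradual rather than sharp):

    import math

    def size_parameter(diameter, wavelength):
        """Dimensionless size parameter: alpha = pi * D_p / wavelength (same length units)."""
        return math.pi * diameter / wavelength

    def scattering_regime(alpha):
        """Rough classification into the three domains listed above."""
        if alpha < 0.1:
            return "Rayleigh"
        if alpha > 10.0:
            return "geometric"
        return "Mie"

    # Example: a 100 nm aerosol droplet in 550 nm (green) light
    alpha = size_parameter(100e-9, 550e-9)
    print(round(alpha, 2), scattering_regime(alpha))   # ~0.57 -> "Mie"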

Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume whose refractive index differs from that of its surroundings, such as a particle, bubble, droplet, or even a density fluctuation. This effect was first modeled successfully by Lord Rayleigh, from whom it gets its name. In order for Rayleigh's model to apply, the sphere must be much smaller in diameter than the wavelength (λ) of the scattered wave; typically the upper limit is taken to be about 1/10 the wavelength. In this size regime, the exact shape of the scattering center is usually not very significant and can often be treated as a sphere of equivalent volume. The inherent scattering that radiation undergoes passing through a pure gas is due to microscopic density fluctuations as the gas molecules move around, which are normally small enough in scale for Rayleigh's model to apply. This scattering mechanism is the primary cause of the blue color of the Earth's sky on a clear day, as the shorter blue wavelengths of sunlight passing overhead are more strongly scattered than the longer red wavelengths according to Rayleigh's famous 1/λ⁴ relation. Along with absorption, such scattering is a major cause of the attenuation of radiation by the atmosphere. The degree of scattering varies as a function of the ratio of the particle diameter to the wavelength of the radiation, along with many other factors including polarization, angle, and coherence.
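
A quick numerical check of the 1/λ⁴ dependence (the two wavelengths are representative assumptions for blue and red light):

    # Relative Rayleigh scattering strength scales as 1/wavelength^4.
    blue_nm, red_nm = 450.0, 700.0          # assumed representative wavelengths
    ratio = (red_nm / blue_nm) ** 4
    print(f"450 nm light is scattered roughly {ratio:.1f}x more strongly than 700 nm light")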

For larger diameters, the problem of electromagnetic scattering by spheres was first solved by Gustav Mie, and scattering by spheres larger than the Rayleigh range is therefore usually known as Mie scattering. In the Mie regime, the shape of the scattering center becomes much more significant and the theory only applies well to spheres and, with some modification, spheroids and ellipsoids. Closed-form solutions for scattering by certain other simple shapes exist, but no general closed-form solution is known for arbitrary shapes.

Both Mie and Rayleigh scattering are considered elastic scattering processes, in which the energy (and thus wavelength and frequency) of the light is not substantially changed. However, electromagnetic radiation scattered by moving scattering centers does undergo a Doppler shift, which can be detected and used to measure the velocity of the scattering center/s in forms of techniques such as lidar and radar. This shift involves a slight change in energy.
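
For a rough sense of scale (a sketch, not from the article: the radar frequency and target speed below are assumed values), the round-trip Doppler shift measured by a monostatic radar is approximately 2v/λ for a target closing at radial speed v:

    c = 3.0e8                      # speed of light, m/s
    f_radar = 10e9                 # assumed X-band radar frequency, Hz
    wavelength = c / f_radar       # ~0.03 m
    v_radial = 50.0                # assumed closing speed of the target, m/s

    doppler_shift = 2.0 * v_radial / wavelength
    print(f"Doppler shift ~ {doppler_shift:.0f} Hz")   # ~3300 Hz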

At values of the ratio of particle diameter to wavelength more than about 10, the laws of geometric optics are mostly sufficient to describe the interaction of light with the particle. Mie theory can still be used for these larger spheres, but the solution often becomes numerically unwieldy.

For modeling of scattering in cases where the Rayleigh and Mie models do not apply such as larger, irregularly shaped particles, there are many numerical methods that can be used. The most common are finite-element methods which solve Maxwell's equations to find the distribution of the scattered electromagnetic field. Sophisticated software packages exist which allow the user to specify the refractive index or indices of the scattering feature in space, creating a 2- or sometimes 3-dimensional model of the structure. For relatively large and complex structures, these models usually require substantial execution times on a computer.

Electrophoresis involves the migration of macromolecules under the influence of an electric field. Electrophoretic light scattering involves applying an electric field to a liquid, which makes the suspended particles move; the greater the charge on a particle, the faster it moves.
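
As an illustrative sketch only (the Smoluchowski approximation and all numeric values below are assumptions, not part of the original text), the drift speed probed in such an experiment can be estimated from the electrophoretic mobility:

    # Drift velocity of a charged particle in an electric field, using the
    # Smoluchowski approximation for the mobility: mu = (epsilon * zeta) / eta.
    # All values are illustrative assumptions for an aqueous sample at room temperature.
    epsilon = 80.1 * 8.854e-12     # permittivity of water, F/m
    zeta = 0.03                    # assumed zeta potential, V (30 mV)
    eta = 1.0e-3                   # dynamic viscosity of water, Pa*s
    E = 1.0e3                      # assumed applied field, V/m

    mobility = epsilon * zeta / eta            # m^2 / (V*s)
    velocity = mobility * E                    # m/s
    print(f"mobility ~ {mobility:.2e} m^2/Vs, drift speed ~ {velocity:.2e} m/s")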

Molecular engineering

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Molecular_engineering

Molecular engineering is an emerging field of study concerned with the design and testing of molecular properties, behavior and interactions in order to assemble better materials, systems, and processes for specific functions. This approach, in which observable properties of a macroscopic system are influenced by direct alteration of a molecular structure, falls into the broader category of “bottom-up” design. The field is closely related to cheminformatics and to research in the computational sciences.

Molecular engineering deals with material development efforts in emerging technologies that require rigorous rational molecular design approaches towards systems of high complexity.

Molecular engineering is highly interdisciplinary by nature, encompassing aspects of chemical engineering, materials science, bioengineering, electrical engineering, physics, mechanical engineering, and chemistry. There is also considerable overlap with nanotechnology, in that both are concerned with the behavior of materials on the scale of nanometers or smaller. Given the highly fundamental nature of molecular interactions, there are a plethora of potential application areas, limited perhaps only by one's imagination and the laws of physics. However, some of the early successes of molecular engineering have come in the fields of immunotherapy, synthetic biology, and printable electronics (see molecular engineering applications).

Molecular engineering is a dynamic and evolving field with complex target problems; breakthroughs require sophisticated and creative engineers who are conversant across disciplines. A rational engineering methodology that is based on molecular principles is in contrast to the widespread trial-and-error approaches common throughout engineering disciplines. Rather than relying on well-described but poorly-understood empirical correlations between the makeup of a system and its properties, a molecular design approach seeks to manipulate system properties directly using an understanding of their chemical and physical origins. This often gives rise to fundamentally new materials and systems, which are required to address outstanding needs in numerous fields, from energy to healthcare to electronics. Additionally, with the increased sophistication of technology, trial-and-error approaches are often costly and difficult, as it may be difficult to account for all relevant dependencies among variables in a complex system. Molecular engineering efforts may include computational tools, experimental methods, or a combination of both.

History

Molecular engineering was first mentioned in the research literature in 1956 by Arthur R. von Hippel, who defined it as "… a new mode of thinking about engineering problems. Instead of taking prefabricated materials and trying to devise engineering applications consistent with their macroscopic properties, one builds materials from their atoms and molecules for the purpose at hand." This concept was echoed in Richard Feynman's seminal 1959 lecture There's Plenty of Room at the Bottom, which is widely regarded as giving birth to some of the fundamental ideas of the field of nanotechnology. In spite of the early introduction of these concepts, it was not until the mid-1980s with the publication of Engines of Creation: The Coming Era of Nanotechnology by Drexler that the modern concepts of nano and molecular-scale science began to grow in the public consciousness.

The discovery of electrically conductive properties in polyacetylene by Alan J. Heeger in 1977 effectively opened the field of organic electronics, which has proved foundational for many molecular engineering efforts. Design and optimization of these materials has led to a number of innovations including organic light-emitting diodes and flexible solar cells.

Applications

Molecular design has been an important element of many disciplines in academia, including bioengineering, chemical engineering, electrical engineering, materials science, mechanical engineering and chemistry. However, one of the ongoing challenges is in bringing together the critical mass of manpower amongst disciplines to span the realm from design theory to materials production, and from device design to product development. Thus, while the concept of rational engineering of technology from the bottom-up is not new, it is still far from being widely translated into R&D efforts.

Molecular engineering is used in many industries. Some applications of technologies where molecular engineering plays a critical role:

Consumer Products

Environmental Engineering

  • Water desalination (e.g. new membranes for highly-efficient low-cost ion removal)
  • Soil remediation (e.g. catalytic nanoparticles that accelerate the degradation of long-lived soil contaminants such as chlorinated organic compounds)
  • Carbon sequestration (e.g. new materials for CO2 adsorption)

Synthetic Biology

  • CRISPR - Faster and more efficient gene editing technique
  • Gene delivery/gene therapy - Designing molecules to deliver modified or new genes into cells of live organisms to cure genetic disorders
  • Metabolic engineering - Modifying metabolism of organisms to optimize production of chemicals (e.g. synthetic genomics)
  • Protein engineering - Altering structure of existing proteins to enable specific new functions, or the creation of fully artificial proteins
  • DNA-functionalized materials - 3D assemblies of DNA-conjugated nanoparticle lattices

Techniques and instruments used

Molecular engineers utilize sophisticated tools and instruments to make and analyze the interactions of molecules and the surfaces of materials at the molecular and nano-scale. The complexity of molecules being introduced at the surface is increasing, and the techniques used to analyze surface characteristics at the molecular level are ever-changing and improving. Meanwhile, advances in high-performance computing have greatly expanded the use of computer simulation in the study of molecular-scale systems.

Computational and Theoretical Approaches

An EMSL scientist using the environmental transmission electron microscope at Pacific Northwest National Laboratory. The ETEM provides in situ capabilities that enable atomic-resolution imaging and spectroscopic studies of materials under dynamic operating conditions. In contrast to traditional operation of TEM under high vacuum, EMSL's ETEM uniquely allows imaging within high-temperature and gas environments.

Microscopy

Molecular Characterization

Spectroscopy

Surface Science

Synthetic Methods

Other Tools

Research / Education

At least three universities offer graduate degrees dedicated to molecular engineering: the University of Chicago, the University of Washington, and Kyoto University. These programs are interdisciplinary institutes with faculty from several research areas.

The academic journal Molecular Systems Design & Engineering publishes research from a wide variety of subject areas that demonstrates "a molecular design or optimisation strategy targeting specific systems functionality and performance."

Biomining

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Biominin...