Saturday, April 4, 2026

Chemical revolution

Geoffroy's 1718 Affinity Table: at the head of each column is a chemical species with which all the species below can combine. Some historians have defined this table as being the start of the chemical revolution.

In the history of chemistry, the chemical revolution, also called the first chemical revolution, was the reformulation of chemistry during the seventeenth and eighteenth centuries, which culminated in the law of conservation of mass and the oxygen theory of combustion.

During the 19th and 20th centuries, this transformation was credited to the work of the French chemist Antoine Lavoisier (the "father of modern chemistry"). However, recent work on the history of early modern chemistry considers the chemical revolution to consist of gradual changes in chemical theory and practice that emerged over a period of two centuries. The so-called Scientific Revolution took place during the sixteenth and seventeenth centuries, whereas the chemical revolution took place during the seventeenth and eighteenth centuries.

Primary factors

Several factors led to the first chemical revolution. First, there were the forms of gravimetric analysis that emerged from alchemy and the new kinds of instruments that were developed in medical and industrial contexts. In these settings, chemists increasingly challenged hypotheses that had been presented by the ancient Greeks. For example, chemists began to assert that all structures were composed of more than the four elements of the Greeks or the eight elements of the medieval alchemists. The Irish alchemist Robert Boyle laid the foundations for the Chemical Revolution with his mechanical corpuscular philosophy, which in turn relied heavily on the alchemical corpuscular theory and experimental method dating back to pseudo-Geber.

Earlier works by chemists such as Jan Baptist van Helmont helped to shift belief away from the idea that air was a single element toward the view that air is a mixture of distinct kinds of gases. Van Helmont's data analysis also suggests that he had a general understanding of the law of conservation of mass in the 17th century. Furthermore, work by Jean Rey in the early 17th century with metals such as tin and lead, and their oxidation in the presence of air and water, helped pinpoint the contribution and existence of oxygen in the oxidation process.

Other factors included new experimental techniques and the discovery of 'fixed air' (carbon dioxide) by Joseph Black in the middle of the 18th century. This discovery was particularly important because it empirically proved that 'air' did not consist of only one substance and because it established 'gas' as an important experimental substance. Near the end of the 18th century, experiments by Henry Cavendish and Joseph Priestley further proved that air is not an element but is instead composed of several different gases. Lavoisier also translated the names of chemical substances into a new nomenclatural language more appealing to scientists of the nineteenth century. Such changes took place in an atmosphere in which the Industrial Revolution increased public interest in learning and practicing chemistry. When describing the task of reinventing chemical nomenclature, Lavoisier attempted to harness the new centrality of chemistry by making the rather hyperbolic claim that:

We must clean house thoroughly, for they have made use of an enigmatical language peculiar to themselves, which in general presents one meaning for the adepts and another meaning for the vulgar, and at the same time contains nothing that is rationally intelligible either for the one or for the other.

Precision instruments

Much of the reasoning behind Antoine Lavoisier being named the "father of modern chemistry", and behind dating the start of the chemical revolution to him, lay in his ability to mathematize the field, pushing chemistry to use the experimental methods utilized in other "more exact sciences." Lavoisier changed the field of chemistry by keeping meticulous balance sheets in his research, attempting to show that through the transformation of chemical species the total amount of substance was conserved. Lavoisier used instrumentation for thermometric and barometric measurements in his experiments, and collaborated with Pierre Simon de Laplace in the invention of the calorimeter, an instrument for measuring heat changes in a reaction.

In attempting to dismantle phlogiston theory and implement his own theory of combustion, Lavoisier utilized multiple apparatuses. These included a red-hot iron gun barrel designed to have water run through it and decompose, and an alteration of this apparatus that added a pneumatic trough at one end, a thermometer, and a barometer. The precision of his measurements was a requirement for convincing opponents of his theories about water as a compound, and he implemented instrumentation of his own design in his research.

Despite his precise measurements, Lavoisier faced a great deal of opposition to his research. Proponents of phlogiston theory, such as Keir and Priestley, claimed that demonstration of facts applied only to raw phenomena, and that interpretation of those facts did not imply accuracy in theories. They stated that Lavoisier was attempting to impose order on observed phenomena, whereas a secondary source of validity would be required to give definitive proof of the composition of water and the non-existence of phlogiston.

Antoine Lavoisier

The latter stages of the revolution were fuelled by the 1789 publication of Lavoisier's Traité Élémentaire de Chimie (Elements of Chemistry). Beginning with this publication and others to follow, Lavoisier synthesised the work of others and coined the term "oxygen". Antoine Lavoisier represented the chemical revolution not only in his publications, but also in the way he practiced chemistry. Lavoisier's work was characterized by his systematic determination of weights and his strong emphasis on precision and accuracy. While it has been postulated that the law of conservation of mass was discovered by Lavoisier, this claim has been refuted by the scientist Marcellin Berthelot.

Henry Guerlac has suggested earlier use of the law of conservation of mass, noting that Jan Baptist van Helmont implicitly applied the methodology in his work in the 16th and 17th centuries. Earlier references to the law of conservation of mass and its use were made by Jean Rey in 1630. Although Lavoisier did not explicitly discover the law of conservation of mass, his work with a wider array of materials than most scientists had available at the time allowed him to greatly expand the boundaries of the principle and its fundamentals.

Lavoisier also contributed to chemistry a method of understanding combustion and respiration and proof of the composition of water by decomposition into its constituent parts. He explained the theory of combustion, and challenged the phlogiston theory with his views on caloric. The Traité incorporates notions of a "new chemistry" and describes the experiments and reasoning that led to his conclusions. Like Newton's Principia, which was the high point of the Scientific Revolution, Lavoisier's Traité can be seen as the culmination of the Chemical Revolution.

Lavoisier's work was not immediately accepted, and it took several decades for it to gain momentum. This transition was aided by the work of Jöns Jakob Berzelius, who came up with a simplified shorthand to describe chemical compounds based on John Dalton's theory of atomic weights. Many people credit Lavoisier and his overthrow of phlogiston theory as the traditional chemical revolution, with Lavoisier marking the beginning of the revolution and John Dalton marking its culmination.

Méthode de nomenclature chimique

Antoine Lavoisier, in a collaborative effort with Louis Bernard Guyton de Morveau, Claude Louis Berthollet, and Antoine François de Fourcroy, published Méthode de nomenclature chimique in 1787. This work established a terminology for the "new chemistry" Lavoisier was creating, focusing on a standardized set of terms, the establishment of new elements, and experimental work. The Méthode recognized 55 elements, substances that could not be broken down into simpler composite parts at the time of publishing. By introducing new terminology into the field, Lavoisier encouraged other chemists to adopt his theories and practices in order to use his terms and stay current in chemistry.

Traité élémentaire de chimie

One of Lavoisier's main influences was Étienne Bonnot, abbé de Condillac. Condillac's approach to scientific research, which formed the basis of Lavoisier's approach in the Traité, was to demonstrate that human beings could create a mental representation of the world using gathered evidence. In the preface to the Traité, Lavoisier states:

It is a maxim universally admitted in geometry, and indeed in every branch of knowledge, that, in the progress of investigation, we should proceed from known facts to what is unknown. ... In this manner, from a series of sensations, observations, and analyses, a successive train of ideas arises, so linked together, that an attentive observer may trace back to a certain point the order and connection of the whole sum of human knowledge.

Lavoisier clearly ties his ideas to those of Condillac, seeking to reform the field of chemistry. His goal in the Traité was to associate the field with direct experience and observation, rather than assumption. His work defined a new foundation for chemical ideas and set a direction for the future course of chemistry.

Humphry Davy

Humphry Davy was an English chemist and a professor of chemistry at the Royal Institution in London in the early 1800s. There he performed experiments that cast doubt upon some of Lavoisier's key ideas, such as the acidity of oxygen and the idea of a caloric element. Davy was able to show that acidity was not due to the presence of oxygen, using muriatic acid (hydrochloric acid) as proof. He also proved that the compound oxymuriatic acid contained no oxygen and was instead an element, which he named chlorine.

Through his use of electric batteries at the Royal Institution, Davy first isolated chlorine, followed by the isolation of elemental iodine in 1813. Using the batteries, Davy was also able to isolate the elements sodium and potassium. From these experiments Davy concluded that the forces that join chemical elements together must be electrical in nature. Davy also opposed the idea that caloric was an immaterial fluid, arguing instead that heat was a type of motion.

John Dalton

John Dalton was an English chemist who developed the atomic theory of chemical elements. Dalton's atomic theory assumed that each element had unique atoms associated with and specific to that element. This was in opposition to Lavoisier's definition of elements, which was that elements are substances that chemists could not break down further into simpler parts. Dalton's idea also differed from the corpuscular theory of matter, which held that all atoms were the same and which had been a supported theory since the 17th century.

To help support his idea, Dalton worked on defining the relative weights of atoms in his work New System of Chemical Philosophy, published in 1808. His text showed calculations to determine the relative atomic weights of Lavoisier's different elements based on experimental data pertaining to the relative amounts of different elements in chemical combinations. Dalton argued that elements would combine in the simplest form possible. Water was known to be a combination of hydrogen and oxygen, so Dalton believed water to be a binary compound containing one hydrogen atom and one oxygen atom.
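
To see how this reasoning works in practice, here is a minimal Dalton-style sketch in Python, assuming a round 8:1 oxygen-to-hydrogen combining mass ratio for water (Dalton's own early figure was closer to 7:1):

    # Dalton-style relative atomic weights from combining mass ratios, under
    # his rule of greatest simplicity (water assumed binary: one H, one O).
    # The 8:1 mass ratio is an illustrative round figure, not Dalton's datum.

    def relative_weight(mass_ratio_to_hydrogen, atoms_of_element=1, atoms_of_hydrogen=1):
        """Weight of one atom of the element, taking hydrogen = 1."""
        return mass_ratio_to_hydrogen * atoms_of_hydrogen / atoms_of_element

    # Binary assumption HO: all 8 parts of oxygen pair with 1 part of hydrogen.
    print(relative_weight(8))                        # 8.0, Dalton-style oxygen
    # With the correct formula H2O, the same ratio gives oxygen = 16.
    print(relative_weight(8, atoms_of_hydrogen=2))   # 16.0

The same measured ratio yields different atomic weights under different assumed formulas, which is why Dalton's simplicity rule mattered so much to his numbers.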

Dalton was also able to compute accurately the relative quantities of gases in atmospheric air. He used the specific gravities of azotic (nitrogen), oxygenous, carbonic acid (carbon dioxide), and hydrogenous gases, as well as aqueous vapor, as determined by Lavoisier and Davy, to determine the proportional weight of each as a percentage of a whole volume of atmospheric air. Dalton determined that atmospheric air contains 75.55% azotic gas, 23.32% oxygenous gas, 1.03% aqueous vapor, and 0.10% carbonic acid gas.

Jöns Jacob Berzelius

Jöns Jacob Berzelius was a Swedish chemist who studied medicine at the University of Uppsala and was a professor of chemistry in Stockholm. He drew on the ideas of both Davy and Dalton to create an electrochemical view of how elements combine. Berzelius classified elements into two groups, electronegative and electropositive, depending on which pole of a galvanic battery they were released from when decomposed. He created a scale of charge with oxygen as the most electronegative element and potassium the most electropositive. This scale signified that some elements had positive and negative charges associated with them; the position of an element on the scale, together with its charge, determined how that element combined with others.

Berzelius's work on electrochemical atomic theory was published in 1818 as Essai sur la théorie des proportions chimiques et sur l'influence chimique de l'électricité. He also introduced a new chemical nomenclature into chemistry by representing elements with letters and abbreviations, such as O for oxygen and Fe for iron. Combinations of elements were represented as sequences of these symbols, with the numbers of atoms represented at first by superscripts and later by subscripts.
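
As a rough illustration of this symbolic scheme, the toy formatter below builds formula strings from element symbols and atom counts; note that it renders counts as the later subscript convention, not Berzelius's original superscripts (H²O):

    # A toy formatter for Berzelius-style symbolic formulas. The element
    # symbols are Berzelius's; the subscript rendering is the later
    # convention rather than his original superscript notation.

    SUBSCRIPTS = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")

    def formula(components):
        """components: list of (symbol, atom count) pairs, e.g. [("H", 2), ("O", 1)]."""
        parts = []
        for symbol, count in components:
            suffix = str(count).translate(SUBSCRIPTS) if count > 1 else ""
            parts.append(symbol + suffix)
        return "".join(parts)

    print(formula([("H", 2), ("O", 1)]))    # H₂O
    print(formula([("Fe", 2), ("O", 3)]))   # Fe₂O₃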

Fourth Industrial Revolution

Robots in a grocery warehouse

Augmented reality information about a painting

Illustrated understanding of the Internet of things in a battlefield setting

The Fourth Industrial Revolution, also known as 4IR, Industry 4.0 or the Intelligence Age, is a neologism describing rapid technological advancement in the 21st century. It follows the Third Industrial Revolution (the "Information Age"). The term was popularized in 2016 by Klaus Schwab, the World Economic Forum founder and former executive chairman, who asserts that these developments represent a significant shift in industrial capitalism.

A part of this phase of industrial change is the joining of technologies such as artificial intelligence and gene editing with advanced robotics, which blurs the lines between the physical, digital, and biological worlds.

Throughout this, fundamental shifts are taking place in how the global production and supply network operates through ongoing automation of traditional manufacturing and industrial practices, using modern smart technology, large-scale machine-to-machine communication (M2M), and the Internet of things (IoT). This integration results in increasing automation, improving communication and self-monitoring, and the use of smart machines that can analyse and diagnose issues without the need for human intervention.

It also represents a social, political, and economic shift from the digital age of the late 1990s and early 2000s to an era of embedded connectivity distinguished by the ubiquity of technology in society that changes the ways humans experience and know the world around them. It posits that we have created and are entering an augmented social reality compared to just the natural senses and industrial ability of humans alone. The Fourth Industrial Revolution is sometimes expected to mark the beginning of an imagination age, where creativity and imagination become the primary drivers of economic value.

History

The phrase Fourth Industrial Revolution was first introduced by a team of scientists developing a high-tech strategy for the German government. Klaus Schwab, former executive chairman of the World Economic Forum (WEF), introduced the phrase to a wider audience in a 2015 article published by Foreign Affairs. "Mastering the Fourth Industrial Revolution" was the 2016 theme of the World Economic Forum Annual Meeting, in Davos-Klosters, Switzerland.

On 10 October 2016, the Forum announced the opening of its Centre for the Fourth Industrial Revolution in San Francisco. This was also the subject and title of Schwab's 2016 book. Schwab includes in this fourth era technologies that combine hardware, software, and biology (cyber-physical systems), and emphasizes advances in communication and connectivity. Schwab expects this era to be marked by breakthroughs in emerging technologies in fields such as robotics, artificial intelligence, nanotechnology, quantum computing, biotechnology, the internet of things, the industrial internet of things, decentralized consensus, fifth-generation wireless technologies, 3D printing, and fully autonomous vehicles.

In the WEF's Great Reset proposal, the Fourth Industrial Revolution is included as a strategic element of rebuilding the economy sustainably following the COVID-19 pandemic.

First Industrial Revolution

The First Industrial Revolution was marked by a transition from hand production methods to machines through the use of steam power and water power. The implementation of new technologies took a long time, so the period to which this refers fell between 1760 and 1820, or 1840, in Europe and the United States. Its effects were felt first in textile manufacturing, which was the first industry to adopt such changes, as well as in the iron industry, agriculture, and mining; it also had societal effects, with an ever stronger middle class.

Second Industrial Revolution

The Second Industrial Revolution, also known as the Technological Revolution, is the period between 1871 and 1914 that resulted from the installation of extensive railroad and telegraph networks, which allowed for the faster transfer of people and ideas, as well as for electricity. Increasing electrification allowed factories to develop the modern production line.

Third Industrial Revolution

The Third Industrial Revolution, also known as the Digital Revolution, began in the late 20th century. It is characterized by the shift to an economy centered on information technology, marked by the advent of personal computers, the Internet, and the widespread digitalization of communication and industrial processes.

A book by Jeremy Rifkin titled The Third Industrial Revolution, published in 2011, focused on the intersection of digital communications technology and renewable energy. It was made into a 2017 documentary by Vice Media.

Characteristics

In essence, the Fourth Industrial Revolution is the trend towards automation and data exchange in manufacturing technologies and processes, which include cyber-physical systems (CPS), the internet of things (IoT), cloud computing, cognitive computing, and artificial intelligence.

Machines improve human efficiency in performing repetitive functions, and the combination of machine learning and computing power allows machines to carry out increasingly complex tasks.

The Fourth Industrial Revolution has been defined as technological developments in cyber-physical systems such as high-capacity connectivity; new human-machine interaction modes such as touch interfaces and virtual reality systems; improvements in transferring digital instructions to the physical world, including robotics and 3D printing (additive manufacturing); "big data" and cloud computing; and improvements to, and uptake of, off-grid renewable energy (solar, wind, wave, hydroelectric) and electric batteries (lithium-ion storage systems and electric vehicles).

It also emphasizes decentralized decisions – the ability of cyber physical systems to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of exceptions, interference, or conflicting goals, are tasks delegated to a higher level.

Distinctiveness

Proponents of the Fourth Industrial Revolution suggest it is a distinct revolution rather than simply a prolongation of the Third Industrial Revolution. This is due to the following characteristics:

  • Velocity – exponential speed at which incumbent industries are affected and displaced
  • Scope and systems impact – the large amount of sectors and firms that are affected
  • Paradigm shift in technology policy – new policies designed for this new way of doing business are in place. An example is Singapore's formal recognition of Industry 4.0 in its innovation policies.

Critics of the concept dismiss Industry 4.0 as a marketing strategy. They suggest that although revolutionary changes are identifiable in distinct sectors, there is no systemic change so far. In addition, the pace of recognition of Industry 4.0 and of policy transition varies across countries, and the definition of Industry 4.0 is not harmonised. One of the best-known critics is Jeremy Rifkin, who "agree[s] that digitalization is the hallmark and defining technology in what has become known as the Third Industrial Revolution". However, he argues "that the evolution of digitalization has barely begun to run its course and that its new configuration in the form of the Internet of Things represents the next stage of its development".

Components

Self-driving car

The application of the Fourth Industrial Revolution operates through the components described below.

Industry 4.0 networks a wide range of new technologies to create value. Using cyber-physical systems that monitor physical processes, a virtual copy of the physical world can be designed. Characteristics of cyber-physical systems include the ability to make decentralised decisions independently, reaching a high degree of autonomy.

Value creation in Industry 4.0 relies on electronic identification: smart manufacturing requires a set of technologies to be incorporated in the manufacturing process before it can be classified as on the development path of Industry 4.0, rather than mere digitisation.

Smart factories

The Fourth Industrial Revolution fosters "smart factories", which are production environments where facilities and logistics systems are organised with minimal human intervention.

The technical foundations on which smart factories are based are cyber-physical systems that communicate with each other using IoT. An important part of this process is the exchange of data between the product and the production line. This enables more efficient supply chain connectivity and better organisation within a production environment.

Within modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world, and make decentralised decisions. Over the internet of things, cyber-physical systems communicate and cooperate with each other and with humans in real time, both internally and across the organizational services offered and used by participants in the value chain.

Artificial intelligence

Artificial intelligence (AI) has a wide range of applications across all sectors of the economy. It gained prominence following advancements in deep learning during the 2010s, and its impact intensified in the 2020s with the rise of generative AI, a period often referred to as the "AI boom". Models like GPT-4o can engage in verbal and textual discussions and analyze images.

AI is a key driver of Industry 4.0, orchestrating technologies like robotics, automated vehicles, and real-time data analytics. By enabling machines to perform complex tasks, AI is redefining production processes and reducing changeover times. AI could also significantly accelerate, or even automate, software development.

Some experts believe that AI alone could be as transformative as an industrial revolution. Multiple companies such as OpenAI and Meta have expressed the goal of creating artificial general intelligence (AI that can do virtually any cognitive task a human can), making large investments in data centers and GPUs to train more capable AI models.

Robotics

Humanoid robots have traditionally lacked usefulness. They had difficulty picking up simple objects due to imprecise control and coordination, and they did not understand their environment or how physics works. They were often explicitly programmed to do narrow tasks, failing when encountering new situations. Modern humanoid robots, however, are typically based on machine learning, and in particular reinforcement learning. As of 2024, humanoid robots are rapidly becoming more flexible, easier to train, and more versatile.

Predictive maintenance

Industry 4.0 facilitates predictive maintenance, due to the use of advanced technologies, including IoT sensors. Predictive maintenance, which can identify potential maintenance issues in real time, allows machine owners to perform cost-effective maintenance before the machinery fails or gets damaged. For example, a company in Los Angeles could understand if a piece of equipment in Singapore is running at an abnormal speed or temperature. They could then decide whether or not it needs to be repaired.
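
As a minimal sketch of that remote-monitoring idea (the sensor names and operating bands below are hypothetical, not taken from any particular vendor):

    # Flag a remote machine whose IoT sensor readings drift outside its
    # normal operating band. Sensor names and thresholds are hypothetical.

    NORMAL_RANGES = {
        "spindle_rpm": (1400, 1600),      # expected speed band
        "bearing_temp_c": (20.0, 75.0),   # expected temperature band
    }

    def maintenance_alerts(readings):
        """Return the sensors whose latest reading falls outside its band."""
        alerts = []
        for sensor, value in readings.items():
            low, high = NORMAL_RANGES[sensor]
            if not low <= value <= high:
                alerts.append((sensor, value))
        return alerts

    # A reading streamed from a distant site, e.g. the Singapore machine above.
    print(maintenance_alerts({"spindle_rpm": 1720, "bearing_temp_c": 68.0}))
    # [('spindle_rpm', 1720)] -> schedule maintenance before failure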

3D printing

The Fourth Industrial Revolution is said to depend extensively on 3D printing technology. Some advantages of 3D printing for industry are that it can print many geometric structures and simplify the product design process. It is also relatively environmentally friendly. In low-volume production, it can decrease lead times and total production costs. Moreover, it can increase flexibility, reduce warehousing costs, and help a company adopt a mass customisation business strategy. In addition, 3D printing can be very useful for printing spare parts and installing them locally, reducing supplier dependence and supply lead times.

Smart sensors

Sensors and instrumentation drive the central forces of innovation, not only for Industry 4.0 but also for other "smart" megatrends, such as smart production, smart mobility, smart homes, smart cities, and smart factories.

Smart sensors are devices that generate data and provide functionality ranging from self-monitoring and self-configuration to condition monitoring of complex processes. With the capability of wireless communication, they greatly reduce installation effort and help realise dense arrays of sensors.

The importance of sensors, measurement science, and smart evaluation for Industry 4.0 has been recognised and acknowledged by various experts and has already led to the statement "Industry 4.0: nothing goes without sensor systems."

However, there are a few issues, such as time synchronisation error, data loss, and dealing with large amounts of harvested data, which all limit the implementation of full-fledged systems. Moreover, battery power places additional limits on these functionalities. One example of the integration of smart sensors in electronic devices is the smart watch, where sensors receive data from the movement of the user, process the data, and as a result provide the user with information about how many steps they have walked in a day, also converting the data into calories burned.
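
In outline, that step-counting pipeline reduces to accumulation and a unit conversion; the sketch below assumes a flat 0.04 kcal-per-step factor purely for illustration, whereas real devices calibrate per user:

    # Toy smart-watch pipeline: accumulate detected steps, convert to calories.
    # The 0.04 kcal-per-step factor is an illustrative assumption.

    KCAL_PER_STEP = 0.04

    def daily_summary(step_events):
        """step_events: steps detected in each sensing window over the day."""
        steps = sum(step_events)
        return steps, steps * KCAL_PER_STEP

    steps, kcal = daily_summary([412, 1875, 960, 2203, 531])
    print(f"{steps} steps = about {kcal:.0f} kcal")   # 5981 steps = about 239 kcal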

Agriculture and food industries

Hydroponic vertical farming

Smart sensors in these two fields are still in the testing stage. These connected sensors collect, interpret, and communicate the information available in the plots (leaf area, vegetation index, chlorophyll, hygrometry, temperature, water potential, radiation). Based on this scientific data, the objective is to enable real-time monitoring via a smartphone, with a range of advice that optimises plot management in terms of results, time, and costs. On the farm, these sensors can be used to detect crop stages, recommend inputs and treatments at the right time, and control the level of irrigation.

The food industry demands ever more security and transparency, and full documentation is required. The new technology is used as a tracking system as well as for the collection of human data and product data.

Accelerated transition to the knowledge economy

A knowledge economy is an economic system in which production and services are largely based on knowledge-intensive activities that contribute to an accelerated pace of technical and scientific advance, as well as rapid obsolescence. Industry 4.0 aids the transition to a knowledge economy by increasing reliance on intellectual capabilities rather than on physical inputs or natural resources.

Challenges

Challenges in the implementation of Industry 4.0 include:

Economic

  • High economic cost
  • Business model adaptation
  • Unclear economic benefits/excessive investment
  • Significant economic change driven by automation and technological advancement, leading to both job displacement and the creation of new roles and necessitating widespread workforce reskilling and systemic adaptation

Social

Political

  • Lack of regulation, standards, and forms of certifications
  • Unclear legal issues and data security

Organizational

  • IT security issues, which are greatly aggravated by the inherent need to open up previously closed production shops
  • Reliability and stability needed for critical machine-to-machine communication (M2M), including very short and stable latency times
  • Need to maintain the integrity of production processes
  • Need to avoid any IT snags, as those would cause expensive production outages
  • Need to protect industrial know-how (contained also in the control files for the industrial automation gear)
  • Lack of adequate skill-sets to expedite the transition towards Industry 4.0
  • Low top management commitment
  • Insufficient qualification of employees

Country applications

Many countries have set up institutional mechanisms to foster the adoption of Industry 4.0 technologies. For example:

Australia

Australia has a Digital Transformation Agency (est. 2015) and the Prime Minister's Industry 4.0 Taskforce (est. 2016), which promotes collaboration with industry groups in Germany and the USA.

Brazil

Brazil's embrace of Industry 4.0 technologies has been a slow and inconsistent process. Initial assessments clearly indicated a considerable gap in digital preparedness among the nation's industrial businesses. A significant survey, conducted by the National Confederation of Industry, revealed concerning statistics: 42% of Brazilian companies were completely unaware of how crucial digital technologies are for industrial competitiveness. Furthermore, a substantial 46% either weren't utilizing these technologies or were unsure about their application. These findings collectively underscored a widespread lack of awareness and readiness for digital transformation across the Brazilian industrial landscape.

Germany

The term "Industrie 4.0", shortened to I4.0 or simply I4, originated in 2011 from a project in the high-tech strategy of the German government and specifically relates to that project policy, rather than a wider notion of a Fourth Industrial Revolution of 4IR, which promotes the computerisation of manufacturing. The term "Industrie 4.0" was publicly introduced in the same year at the Hannover Fair. German professor Wolfgang Wahlster is sometimes called the inventor of the "Industry 4.0" term. In October 2012, the Working Group on Industry 4.0 presented a set of Industry 4.0 implementation recommendations to the German federal government. The workgroup members and partners are recognised as the founding fathers and driving force behind Industry 4.0. On 8 April 2013 at the Hannover Fair, the final report of the Working Group Industry 4.0 was presented. This working group was headed by Siegfried Dais, of Robert Bosch GmbH, and Henning Kagermann, of the German Academy of Science and Engineering.

As Industry 4.0 principles have been applied by companies, they have sometimes been rebranded. For example, the aerospace parts manufacturer Meggitt PLC has branded its own Industry 4.0 research project M4.

In Germany, the impact of the shift to Industry 4.0 (and especially digitisation) on the labour market has been discussed under the heading "Work 4.0".

The federal government in Germany leads the development of I4.0 policy through its ministries, the Federal Ministry of Education and Research (BMBF) and the Federal Ministry for Economic Affairs (BMWi). By publishing objectives and goals for enterprises to achieve, the German federal government attempts to set the direction of the digital transformation. However, there is a gap between German enterprises' engagement with and knowledge of these policies. The biggest challenge SMEs in Germany currently face regarding the digital transformation of their manufacturing processes is ensuring that a concrete IT and application landscape is in place to support further digital transformation efforts.

The characteristics of the German government's Industry 4.0 strategy involve the strong customisation of products under the conditions of highly flexible (mass) production. The required automation technology is improved by the introduction of methods of self-optimization, self-configuration, self-diagnosis, and cognition, and by the intelligent support of workers in their increasingly complex work. The largest Industry 4.0 project as of July 2013 was the BMBF leading-edge cluster "Intelligent Technical Systems Ostwestfalen-Lippe (it's OWL)". Another major project is the BMBF project RES-COM, as well as the Cluster of Excellence "Integrative Production Technology for High-Wage Countries". In 2015, the European Commission started the international Horizon 2020 research project CREMA (cloud-based rapid elastic manufacturing) as a major initiative to foster the Industry 4.0 topic.

Estonia

In Estonia, the digital transformation dubbed the Fourth Industrial Revolution by Klaus Schwab and the World Economic Forum in 2015 started with the restoration of independence in 1991. Although a latecomer to the information revolution due to 50 years of Soviet occupation, Estonia leapfrogged to the digital era, skipping analogue connections almost completely. The early decisions made by Prime Minister Mart Laar on the course of the country's economic development led to the establishment of what is today known as e-Estonia, one of the world's most digitally advanced nations.

According to the goals set in Estonia's Digital Agenda 2030, the next advances in the country's digital transformation will involve switching to event-based and proactive services, both in private and business environments, as well as developing a green, AI-powered, and human-centric digital government.

Indonesia

Another example is the Indonesian initiative Making Indonesia 4.0, which focuses on improving industrial performance.

India

India, with its expanding economy and extensive manufacturing sector, has embraced the digital revolution, leading to significant advancements in manufacturing. The Indian program for Industry 4.0 centers around leveraging technology to produce globally competitive products at cost-effective rates while adopting the latest technological advancements of Industry 4.0.

Japan

Society 5.0 envisions a society that prioritizes the well-being of its citizens, striking a harmonious balance between economic progress and the effective addressing of societal challenges through a closely interconnected system spanning both the digital realm and the physical world. This concept was introduced in the 5th Science and Technology Basic Plan of the Japanese government as a blueprint for a forthcoming societal framework.

Malaysia

Malaysia's national policy on Industry 4.0 is known as Industry4WRD. Launched in 2018, key initiatives in this policy include enhancing digital infrastructure, equipping the workforce with 4IR skills, and fostering innovation and technology adoption across industries.

South Africa

South Africa appointed a Presidential Commission on the Fourth Industrial Revolution in 2019, consisting of about 30 stakeholders with backgrounds in academia, industry, and government. South Africa has also established an Inter-Ministerial Committee on Industry 4.0.

A nationwide survey of 577 lecturers in Technical Engineering at 52 TVET college campuses across South Africa found that 52.3% of participants were unaware of technological advancements in their area of specialization and their potential impact on technical training. These findings indicate that South African TVET lecturers have limited awareness of the technological advancements needed to participate effectively in the 4IR era. Accordingly, South African Minister of Higher Education, Blade Nzimande, placed the upskilling of TVET lecturers' 4IR-related skills high on the ministry's agenda.

South Korea

The Republic of Korea has had a Presidential Committee on the Fourth Industrial Revolution since 2017. The Republic of Korea's I-Korea strategy (2017) is focusing on new growth engines that include AI, drones, and autonomous cars, in line with the government's innovation-driven economic policy.

Uganda

Uganda adopted its own National 4IR Strategy in October 2020, with emphasis on e-governance, urban management (smart cities), healthcare, education, agriculture, and the digital economy. To support local businesses, the government was contemplating introducing a local start-ups bill in 2020 that would require all accounting officers to exhaust the local market before procuring digital solutions from abroad.

United Kingdom

In a 2019 policy paper titled "Regulation for the Fourth Industrial Revolution", the UK's Department for Business, Energy & Industrial Strategy outlined the need to evolve current regulatory models to remain competitive in evolving technological and social settings.

United States

In 2019 the Department of Homeland Security published a paper called "The Industrial Internet of Things (IIoT): Opportunities, Risks, Mitigation". The base pieces of critical infrastructure are increasingly digitised for greater connectivity and optimisation; hence, their implementation, growth, and maintenance must be carefully planned and safeguarded. The paper discusses not only applications of IIoT but also the associated risks, and suggests some key areas where risk mitigation is possible. To increase coordination between the public and private sectors, law enforcement, academia, and other stakeholders, the DHS formed the National Cybersecurity and Communications Integration Center (NCCIC).

Industry applications

The aerospace industry has sometimes been characterised as "too low volume for extensive automation". However, Industry 4.0 principles have been investigated by several aerospace companies, and technologies have been developed to improve productivity where the upfront cost of automation cannot be justified. One example of this is the aerospace parts manufacturer Meggitt PLC's M4 project.

The increasing use of the industrial internet of things is referred to as Industry 4.0 at Bosch, and generally in Germany. Applications include machines that can predict failures and trigger maintenance processes autonomously, and self-organised coordination that reacts to unexpected changes in production. In 2017, Bosch launched the Connectory, a Chicago, Illinois-based innovation incubator that specializes in IoT, including Industry 4.0.

Industry 4.0 inspired Innovation 4.0, a move toward digitisation in academia and research and development. In 2017, the £81M Materials Innovation Factory (MIF) at the University of Liverpool opened as a center for computer-aided materials science, where robotic formulation, data capture, and modelling are being integrated into development practices.

Criticism

With the consistent automation of everyday tasks, some saw benefit in the exact opposite of automation, whereby self-made products are valued more than those whose production involved automation. This valuation is named the IKEA effect, a term coined by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale, and Dan Ariely of Duke. Another problem expected to accelerate with the growth of 4IR is the prevalence of mental disorders, a known issue among high-tech operators. 4IR has also sparked significant criticism regarding AI bias and ethical issues, as algorithms used in decision-making processes often perpetuate existing social inequalities, disproportionately impacting marginalized groups while lacking transparency and accountability.

Future

Industry 5.0

Industry 5.0 has been proposed as a strategy to create a paradigm shift in the industrial landscape, in which the primary focus is no longer increasing efficiency but rather promoting the well-being of society and the sustainability of the economy and industrial production. This more "human-friendly" production is the expected outcome of Industry 4.0 (less labour, fewer facilities, fewer materials) evolving towards smaller, localised, flexible just-in-time manufacturing facilities, since long-distance transportation and distribution will become the greatest remaining costs to reduce.

Unconventional computing


Unconventional computing (also known as alternative computing or nonstandard computation) is computing by any of a wide range of new or unusual methods.

The term unconventional computation was coined by Cristian S. Calude and John Casti and used at the First International Conference on Unconventional Models of Computation in 1998.

Background

The general theory of computation allows for a variety of methods of computation. Computing technology was first developed using mechanical systems and then evolved into the use of electronic devices. Other fields of modern physics provide additional avenues for development.

Models of computation

A model of computation describes how the output of a mathematical function is computed given its input. The model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.

A wide variety of models are commonly used; some closely resemble the workings of (idealized) conventional computers, while others do not. Some commonly used models are register machines, random-access machines, Turing machines, lambda calculus, rewriting systems, digital circuits, cellular automata, and Petri nets.
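
As a concrete instance of one of these models, here is a minimal Turing machine simulator; the machine, its transition table, and the halting convention are illustrative choices rather than any canonical construction:

    # A minimal Turing machine. The transition table maps
    # (state, symbol) -> (new state, symbol to write, head move).
    # This toy machine inverts a binary string and then halts on the blank.

    def run_turing_machine(tape, rules, state="start", blank="_"):
        tape = list(tape)
        head = 0
        while state != "halt":
            symbol = tape[head] if head < len(tape) else blank
            state, write, move = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += 1 if move == "R" else -1
        return "".join(tape).rstrip(blank)

    invert = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }
    print(run_turing_machine("10110", invert))   # 01001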

Mechanical computing

Hamann Manus R, a digital mechanical calculator

Historically, mechanical computers were used in industry before the advent of the transistor.

Mechanical computers retain some interest today, both in research and as analogue computers. Some mechanical computers have a theoretical or didactic relevance, such as billiard-ball computers, while hydraulic ones like the MONIAC or the Water integrator were used effectively.

Analog computing

An analog computer is a type of computer that uses analog signals, which are continuous physical quantities, to model and solve problems. These signals can be electrical, mechanical, or hydraulic in nature. Analog computers were widely used in scientific and industrial applications, and were often faster than digital computers at the time. However, they started to become obsolete in the 1950s and 1960s and are now mostly used in specific applications such as aircraft flight simulators and teaching control systems in universities. Examples of analog computing devices include slide rules, nomograms, and complex mechanisms for process control and protective relays. The Antikythera mechanism, a mechanical device that calculates the positions of planets and the Moon, and the planimeter, a mechanical integrator for calculating the area of an arbitrary 2D shape, are also examples of analog computing.

Electronic digital computers

Most modern computers are electronic computers with the Von Neumann architecture based on digital electronics, with extensive integration made possible following the invention of the transistor and the scaling of Moore's law.

Unconventional computing is, according to the website of the Center for Nonlinear Studies announcing the conference Unconventional Computation: Quo Vadis? (March 21–23, 2007, Santa Fe, New Mexico, USA), "an interdisciplinary research area with the main goal to enrich or go beyond the standard models, such as the Von Neumann computer architecture and the Turing machine, which have dominated computer science for more than half a century". These methods model their computational operations on nonstandard paradigms and are currently mostly in the research and development stage.

This computing behavior can be "simulated" using classical silicon-based micro-transistors or solid state computing technologies, but it aims to achieve a new kind of computing.

Generic approaches

The following are counterintuitive, pedagogical demonstrations that a computer can be made out of almost anything.

Physical objects

An OR gate built from dominoes

A billiard-ball computer is a type of mechanical computer that uses the motion of spherical billiard balls to perform computations. In this model, the wires of a Boolean circuit are represented by paths for the balls to travel on, the presence or absence of a ball on a path encodes the signal on that wire, and gates are simulated by collisions of balls at points where their paths intersect.

A domino computer is a mechanical computer that uses standing dominoes to represent the amplification or logic gating of digital signals. These constructs can be used to demonstrate digital concepts and can even be used to build simple information processing modules.

Both billiard-ball computers and domino computers are examples of unconventional computing methods that use physical objects to perform computation.
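
Both gates reduce to familiar truth tables, as the sketch below shows; the functions are simple stand-ins for the mechanics (a collision requires both balls, and a domino run falls if either input chain reaches it):

    # Truth-table stand-ins for the two mechanical gates described above.

    def billiard_and(a, b):
        return a & b    # a collision happens only when both balls are present

    def domino_or(a, b):
        return a | b    # the output line falls if either input chain topples

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", billiard_and(a, b), domino_or(a, b))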

Reservoir computing

Reservoir computing is a computational framework derived from recurrent neural network theory that involves mapping input signals into higher-dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir. The reservoir, which can be virtual or physical, is made up of individual non-linear units that are connected in recurrent loops, allowing it to store information. Training is performed only at the readout stage, as the reservoir dynamics are fixed, and this framework allows for the use of naturally available systems, both classical and quantum mechanical, to reduce the effective computational cost. One key benefit of reservoir computing is that it allows for a simple and fast learning algorithm, as well as hardware implementation through physical reservoirs.
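
A minimal echo state network illustrates the recipe, assuming the standard setup of a fixed random reservoir with only a linear readout trained (here by ridge regression); the sizes, scales, and the sine-prediction task are illustrative:

    # Minimal echo state network: fixed random reservoir, trained readout.
    import numpy as np

    rng = np.random.default_rng(0)
    n_res = 100
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # spectral radius below 1

    def run_reservoir(inputs):
        """Drive the fixed nonlinear reservoir and collect its states."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W_in[:, 0] * u + W @ x)  # dynamics are never trained
            states.append(x.copy())
        return np.array(states)

    # Task: predict a phase-shifted sine from the raw sine.
    t = np.linspace(0, 20, 500)
    u, y = np.sin(t), np.sin(t + 0.3)
    X = run_reservoir(u)
    # Only this readout is learned (ridge regression on reservoir states).
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    print("mean squared error:", np.mean((X @ W_out - y) ** 2))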

Tangible computing

SandScape, a tangible computing device installed in the Children's Creativity Museum in San Francisco

Tangible computing refers to the use of physical objects as user interfaces for interacting with digital information. This approach aims to take advantage of the human ability to grasp and manipulate physical objects in order to facilitate collaboration, learning, and design. Characteristics of tangible user interfaces include the coupling of physical representations to underlying digital information and the embodiment of mechanisms for interactive control. Five defining properties of tangible user interfaces are: the ability to multiplex both input and output in space, concurrent access and manipulation of interface components, strong specific devices, spatially aware computational devices, and spatial reconfigurability of devices.

Human computing

The term "human computer" refers to individuals who perform mathematical calculations manually, often working in teams and following fixed rules. In the past, teams of people were employed to perform long and tedious calculations, and the work was divided to be completed in parallel. The term has also been used more recently to describe individuals with exceptional mental arithmetic skills, also known as mental calculators.

Human–robot interaction


Human–robot interaction, or HRI, is the study of interactions between humans and robots. It involves contributions from fields such as artificial intelligence, robotics, and psychology. Cobots, or collaborative robots, are designed for direct interaction with humans within shared spaces and can be used for a variety of tasks, including information provision, logistics, and unergonomic tasks in industrial environments.

Swarm computing

Swarm robotics is a field of study that focuses on the coordination and control of multiple robots as a system. Inspired by the emergent behavior observed in social insects, swarm robotics involves the use of relatively simple individual rules to produce complex group behaviors through local communication and interaction with the environment. This approach is characterized by the use of large numbers of simple robots and promotes scalability through the use of local communication methods such as radio frequency or infrared.

Physics approaches

Optical computing

Realization of a photonic controlled-NOT gate for use in quantum computing

Optical computing is a type of computing that uses light waves, often produced by lasers or incoherent sources, for data processing, storage, and communication. While this technology has the potential to offer higher bandwidth than traditional computers, which use electrons, optoelectronic devices can consume a significant amount of energy in the process of converting electronic energy to photons and back. All-optical computers aim to eliminate the need for these conversions, leading to reduced electrical power consumption. Applications of optical computing include synthetic-aperture radar and optical correlators, which can be used for object detection, tracking, and classification.

Spintronics

Spintronics is a field of study that involves the use of the intrinsic spin and magnetic moment of electrons in solid-state devices. It differs from traditional electronics in that it exploits the spin of electrons as an additional degree of freedom, which has potential applications in data storage and transfer, as well as quantum and neuromorphic computing. Spintronic systems are often created using dilute magnetic semiconductors and Heusler alloys.

Atomtronics

Atomtronics is a form of computing that involves the use of ultra-cold atoms in coherent matter-wave circuits, which can have components similar to those found in electronic or optical systems. These circuits have potential applications in several fields, including fundamental physics research and the development of practical devices such as sensors and quantum computers.

Fluidics

A flip flop made using fluidics

Fluidics, or fluidic logic, is the use of fluid dynamics to perform analog or digital operations in environments where electronics may be unreliable, such as those exposed to high levels of electromagnetic interference or ionizing radiation. Fluidic devices operate without moving parts and can use nonlinear amplification, similar to transistors in electronic digital logic. Fluidics are also used in nanotechnology and military applications.

Quantum computing

Quantum computing, perhaps the most well-known and developed unconventional computing method, is a type of computation that utilizes the principles of quantum mechanics, such as superposition and entanglement, to perform calculations. Quantum computers use qubits, which are analogous to classical bits but can exist in multiple states simultaneously, to perform operations. While current quantum computers may not yet outperform classical computers in practical applications, they have the potential to solve certain computational problems, such as integer factorization, significantly faster than classical computers. However, there are several challenges to building practical quantum computers, including the difficulty of maintaining qubits' quantum states and the need for error correction. Quantum complexity theory is the study of the computational complexity of problems with respect to quantum computers.
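
The two principles named above can be seen in a few lines of linear algebra; this state-vector sketch (a standard textbook construction, not tied to any particular hardware) applies a Hadamard gate and then a CNOT to produce the entangled Bell state (|00> + |11>)/sqrt(2):

    # Tiny state-vector simulation of superposition and entanglement.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.zeros(4)
    state[0] = 1.0                      # start in |00>
    state = np.kron(H, I) @ state       # superpose the first qubit
    state = CNOT @ state                # entangle the two qubits
    print(np.round(state, 3))           # [0.707 0. 0. 0.707] -> Bell state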

Neuromorphic quantum computing

Neuromorphic Quantum Computing (abbreviated as 'n.quantum computing') is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It was suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing.

Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional approaches to computation and do not follow the von Neumann architecture. Both construct a system (a circuit) that represents the physical problem at hand, and then leverage the physical properties of that system to seek the "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation.


Superconducting computing

Superconducting computing is a form of cryogenic computing that utilizes the unique properties of superconductors, including zero resistance wires and ultrafast switching, to encode, process, and transport data using single flux quanta. It is often used in quantum computing and requires cooling to cryogenic temperatures for operation.

Microelectromechanical systems

Microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS) are technologies that involve the use of microscopic devices with moving parts, ranging in size from micrometers to nanometers. These devices typically consist of a central processing unit (such as an integrated circuit) and several components that interact with their surroundings, such as sensors. MEMS and NEMS technology differ from molecular nanotechnology or molecular electronics in that they also consider factors such as surface chemistry and the effects of ambient electromagnetism and fluid dynamics. Applications of these technologies include accelerometers and sensors for detecting chemical substances.

Chemistry approaches

Graphical representation of a rotaxane, useful as a molecular switch

Molecular computing

Molecular computing is an unconventional form of computing that utilizes chemical reactions to perform computations. Data is represented by variations in chemical concentrations, and the goal of this type of computing is to use the smallest stable structures, such as single molecules, as electronic components. This field, also known as chemical computing or reaction-diffusion computing, is distinct from the related fields of conductive polymers and organic electronics, which use molecules to affect the bulk properties of materials.

Biochemistry approaches

Peptide computing

Peptide computing is a computational model that uses peptides and antibodies to solve NP-complete problems and has been shown to be computationally universal. It offers advantages over DNA computing, such as a larger number of building blocks and more flexible interactions, but has not yet been practically realized due to the limited availability of specific monoclonal antibodies.

DNA computing

DNA computing is a branch of unconventional computing that uses DNA and molecular biology hardware to perform calculations. It is a form of parallel computing that can solve certain specialized problems faster and more efficiently than traditional electronic computers. While DNA computing does not provide any new capabilities in terms of computability theory, it can perform a high number of parallel computations simultaneously. However, DNA computing has slower processing speeds, and it is more difficult to analyze the results compared to digital computers.

Membrane computing

Nine Region Membrane Computer

Membrane computing, also known as P systems, is a subfield of computer science that studies distributed and parallel computing models based on the structure and function of biological membranes. In these systems, objects such as symbols or strings are processed within compartments defined by membranes, and the communication between compartments and with the external environment plays a critical role in the computation. P systems are hierarchical and can be represented graphically, with rules governing the production, consumption, and movement of objects within and between regions. While these systems have largely remained theoretical, some have been shown to have the potential to solve NP-complete problems and have been proposed as hardware implementations for unconventional computing.

Biological approaches

Biological computing, also known as bio-inspired computing or natural computation, is the study of using models inspired by biology to solve computer science problems, particularly in the fields of artificial intelligence and machine learning. It encompasses a range of computational paradigms including artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, and more, which can be implemented using traditional electronic hardware or alternative physical media such as biomolecules or trapped-ion quantum computing devices. It also includes the study of understanding biological systems through engineering semi-synthetic organisms and viewing natural processes as information processing. The concept of the universe itself as a computational mechanism has also been proposed.

Neuroscience

Neuromorphic computing involves using electronic circuits to mimic the neurobiological architectures found in the human nervous system, with the goal of creating artificial neural systems that are inspired by biological ones. These systems can be implemented using a variety of hardware, such as memristors, spintronic memories, and transistors, and can be trained using a range of software-based approaches, including error backpropagation and canonical learning rules. The field of neuromorphic engineering seeks to understand how the design and structure of artificial neural systems affect their computation, representation of information, adaptability, and overall function, with the ultimate aim of creating systems that exhibit properties similar to those found in nature. Wetware computers, which are composed of living neurons, are a conceptual form of neuromorphic computing that has been explored in limited prototypes. Electron microscopy already produces high-resolution anatomical diagrams of neural connections, and semiconductor-chip-based intracellular recording at scale can generate physical neural connection maps that specify connection types and strengths; these imaging and recording technologies can inform neuromorphic system design.

Cellular automata and amorphous computing

Gosper's Glider Gun creating "gliders" in the cellular automaton Conway's Game of Life

Cellular automata are discrete models of computation consisting of a grid of cells in a finite number of states, such as on and off. The state of each cell is determined by a fixed rule based on the states of the cell and its neighbors. There are four primary classifications of cellular automata, ranging from patterns that stabilize into homogeneity to those that become extremely complex and potentially Turing-complete. Amorphous computing refers to the study of computational systems using large numbers of parallel processors with limited computational ability and local interactions, regardless of the physical substrate. Examples of naturally occurring amorphous computation can be found in developmental biology, molecular biology, neural networks, and chemical engineering. The goal of amorphous computation is to understand and engineer novel systems through the characterization of amorphous algorithms as abstractions.
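
The Game of Life rule in the figure above is easy to state in code; this small sketch applies one synchronous update on a wrapped (toroidal) grid, an implementation choice made here for brevity:

    # One synchronous update of Conway's Game of Life on a toroidal grid.

    def life_step(grid):
        rows, cols = len(grid), len(grid[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                # Birth on exactly 3 live neighbours; survival on 2 or 3.
                new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
        return new

    # A "blinker" oscillates between horizontal and vertical.
    grid = [[0] * 5 for _ in range(5)]
    for c in (1, 2, 3):
        grid[2][c] = 1
    nxt = life_step(grid)
    print([(r, c) for r in range(5) for c in range(5) if nxt[r][c]])
    # [(1, 2), (2, 2), (3, 2)] -> the blinker has turned vertical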

Evolutionary computation

Evolutionary computation is a type of artificial intelligence and soft computing that uses algorithms inspired by biological evolution to find optimized solutions to a wide range of problems. It involves generating an initial set of candidate solutions, stochastically removing less desired solutions, and introducing small random changes to create a new generation. The population of solutions is subjected to natural or artificial selection and mutation, resulting in evolution towards increased fitness according to the chosen fitness function. Evolutionary computation has proven effective in various problem settings and has applications in both computer science and evolutionary biology.
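
The generate-select-mutate loop described above fits in a few lines; this sketch maximises a toy one-dimensional fitness function, with population size, mutation scale, and truncation selection all chosen arbitrarily for illustration:

    # Minimal evolutionary loop: selection plus random mutation.
    import random

    def evolve(fitness, pop_size=30, generations=100, sigma=0.5):
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half as parents.
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]
            # Variation: each parent yields a mutated child.
            population = parents + [p + random.gauss(0, sigma) for p in parents]
        return max(population, key=fitness)

    # Maximise f(x) = -(x - 3)^2; the optimum is x = 3.
    print(round(evolve(lambda x: -(x - 3) ** 2), 2))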

Mathematical approaches

Ternary computing

Ternary computing is a type of computing that uses ternary logic, or base 3, in its calculations rather than the more common binary system. Ternary computers use trits, or ternary digits, which can be defined in several ways, including unbalanced ternary, fractional unbalanced ternary, balanced ternary, and unknown-state logic. Ternary quantum computers use qutrits instead of trits. Ternary computing has largely been replaced by binary computers, but it has been proposed for use in high-speed, low-power consumption devices using the Josephson junction as a balanced ternary memory cell.
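
Balanced ternary, one of the encodings named above, uses digits -1, 0, and +1, so negative numbers need no separate sign; a small encoder/decoder sketch:

    # Balanced ternary: digits -1, 0, +1 (often written -, 0, +).

    def to_balanced_ternary(n):
        """Balanced-ternary digits of an integer, most significant first."""
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:            # digit 2 becomes -1, carrying 1 upward
                digits.append(-1)
                n = n // 3 + 1
            else:
                digits.append(r)
                n //= 3
        return digits[::-1]

    def from_balanced_ternary(digits):
        value = 0
        for d in digits:
            value = value * 3 + d
        return value

    print(to_balanced_ternary(8))                            # [1, 0, -1] = 9 - 1
    print(from_balanced_ternary(to_balanced_ternary(-25)))   # -25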

Reversible computing

Reversible computing is a type of unconventional computing where the computational process can be reversed to some extent. In order for a computation to be reversible, the relation between states and their successors must be one-to-one, and the process must not result in an increase in physical entropy. Quantum circuits are reversible as long as they do not collapse quantum states, and reversible functions are bijective, meaning they have the same number of inputs as outputs.
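
A standard example is the Toffoli gate, which flips its target bit only when both control bits are 1; the check below confirms the one-to-one (bijective) relation between inputs and outputs that the paragraph requires:

    # The Toffoli gate is reversible: a bijection on 3-bit states, and
    # its own inverse, so applying it twice restores the input.

    def toffoli(a, b, c):
        return a, b, c ^ (a & b)

    states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    outputs = [toffoli(*s) for s in states]
    assert sorted(outputs) == sorted(states)                  # one-to-one
    assert all(toffoli(*toffoli(*s)) == s for s in states)    # self-inverse
    print("Toffoli is a reversible (bijective) gate")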

Chaos computing

Chaos computing is a type of unconventional computing that utilizes chaotic systems to perform computation. Chaotic systems can be used to create logic gates and can be rapidly switched between different patterns, making them useful for fault-tolerant applications and parallel computing. Chaos computing has been applied to various fields such as meteorology, physiology, and finance.
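
As a toy illustration in the spirit of published "chaogate" designs (the encoding and threshold below are illustrative choices, not a published construction), two input bits can set the initial condition of the chaotic logistic map, with the output read off by thresholding after one iteration:

    # Toy "chaogate": inputs set the logistic map's initial condition,
    # one iteration is run, and the output is decoded by a threshold.

    def chaogate(a, b, r=4.0, threshold=0.9):
        x = 0.25 + 0.25 * (a + b)   # encode the two input bits
        x = r * x * (1 - x)         # one chaotic logistic-map iteration
        return int(x > threshold)   # decode the output bit

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", chaogate(a, b))   # realises XOR with these choices

Changing the threshold or the encoding yields different gates from the same map, which is the switching flexibility the paragraph describes.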

Stochastic computing

Stochastic computing is a method of computation that represents continuous values as streams of random bits and performs complex operations using simple bit-wise operations on the streams. It can be viewed as a hybrid analog/digital computer and is characterized by its progressive precision property, where the precision of the computation increases as the bit stream is extended. Stochastic computing can be used in iterative systems to achieve faster convergence, but it can also be costly due to the need for random bit stream generation and is vulnerable to failure if the assumption of independent bit streams is not met. It is also limited in its ability to perform certain digital functions.
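
The core trick is that an AND of two independent random bit streams has a ones-density equal to the product of the inputs' densities; a short sketch, with the stream length chosen arbitrarily:

    # Stochastic multiplication: encode values in [0, 1] as random bit
    # streams; a bit-wise AND of independent streams multiplies them.
    import random

    def to_stream(p, n):
        return [random.random() < p for _ in range(n)]

    def stochastic_multiply(p, q, n=100_000):
        a, b = to_stream(p, n), to_stream(q, n)
        ones = sum(x & y for x, y in zip(a, b))
        return ones / n             # precision grows with stream length

    print(stochastic_multiply(0.5, 0.8))   # about 0.4, up to sampling noise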
