
Thursday, November 25, 2021

Xenobiology

From Wikipedia, the free encyclopedia

Xenobiology (XB) is a subfield of synthetic biology, the study of synthesizing and manipulating biological devices and systems. The name "xenobiology" derives from the Greek word xenos, which means "stranger, alien". Xenobiology is a form of biology that is not (yet) familiar to science and is not found in nature. In practice, it describes novel biological systems and biochemistries that differ from the canonical DNA-RNA-20 amino acid system (see central dogma of molecular biology). For example, instead of DNA or RNA, XB explores nucleic acid analogues, termed xeno nucleic acids (XNA), as information carriers. It also focuses on an expanded genetic code and the incorporation of non-proteinogenic amino acids into proteins.

Difference between xeno-, exo-, and astro-biology

"Astro" means "star" and "exo" means "outside". Both exo- and astrobiology deal with the search for naturally evolved life in the Universe, mostly on other planets in the circumstellar habitable zone. (These are also occasionally referred to as xenobiology.) Whereas astrobiologists are concerned with the detection and analysis of life elsewhere in the Universe, xenobiology attempts to design forms of life with a different biochemistry or different genetic code than on planet Earth.

Aims

  • Xenobiology has the potential to reveal fundamental knowledge about biology and the origin of life. In order to better understand the origin of life, it is necessary to know why life evolved seemingly via an early RNA world to the DNA-RNA-protein system and its nearly universal genetic code. Was it an evolutionary "accident" or were there constraints that ruled out other types of chemistries? By testing alternative biochemical "primordial soups", researchers expect to better understand the principles that gave rise to life as we know it.
  • Xenobiology is an approach to developing industrial production systems with novel capabilities by means of enhanced biopolymer engineering and pathogen resistance. In all organisms, the genetic code encodes 20 canonical amino acids that are used for protein biosynthesis. In rare cases, special amino acids such as selenocysteine, pyrrolysine or formylmethionine can be incorporated by the translational apparatus into the proteins of some organisms. By using additional amino acids from among the over 700 known to biochemistry, the capabilities of proteins may be altered to give rise to more efficient catalytic or material functions. The EC-funded project Metacode, for example, aims to incorporate metathesis (a useful catalytic function so far not known in living organisms) into bacterial cells. Another reason why XB could improve production processes lies in the possibility of reducing the risk of virus or bacteriophage contamination in cultivation, since XB cells would no longer provide suitable host cells, rendering them more resistant (an approach called semantic containment).
  • Xenobiology offers the option to design a "genetic firewall", a novel biocontainment system, which may help to strengthen and diversify current bio-containment approaches. One concern with traditional genetic engineering and biotechnology is horizontal gene transfer to the environment and possible risks to human health. One major idea in XB is to design alternative genetic codes and biochemistries so that horizontal gene transfer is no longer possible. Additionally, an alternative biochemistry allows for new synthetic auxotrophies. The idea is to create an orthogonal biological system that would be incompatible with natural genetic systems.

Scientific approach

In xenobiology, the aim is to design and construct biological systems that differ from their natural counterparts on one or more fundamental levels. Ideally these new-to-nature organisms would be different in every possible biochemical aspect and exhibit a very different genetic code. The long-term goal is to construct a cell that would store its genetic information not in DNA but in an alternative informational polymer consisting of xeno nucleic acids (XNA) with different base pairs, and that would use non-canonical amino acids and an altered genetic code. So far, cells have been constructed that incorporate only one or two of these features.

Xeno nucleic acids (XNA)

Originally this research on alternative forms of DNA was driven by the question of how life evolved on Earth and why RNA and DNA were selected by (chemical) evolution over other possible nucleic acid structures. Two hypotheses for the selection of RNA and DNA as life's backbone are that they were favored under the conditions of the early Earth, or that they happened to be present in prebiotic chemistry and have been used ever since. Systematic experimental studies aiming at the diversification of the chemical structure of nucleic acids have resulted in completely novel informational biopolymers. So far a number of XNAs with new chemical backbones or leaving groups have been synthesized, e.g. hexose nucleic acid (HNA), threose nucleic acid (TNA), glycol nucleic acid (GNA) and cyclohexenyl nucleic acid (CeNA). The incorporation of XNA into a plasmid, in the form of three HNA codons, was accomplished as early as 2003. This XNA was used in vivo (in E. coli) as a template for DNA synthesis. This study, using a binary (G/T) genetic cassette and two non-DNA bases (Hx/U), was extended to CeNA, while GNA appears at present to be too alien for the natural biological system to use as a template for DNA synthesis. Extended bases using a natural DNA backbone could, likewise, be transliterated into natural DNA, although to a more limited extent.

Aside from being used as extensions of template DNA strands, XNA activity has been tested for use as a genetic catalyst. Although proteins are the most common components of cellular enzymatic activity, nucleic acids are also used in the cell to catalyze reactions. A 2015 study found that several kinds of XNA, most notably FANA (2'-fluoroarabino nucleic acid), as well as HNA, CeNA and ANA (arabino nucleic acid), could be used to cleave RNA during post-transcriptional RNA processing, acting as XNA enzymes, hence the name XNAzymes. FANA XNAzymes also showed the ability to ligate DNA, RNA and XNA substrates. Although XNAzyme studies are still preliminary, this study was a step toward the search for synthetic circuit components that are more efficient than their DNA- and RNA-containing counterparts and that can regulate DNA, RNA, and their own XNA substrates.

Expanding the genetic alphabet

While XNAs have modified backbones, other experiments target the replacement or enlargement of the genetic alphabet of DNA with unnatural base pairs. For example, DNA has been designed that has – instead of the four standard bases A, T, G, and C – six bases A, T, G, C and the two new ones P and Z (where Z stands for 6-amino-5-nitro-3-(1'-β-D-2'-deoxyribofuranosyl)-2(1H)-pyridone, and P stands for 2-amino-8-(1'-β-D-2'-deoxyribofuranosyl)imidazo[1,2-a]-1,3,5-triazin-4(8H)-one). In a systematic study, Leconte et al. tested the viability of 60 candidate bases (yielding potentially 3600 base pairs) for possible incorporation into DNA.
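To make the idea of an enlarged alphabet concrete, the toy sketch below extends the ordinary Watson–Crick complement table with the P–Z pair described above; the sequence, the function names and of course the underlying chemistry are purely illustrative assumptions, not taken from the cited studies.

```python
# Toy illustration of a six-letter genetic alphabet (hypothetical example).
# The standard pairs A-T and G-C are extended with the unnatural pair P-Z;
# only the bookkeeping is modelled, none of the chemistry.

COMPLEMENT = {
    "A": "T", "T": "A",
    "G": "C", "C": "G",
    "P": "Z", "Z": "P",  # unnatural third base pair
}

def reverse_complement(strand: str) -> str:
    """Return the reverse complement of a strand over the expanded alphabet."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

if __name__ == "__main__":
    print(reverse_complement("ATGCPZZPA"))  # -> TZPPZGCAT
```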

In 2002, Hirao et al. developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in vitro in transcription and translation, toward a genetic code for protein synthesis containing a non-standard amino acid. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription, and afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were discovered as a high-fidelity pair in PCR amplification. In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that the genetic alphabet expansion significantly augments DNA aptamer affinities to target proteins.

In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA, alongside the four naturally occurring nucleotides, and by including individual artificial nucleotides in the culture media, were able to passage the bacteria 24 times; they did not create mRNA or proteins able to use the artificial nucleotides.

Novel polymerases

Neither XNA nor the unnatural bases are recognized by natural polymerases. One of the major challenges is to find or create novel types of polymerases that will be able to replicate these new-to-nature constructs. In one case a modified variant of HIV reverse transcriptase was found to be able to PCR-amplify an oligonucleotide containing a third type of base pair. Pinheiro et al. (2012) demonstrated that the method of polymerase evolution and design successfully led to the storage and recovery of genetic information (less than 100 bp in length) from six alternative genetic polymers based on simple nucleic acid architectures not found in nature, the xeno nucleic acids.

Genetic code engineering

One of the goals of xenobiology is to rewrite the genetic code. The most promising approach to changing the code is the reassignment of seldom-used or even unused codons. In an ideal scenario, the genetic code is expanded by one codon, which is liberated from its old function and fully reassigned to a non-canonical amino acid (ncAA) ("code expansion"). As these methods are laborious to implement, some shortcuts can be applied ("code engineering"), for example in bacteria that are auxotrophic for specific amino acids and that at some point in the experiment are fed isostructural analogues instead of the canonical amino acids for which they are auxotrophic. In that situation, the canonical amino acid residues in native proteins are substituted with the ncAAs. Even the insertion of multiple different ncAAs into the same protein is possible. Finally, the repertoire of 20 canonical amino acids can not only be expanded, but also reduced to 19. By reassigning transfer RNA (tRNA)/aminoacyl-tRNA synthetase pairs, the codon specificity can be changed. Cells endowed with such aminoacyl-tRNA synthetases are thus able to read mRNA sequences that make no sense to the existing gene expression machinery. Altering codon–tRNA synthetase pairs may lead to the in vivo incorporation of non-canonical amino acids into proteins. In the past, codon reassignment was mainly done on a limited scale. In 2013, however, Farren Isaacs and George Church at Harvard University reported the replacement of all 321 TAG stop codons present in the genome of E. coli with synonymous TAA codons, thereby demonstrating that massive substitutions can be combined into higher-order strains without lethal effects. Following the success of this genome-wide codon replacement, the authors continued and achieved the reprogramming of 13 codons throughout the genome, directly affecting 42 essential genes.
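As a rough computational illustration of what a genome-wide stop-codon reassignment involves, the sketch below replaces every in-frame TAG codon of a toy coding sequence with the synonymous TAA codon. It is a simplified, hypothetical example, not the pipeline used by Isaacs and Church; real recoding also has to deal with overlapping genes, regulatory elements and off-frame occurrences.

```python
# Minimal sketch of synonymous stop-codon replacement (hypothetical toy example).

def recode_stop_codons(cds: str, old: str = "TAG", new: str = "TAA") -> str:
    """Replace in-frame occurrences of the `old` codon with `new`."""
    assert len(cds) % 3 == 0, "coding sequence must be a whole number of codons"
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(new if codon == old else codon for codon in codons)

if __name__ == "__main__":
    toy_gene = "ATGGCTTAGGGCTAG"          # TAG occurs twice in frame
    print(recode_stop_codons(toy_gene))   # -> ATGGCTTAAGGCTAA
```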

An even more radical change in the genetic code is the change of a triplet codon to a quadruplet or even pentaplet codon, pioneered by Sisido in cell-free systems and by Schultz in bacteria. Finally, non-natural base pairs can be used to introduce novel amino acids into proteins.

Directed evolution

The goal of replacing DNA with XNA may also be reached by another route, namely by engineering the environment instead of the genetic modules. This approach has been successfully demonstrated by Marlière and Mutzel with the production of an E. coli strain whose DNA is composed of standard A, C and G nucleotides but has the synthetic thymine analogue 5-chlorouracil instead of thymine (T) in the corresponding positions of the sequence. These cells are then dependent on externally supplied 5-chlorouracil for growth, but otherwise they look and behave like normal E. coli. These cells, however, are not yet fully auxotrophic for the xeno-base, since they still grow on thymine when it is supplied to the medium.

Biosafety

Xenobiological systems are designed to convey orthogonality to natural biological systems. A (still hypothetical) organism that uses XNA, different base pairs and polymerases, and has an altered genetic code will hardly be able to interact with natural forms of life on the genetic level. Thus, these xenobiological organisms represent a genetic enclave that cannot exchange information with natural cells. Altering the genetic machinery of the cell leads to semantic containment. In analogy to information processing in IT, this safety concept is termed a “genetic firewall”. The concept of the genetic firewall seems to overcome a number of limitations of previous safety systems. The first experimental evidence for the theoretical concept of the genetic firewall was achieved in 2013 with the construction of a genomically recoded organism (GRO). In this GRO all known UAG stop codons in E. coli were replaced by UAA codons, which allowed for the deletion of release factor 1 and the reassignment of UAG translation function. The GRO exhibited increased resistance to T7 bacteriophage, thus showing that alternative genetic codes do reduce genetic compatibility. This GRO, however, is still very similar to its natural “parent” and cannot be regarded as a genetic firewall. The possibility of reassigning the function of a large number of triplets opens the prospect of strains that combine XNA, novel base pairs, new genetic codes, etc., and that cannot exchange any information with the natural biological world. Regardless of the changes leading to a semantic containment mechanism in new organisms, any novel biochemical system still has to undergo toxicological screening. XNA, novel proteins, etc. might represent novel toxins or have an allergenic potential that needs to be assessed.

Governance and regulatory issues

Xenobiology might challenge the regulatory framework, as current laws and directives deal with genetically modified organisms and do not directly mention chemically or genomically modified organisms. Taking into account that real xenobiological organisms are not expected in the next few years, policy makers do have some time to prepare for this upcoming governance challenge. Since 2012, the following groups have picked up the topic as a developing governance issue: policy advisers in the US, four National Biosafety Boards in Europe, the European Molecular Biology Organisation, and the European Commission's Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR), in three opinions (definition; risk assessment methodologies and safety aspects; and risks to the environment and biodiversity related to synthetic biology and research priorities in the field of synthetic biology).

Mirror matter

From Wikipedia, the free encyclopedia

In physics, mirror matter, also called shadow matter or Alice matter, is a hypothetical counterpart to ordinary matter.

Overview

Modern physics deals with three basic types of spatial symmetry: reflection, rotation, and translation. The known elementary particles respect rotation and translation symmetry but do not respect mirror reflection symmetry (also called P-symmetry or parity). Of the four fundamental interactions (electromagnetism, the strong interaction, the weak interaction, and gravity), only the weak interaction breaks parity.

Parity violation in weak interactions was first postulated by Tsung Dao Lee and Chen Ning Yang in 1956 as a solution to the τ-θ puzzle. They suggested a number of experiments to test if the weak interaction is invariant under parity. These experiments were performed half a year later and they confirmed that the weak interactions of the known particles violate parity.

However, parity symmetry can be restored as a fundamental symmetry of nature if the particle content is enlarged so that every particle has a mirror partner. The theory in its modern form was described in 1991, although the basic idea dates back further. Mirror particles interact amongst themselves in the same way as ordinary particles, except where ordinary particles have left-handed interactions, mirror particles have right-handed interactions. In this way, it turns out that mirror reflection symmetry can exist as an exact symmetry of nature, provided that a "mirror" particle exists for every ordinary particle. Parity can also be spontaneously broken depending on the Higgs potential. While in the case of unbroken parity symmetry the masses of particles are the same as those of their mirror partners, in the case of broken parity symmetry the mirror partners are lighter or heavier.

Mirror matter, if it exists, would have to rely on gravity to interact with ordinary matter. This is because the forces between mirror particles are mediated by mirror bosons. With the exception of the graviton, none of the known bosons can be identical to their mirror partners. The only way mirror matter can interact with ordinary matter via forces other than gravity is via kinetic mixing of mirror bosons with ordinary bosons or via the exchange of Holdom particles. These interactions can only be very weak. Mirror particles have therefore been suggested as candidates for the inferred dark matter in the universe.

In another context, mirror matter has been proposed to give rise to an effective Higgs mechanism responsible for the electroweak symmetry breaking. In such a scenario, mirror fermions have masses on the order of 1 TeV since they interact with an additional interaction, while some of the mirror bosons are identical to the ordinary gauge bosons. In order to emphasize the distinction of this model from the ones above, these mirror particles are usually called katoptrons.

Observational effects

Abundance

Mirror matter could have been diluted to unobservably low densities during the inflation epoch. Sheldon Glashow has shown that if at some high energy scale there exist particles that interact strongly with both ordinary and mirror particles, radiative corrections will lead to a mixing between photons and mirror photons. This mixing has the effect of giving mirror-charged particles a very small ordinary electric charge. Another effect of photon–mirror photon mixing is that it induces oscillations between positronium and mirror positronium. Positronium could then turn into mirror positronium and then decay into mirror photons.
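The photon–mirror photon mixing referred to above is conventionally written as a kinetic mixing term between the two field strengths; a standard form (with the dimensionless mixing parameter usually denoted ε) is

    \mathcal{L}_{\text{mix}} = \frac{\epsilon}{2}\, F^{\mu\nu} F'_{\mu\nu}

where F is the field-strength tensor of the ordinary photon and F' that of the mirror photon. One consequence is that particles carrying mirror electric charge acquire a small effective ordinary charge of order εe, which is the effect described in the paragraph above.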

The mixing between photons and mirror photons could be present in tree-level Feynman diagrams or arise as a consequence of quantum corrections due to the presence of particles that carry both ordinary and mirror charges. In the latter case, the quantum corrections have to vanish at the one- and two-loop level, otherwise the predicted value of the kinetic mixing parameter would be larger than experimentally allowed.

An experiment to measure this effect is currently being planned.

Dark matter

If mirror matter does exist in large abundances in the universe and if it interacts with ordinary matter via photon-mirror photon mixing, then this could be detected in dark matter direct detection experiments such as DAMA/NaI and its successor DAMA/LIBRA. In fact, it is one of the few dark matter candidates which can explain the positive DAMA/NaI dark matter signal whilst still being consistent with the null results of other dark matter experiments.

Electromagnetic effects

Mirror matter may also be detected in electromagnetic field penetration experiments and there would also be consequences for planetary science and astrophysics.

GZK puzzle

Mirror matter could also be responsible for the GZK puzzle. Topological defects in the mirror sector could produce mirror neutrinos which can oscillate to ordinary neutrinos. Another possible way to evade the GZK bound is via neutron–mirror neutron oscillations.

Gravitational effects

If mirror matter is present in the universe with sufficient abundance then its gravitational effects can be detected. Because mirror matter is analogous to ordinary matter, it is then to be expected that a fraction of the mirror matter exists in the form of mirror galaxies, mirror stars, mirror planets etc. These objects can be detected using gravitational microlensing. One would also expect that some fraction of stars have mirror objects as their companion. In such cases one should be able to detect periodic Doppler shifts in the spectrum of the star. There are some hints that such effects may already have been observed.

 

Open science data

From Wikipedia, the free encyclopedia

Open science data or open research data is a type of open data focused on making observations and results of scientific activities available for anyone to analyze and reuse. A major purpose of the drive for open data is to allow the verification of scientific claims, by allowing others to examine the reproducibility of results, and to allow data from many sources to be integrated to give new knowledge. While the idea of open science data has been actively promoted since the 1950s, the rise of the Internet has significantly lowered the cost and time required to publish or obtain data.

History

The concept of open access to scientific data was institutionally established with the formation of the World Data Center system (now the World Data System), in preparation for the International Geophysical Year of 1957–1958. The International Council of Scientific Unions (now the International Council for Science) established several World Data Centers to minimize the risk of data loss and to maximize data accessibility, further recommending in 1955 that data be made available in machine-readable form.

The first initiative to create an electronic bibliographic database of open-access data was the Educational Resources Information Center (ERIC) in 1966. In the same year, MEDLINE was created – a free-access online database managed by the National Library of Medicine and the National Institutes of Health (USA) with bibliographical citations from journals in the biomedical area, which later would be called PubMed, currently with over 14 million complete articles.

In 1995 GCDIS (US) put its position clearly in On the Full and Open Exchange of Scientific Data (a publication of the Committee on Geophysical and Environmental Data, National Research Council):

"The Earth's atmosphere, oceans, and biosphere form an integrated system that transcends national boundaries. To understand the elements of the system, the way they interact, and how they have changed with time, it is necessary to collect and analyze environmental data from all parts of the world. Studies of the global environment require international collaboration for many reasons:

  • to address global issues, it is essential to have global data sets and products derived from these data sets;
  • it is more efficient and cost-effective for each nation to share its data and information than to collect everything it needs independently; and
  • the implementation of effective policies addressing issues of the global environment requires the involvement from the outset of nearly all nations of the world.

International programs for global change research and environmental monitoring crucially depend on the principle of full and open data exchange (i.e., data and information are made available without restriction, on a non-discriminatory basis, for no more than the cost of reproduction and distribution)."

The last phrase highlights the traditional cost of disseminating information by print and post. It is the removal of this cost through the Internet which has made data vastly easier to disseminate technically. It is correspondingly cheaper to create, sell and control many data resources and this has led to the current concerns over non-open data.

More recent uses of the term include:

  • SAFARI 2000 (South Africa, 2001) used a license informed by ICSU and NASA policies
  • The human genome (Kent, 2002)
  • An Open Data Consortium on geospatial data (2003)
  • Manifesto for Open Chemistry (Murray-Rust and Rzepa, 2004)
  • Presentations to JISC and OAI under the title "open data" (Murray-Rust, 2005)
  • Science Commons launch (2004)
  • First Open Knowledge Forums on open data in relation to civic information and geodata, run by the Open Knowledge Foundation (London, UK) (February and April 2005)
  • The Blue Obelisk group in chemistry (mantra: Open Data, Open Source, Open Standards) (2005) doi:10.1021/ci050400b
  • The Petition for Open Data in Crystallography is launched by the Crystallography Open Database Advisory Board (2005)
  • XML Conference & Exposition 2005 (Connolly 2005)
  • SPARC Open Data mailing list (2005)
  • First draft of the Open Knowledge Definition explicitly references "Open Data" (2005)
  • XTech (Dumbill, 2005), (Bray and O'Reilly 2006)

In 2004, the Science Ministers of all nations of the OECD (Organisation for Economic Co-operation and Development), which includes most developed countries of the world, signed a declaration which essentially states that all publicly funded archive data should be made publicly available. Following a request and an intense discussion with data-producing institutions in member states, the OECD published in 2007 the OECD Principles and Guidelines for Access to Research Data from Public Funding as a soft-law recommendation.

In 2005 Edd Dumbill introduced an “Open Data” theme at XTech.

In 2006 Science Commons ran a two-day conference in Washington where the primary topic could be described as open data. It was reported that the amount of micro-protection of data (e.g. by license) in areas such as biotechnology was creating a tragedy of the anticommons, in which the costs of obtaining licenses from a large number of owners made it uneconomic to do research in the area.

In 2007 SPARC and Science Commons announced a consolidation and enhancement of their author addenda.

In 2007 the OECD (Organisation for Economic Co-operation and Development) published the Principles and Guidelines for Access to Research Data from Public Funding. The Principles state that:

Access to research data increases the returns from public investment in this area; reinforces open scientific inquiry; encourages diversity of studies and opinion; promotes new areas of work and enables the exploration of topics not envisioned by the initial investigators.

In 2010 the Panton Principles launched, advocating open data in science and setting out four principles with which providers must comply for their data to be considered open.

In 2011 LinkedScience.org was launched to realize the approach of Linked Open Science: openly sharing and interconnecting scientific assets like datasets, methods, tools and vocabularies.

In 2012, the Royal Society published a major report, "Science as an Open Enterprise", advocating open scientific data and considering its benefits and requirements.

In 2013 the G8 Science Ministers released a statement supporting a set of principles for open scientific research data.

In 2015 the World Data System of the International Council for Science adopted a new set of Data Sharing Principles to embody the spirit of 'open science'. These Principles are in line with the data policies of national and international initiatives, and they express core ethical commitments operationalized in the WDS Certification of trusted data repositories and services.

Relation to open access

Much data is made available through scholarly publication, which is now the subject of intense debate under the heading of "Open Access", along with semantically open formats, such as offering scientific articles in the JATS format. The Budapest Open Access Initiative (2001) coined this term:

By "open access" to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

The logic of the declaration permits re-use of the data, although the term "literature" has connotations of human-readable text and can imply a scholarly publication process. In Open Access discourse the term "full-text" is often used, which does not emphasize the data contained within or accompanying the publication.

Some Open Access publishers do not require the authors to assign copyright and the data associated with these publications can normally be regarded as Open Data. Some publishers have Open Access strategies where the publisher requires assignment of the copyright and where it is unclear that the data in publications can be truly regarded as Open Data.

The ALPSP and STM publishers have issued a statement about the desirability of making data freely available:

Publishers recognise that in many disciplines data itself, in various forms, is now a key output of research. Data searching and mining tools permit increasingly sophisticated use of raw data. Of course, journal articles provide one ‘view’ of the significance and interpretation of that data – and conference presentations and informal exchanges may provide other ‘views’ – but data itself is an increasingly important community resource. Science is best advanced by allowing as many scientists as possible to have access to as much prior data as possible; this avoids costly repetition of work, and allows creative new integration and reworking of existing data.

and

We believe that, as a general principle, data sets, the raw data outputs of research, and sets or sub-sets of that data which are submitted with a paper to a journal, should wherever possible be made freely accessible to other scholars. We believe that the best practice for scholarly journal publishers is to separate supporting data from the article itself, and not to require any transfer of or ownership in such data or data sets as a condition of publication of the article in question.

This statement, however, has had no effect on the open availability of primary data related to publications in the journals of ALPSP and STM members. Data tables provided by the authors as a supplement to a paper are still available to subscribers only.

Relation to peer review

In an effort to address issues with the reproducibility of research results, some scholars are asking that authors agree to share their raw data as part of the scholarly peer review process. Since as far back as 1962, for example, psychologists have attempted to obtain raw data sets from other researchers, with mixed results, in order to reanalyze them. A recent attempt resulted in only seven data sets out of fifty requests. The notion of obtaining, let alone requiring, open data as a condition of peer review remains controversial.

Open research computation

To make sense of scientific data, they must be analysed. In all but the simplest cases, this is done by software. The extensive use of software poses problems for the reproducibility of research. To keep research reproducible, it is necessary to publish not only all data, but also the source code of all software used and all the parametrization used in running this software. Presently, these requirements are rarely met. Ways to come closer to reproducible scientific computation are discussed under the catchword "open research computation".
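One minimal way to publish "all the parametrization used in running this software", as called for above, is to keep the parameters in a machine-readable record released alongside the code and data. The sketch below is a hypothetical illustration of that practice (the parameter names and the placeholder analysis are invented for the example), not a description of any particular project's setup.

```python
# Hypothetical sketch: record the exact parameters and environment of an
# analysis so that the computation can be re-run and audited by others.
import json
import platform
import sys

PARAMS = {
    "smoothing_window": 5,   # illustrative analysis parameters (hypothetical)
    "threshold": 0.05,
    "random_seed": 42,
}

def run_analysis(data, params):
    """Placeholder for the actual analysis; only the bookkeeping is shown."""
    return {"n_points": len(data), "threshold_used": params["threshold"]}

if __name__ == "__main__":
    data = [0.1, 0.2, 0.3]            # stand-in for the published data set
    record = {
        "parameters": PARAMS,
        "results": run_analysis(data, PARAMS),
        "python_version": sys.version,
        "platform": platform.platform(),
    }
    # Publishing this record with the code and data makes the run reproducible.
    json.dump(record, sys.stdout, indent=2)
```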

Open data

From Wikipedia, the free encyclopedia
 
Open data map
 
Linked open data cloud in August 2014
 
Clear labeling of the licensing terms is a key component of open data, and icons like the one pictured here are being used for that purpose.

Open Data is the idea that some data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control. The goals of the open-source data movement are similar to those of other "open(-source)" movements such as open-source software, hardware, open content, open specifications, open education, open educational resources, open government, open knowledge, open access, open science, and the open web. Paradoxically, the growth of the open data movement is paralleled by a rise in intellectual property rights. The philosophy behind open data has been long established (for example in the Mertonian tradition of science), but the term "open data" itself is recent, gaining popularity with the rise of the Internet and World Wide Web and, especially, with the launch of open-data government initiatives such as Data.gov, Data.gov.uk and Data.gov.in.

Open data can also be linked data; when it is, it is linked open data. One of the most important forms of open data is open government data (OGD), which is a form of open data created by governing institutions. Open government data's importance stems from its being a part of citizens' everyday lives, down to the most routine and mundane tasks that are seemingly far removed from government.

The abbreviation FAIR/O data is sometimes used to indicate that the dataset or database in question complies with the principles of FAIR data and also carries an explicit data‑capable open license.

Overview

The concept of open data is not new, but a formalized definition is relatively new. Conceptually, open data as a phenomenon denotes that governmental data should be available to anyone, with the possibility of redistribution in any form without any copyright restriction. Another definition is the Open Definition, which can be summarized in the statement that "A piece of data is open if anyone is free to use, reuse, and redistribute it – subject only, at most, to the requirement to attribute and/or share-alike." Other definitions, including the Open Data Institute's "Open data is data that anyone can access, use or share", have an accessible short version but refer back to the formal definition.

Open data may include non-textual material such as maps, genomes, connectomes, chemical compounds, mathematical and scientific formulae, medical data and practice, bioscience and biodiversity. Problems often arise because these are commercially valuable or can be aggregated into works of value. Access to, or re-use of, the data is controlled by organisations, both public and private. Control may be through access restrictions, licenses, copyright, patents and charges for access or re-use. Advocates of open data argue that these restrictions are against the common good and that these data should be made available without restriction or fee. In addition, it is important that the data are re-usable without requiring further permission, though the types of re-use (such as the creation of derivative works) may be controlled by a license.

A typical depiction of the need for open data:

Numerous scientists have pointed out the irony that right at the historical moment when we have the technologies to permit worldwide availability and distributed process of scientific data, broadening collaboration and accelerating the pace and depth of discovery ... we are busy locking up that data and preventing the use of correspondingly advanced technologies on knowledge.

— John Wilbanks, VP Science, Creative Commons

Creators of data often do not consider the need to state the conditions of ownership, licensing and re-use; instead presuming that not asserting copyright puts the data into the public domain. For example, many scientists do not regard the published data arising from their work to be theirs to control and consider the act of publication in a journal to be an implicit release of data into the commons. However, the lack of a license makes it difficult to determine the status of a data set and may restrict the use of data offered in an "Open" spirit. Because of this uncertainty it is also possible for public or private organizations to aggregate said data, claim that it is protected by copyright and then resell it.

The issue of indigenous knowledge (IK) poses a great challenge in terms of capturing, storage and distribution. Many societies in third-world countries lack the technical processes for managing IK.

At his presentation at the XML 2005 conference, Connolly displayed these two quotations regarding open data:

  • "I want my data back." (Jon Bosak circa 1997)
  • "I've long believed that customers of any application own the data they enter into it." (Jeffrey Veen, referring to his own heart-rate data)

Major sources

The State of Open Data, a 2019 book from African Minds

Open data can come from any source. This section lists some of the fields that publish (or at least discuss publishing) a large amount of open data.

In science

The concept of open access to scientific data was institutionally established with the formation of the World Data Center system, in preparation for the International Geophysical Year of 1957–1958. The International Council of Scientific Unions (now the International Council for Science) oversees several World Data Centres with the mandate to minimize the risk of data loss and to maximize data accessibility.

While the open-science-data movement long predates the Internet, the availability of fast, ubiquitous networking has significantly changed the context of Open science data, since publishing or obtaining data has become much less expensive and time-consuming.

The Human Genome Project was a major initiative that exemplified the power of open data. It was built upon the so-called Bermuda Principles, stipulating that: "All human genomic sequence information … should be freely available and in the public domain in order to encourage research and development and to maximize its benefit to society." More recent initiatives, such as the Structural Genomics Consortium, have illustrated that the open data approach can also be used productively within the context of industrial R&D.

In 2004, the Science Ministers of all nations of the Organisation for Economic Co-operation and Development (OECD), which includes most developed countries of the world, signed a declaration which essentially states that all publicly funded archive data should be made publicly available. Following a request and an intense discussion with data-producing institutions in member states, the OECD published in 2007 the OECD Principles and Guidelines for Access to Research Data from Public Funding as a soft-law recommendation.

Examples of open data in science:

  • The Dataverse Network Project – archival repository software promoting data sharing, persistent data citation, and reproducible research
  • data.uni-muenster.de – Open data about scientific artifacts from the University of Muenster, Germany. Launched in 2011.
  • linkedscience.org/data – Open scientific datasets encoded as Linked Data. Launched in 2011, ended 2018.
  • systemanaturae.org – Open scientific datasets related to wildlife classified by animal species. Launched in 2015.

In government

There are a range of different arguments for government open data. For example, some advocates contend that making government information available to the public as machine readable open data can facilitate government transparency, accountability and public participation. "Open data can be a powerful force for public accountability—it can make existing information easier to analyze, process, and combine than ever before, allowing a new level of public scrutiny." Governments that enable public viewing of data can help citizens engage within the governmental sectors and "add value to that data."

Some make the case that opening up official information can support technological innovation and economic growth by enabling third parties to develop new kinds of digital applications and services.

Several national governments have created websites to distribute a portion of the data they collect. The concept has also been taken up as collaborative projects at the municipal government level to create and organize a culture of open data or open government data.

Additionally, other levels of government have established open data websites. There are many government entities pursuing open data in Canada. Data.gov lists the sites of a total of 40 US states and 46 US cities and counties that provide open data, e.g. the state of Maryland, the state of California, and New York City.

At the international level, the United Nations has an open data website that publishes statistical data from member states and UN agencies, and the World Bank published a range of statistical data relating to developing countries. The European Commission has created two portals for the European Union: the EU Open Data Portal which gives access to open data from the EU institutions, agencies and other bodies and the PublicData portal that provides datasets from local, regional and national public bodies across Europe.

Italy is the first country to release standard processes and guidelines under a Creative Commons license for widespread use in the public administration. The open model is called the Open Data Management Cycle (ODMC) and was adopted in several regions, such as Veneto and Umbria, and in major cities such as Reggio Calabria and Genoa.

In October 2015, the Open Government Partnership launched the International Open Data Charter, a set of principles and best practices for the release of governmental open data formally adopted by seventeen governments of countries, states and cities during the OGP Global Summit in Mexico.

In non-profit organizations

Many non-profit organizations offer more or less open access to their data, as long as it does not undermine their users', members' or third parties' privacy rights. In comparison to for-profit corporations, they do not seek to monetize their data. OpenNWT launched a website offering open election data. CIAT offers open data to anybody who is willing to conduct big data analytics in order to enhance the benefit of international agricultural research. DBLP, which is owned by the non-profit organization Dagstuhl, offers its database of scientific publications from computer science as open data. Non-profit hospitality exchange services offer trustworthy teams of scientists access to their anonymized data for the publication of insights to the benefit of humanity. Before becoming a for-profit corporation in 2011, Couchsurfing offered 4 research teams access to its social networking data. In 2015, the non-profit hospitality exchange services Bewelcome and Warm Showers provided their data for public research.

National policies and strategies

Germany launched an official strategy in July 2021.

Arguments for and against

The debate on open data is still evolving. The best open government applications seek to empower citizens, to help small businesses, or to create value in some other positive, constructive way. Opening government data is only a way-point on the road to improving education, improving government, and building tools to solve other real world problems. While many arguments have been made categorically, the following discussion of arguments for and against open data highlights that these arguments often depend highly on the type of data and its potential uses.

Arguments made on behalf of open data include the following:

  • "Data belongs to the human race". Typical examples are genomes, data on organisms, medical science, environmental data following the Aarhus Convention
  • Public money was used to fund the work and so it should be universally available.
  • It was created by or at a government institution (this is common in US National Laboratories and government agencies)
  • Facts cannot legally be copyrighted.
  • Sponsors of research do not get full value unless the resulting data are freely available.
  • Restrictions on data re-use create an anticommons.
  • Data are required for the smooth process of running communal human activities and are an important enabler of socio-economic development (health care, education, economic productivity, etc.).
  • In scientific research, the rate of discovery is accelerated by better access to data.
  • Making data open helps combat "data rot" and ensure that scientific research data are preserved over time.
  • Statistical literacy benefits from open data. Instructors can use locally relevant data sets to teach statistical concepts to their students.

It is generally held that factual data cannot be copyrighted. However, publishers frequently add copyright statements (often forbidding re-use) to scientific data accompanying publications. It may be unclear whether the factual data embedded in full text are part of the copyright.

While the human abstraction of facts from paper publications is normally accepted as legal, there is often an implied restriction on machine extraction by robots.

Unlike open access, where groups of publishers have stated their concerns, open data is normally challenged by individual institutions. Their arguments have been discussed less in public discourse and there are fewer quotes to rely on at this time.

Arguments against making all data available as open data include the following:

  • Government funding may not be used to duplicate or challenge the activities of the private sector (e.g. PubChem).
  • Governments have to be accountable for the efficient use of taxpayers' money: If public funds are used to aggregate the data and if the data will bring commercial (private) benefits to only a small number of users, the users should reimburse governments for the cost of providing the data.
  • Open data may lead to exploitation of, and rapid publication of results based on, data pertaining to developing countries by rich and well-equipped research institutes, without any further involvement and/or benefit to local communities (helicopter research); similarly to the historical open access to tropical forests that has led to the disappropriation ("Global Pillage") of plant genetic resources from developing countries.
  • The revenue earned by publishing data can be used to cover the costs of generating and/or disseminating the data, so that the dissemination can continue indefinitely.
  • The revenue earned by publishing data permits non-profit organisations to fund other activities (e.g. learned society publishing supports the society).
  • The government gives specific legitimacy for certain organisations to recover costs (NIST in US, Ordnance Survey in UK).
  • Privacy concerns may require that access to data is limited to specific users or to sub-sets of the data.
  • Collecting, 'cleaning', managing and disseminating data are typically labour- and/or cost-intensive processes – whoever provides these services should receive fair remuneration for providing those services.
  • Sponsors do not get full value unless their data is used appropriately – sometimes this requires quality management, dissemination and branding efforts that can best be achieved by charging fees to users.
  • Often, targeted end-users cannot use the data without additional processing (analysis, apps etc.) – if anyone has access to the data, no one may have an incentive to invest in the processing required to make data useful (typical examples include biological, medical, and environmental data).
  • There is no control over the secondary use (aggregation) of open data.

Relation to other open activities

The goals of the Open Data movement are similar to those of other "Open" movements.

  • Open access is concerned with making scholarly publications freely available on the internet. In some cases, these articles include open datasets as well.
  • Open specifications are documents describing file types or protocols, where the documents themselves are openly licensed. Usually these specifications are primarily meant to improve interoperability among different software handling the same file types or protocols, but monopolists forced by law into open specifications might make this more difficult in practice.
  • Open content is concerned with making resources aimed at a human audience (such as prose, photos, or videos) freely available.
  • Open knowledge. Open Knowledge International argues for openness in a range of issues including, but not limited to, those of open data. It covers (a) data, whether scientific, historical, geographic or otherwise; (b) content such as music, films and books; and (c) government and other administrative information. Open data is included within the scope of the Open Knowledge Definition, which is alluded to in Science Commons' Protocol for Implementing Open Access Data.
  • Open notebook science refers to the application of the Open Data concept to as much of the scientific process as possible, including failed experiments and raw experimental data.
  • Open-source software is concerned with the open-source licenses under which computer programs can be distributed and is not normally concerned primarily with data.
  • Open educational resources are freely accessible, openly licensed documents and media that are useful for teaching, learning, and assessing as well as for research purposes.
  • Open research/open science/open science data (linked open science) means an approach to open and interconnect scientific assets like data, methods and tools with linked data techniques to enable transparent, reproducible and transdisciplinary research.
  • Open-GLAM (Galleries, Libraries, Archives, and Museums) is an initiative and network that supports exchange and collaboration between cultural institutions that support open access to their digitised collections. The GLAM-Wiki Initiative helps cultural institutions share their openly licensed resources with the world through collaborative projects with experienced Wikipedia editors. Open Heritage Data is associated with Open GLAM, as openly licensed data in the heritage sector is now frequently used in research, publishing, and programming, particularly in the Digital Humanities.

Funders' mandates

Several funding bodies which mandate Open Access also mandate Open Data. A good expression of requirements (truncated in places) is given by the Canadian Institutes of Health Research (CIHR):

  • to deposit bioinformatics, atomic and molecular coordinate data, experimental data into the appropriate public database immediately upon publication of research results.
  • to retain original data sets for a minimum of five years after the grant. This applies to all data, whether published or not.

Other bodies active in promoting the deposition of data as well as full text include the Wellcome Trust. An academic paper published in 2013 advocated that Horizon 2020 (the science funding mechanism of the EU) should mandate that funded projects hand in their databases as "deliverables" at the end of the project, so that they can be checked for third-party usability and then shared.

Non-open data

Several mechanisms restrict access to or reuse of data (and several reasons for doing this are given above). They include:

  • making data available for a charge.
  • compilation in databases or websites to which only registered members or customers can have access.
  • use of a proprietary or closed technology or encryption which creates a barrier for access.
  • copyright statements claiming to forbid (or obfuscating) re-use of the data, including the use of "no derivatives" requirements.
  • patents forbidding re-use of the data (for example, the 3-dimensional coordinates of some experimental protein structures have been patented).
  • restriction of robots to websites, with preference given to certain search engines.
  • aggregating factual data into "databases" which may be covered by "database rights" or "database directives" (e.g. the Directive on the legal protection of databases).
  • time-limited access to resources such as e-journals (which in traditional print were available to the purchaser indefinitely).
  • "webstacles", or the provision of single data points as opposed to tabular queries or bulk downloads of data sets.
  • political, commercial or legal pressure on the activity of organisations providing Open Data (for example the American Chemical Society lobbied the US Congress to limit funding to the National Institutes of Health for its Open PubChem data).

Open innovation

From Wikipedia, the free encyclopedia

Open innovation is a term used to promote an information-age mindset toward innovation that runs counter to the secrecy and silo mentality of traditional corporate research labs. The benefits and driving forces behind increased openness have been noted and discussed as far back as the 1960s, especially as they pertain to interfirm cooperation in R&D. Use of the term 'open innovation' in reference to the increasing embrace of external cooperation in a complex world has been promoted in particular by Henry Chesbrough, adjunct professor and faculty director of the Center for Open Innovation of the Haas School of Business at the University of California, Berkeley, and Maire Tecnimont Chair of Open Innovation at Luiss.

The term originally referred to "a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology". More recently, it has been defined as "a distributed innovation process based on purposively managed knowledge flows across organizational boundaries, using pecuniary and non-pecuniary mechanisms in line with the organization's business model". This more recent definition acknowledges that open innovation is not solely firm-centric: it also includes creative consumers and communities of user innovators. The boundaries between a firm and its environment have become more permeable; innovations can easily transfer inward and outward between firms and other firms and between firms and creative consumers, resulting in impacts at the level of the consumer, the firm, an industry, and society.

Because innovations tend to be produced by outsiders and founders in startups, rather than existing organizations, the central idea behind open innovation is that, in a world of widely distributed knowledge, companies cannot afford to rely entirely on their own research, but should instead buy or license processes or inventions (i.e. patents) from other companies. This is termed inbound open innovation. In addition, internal inventions not being used in a firm's business should be taken outside the company (e.g. through licensing, joint ventures or spin-offs). This is called outbound open innovation.

The open innovation paradigm can be interpreted to go beyond just using external sources of innovation such as customers, rival companies, and academic institutions, and can be as much a change in the use, management, and employment of intellectual property as it is in the technical and research driven generation of intellectual property. In this sense, it is understood as the systematic encouragement and exploration of a wide range of internal and external sources for innovative opportunities, the integration of this exploration with firm capabilities and resources, and the exploitation of these opportunities through multiple channels.

In addition, since open innovation explores a wide range of internal and external sources, it can be analyzed not only at the level of the company, but also at the intra-organizational, inter-organizational and extra-organizational levels, as well as at the industrial, regional and societal levels (Bogers et al., 2017).

Advantages

Open innovation offers several benefits to companies operating on a program of global collaboration:

  • Reduced cost of conducting research and development
  • Potential for improvement in development productivity
  • Incorporation of customers early in the development process
  • Increase in accuracy for market research and customer targeting
  • Improved performance in planning and delivering projects
  • Potential for synergism between internal and external innovations
  • Potential for viral marketing
  • Enhanced digital transformation
  • Potential for completely new business models
  • Leveraging of innovation ecosystems

Disadvantages

Implementing a model of open innovation is naturally associated with a number of risks and challenges, including:

  • Possibility of revealing information not intended for sharing
  • Potential for the hosting organization to lose their competitive advantage as a consequence of revealing intellectual property
  • Increased complexity of controlling innovation and regulating how contributors affect a project
  • Devising a means to properly identify and incorporate external innovation
  • Realigning innovation strategies to extend beyond the firm in order to maximize the return from external innovation

Models

Government driven

In the UK, knowledge transfer partnerships (KTP) are a funding mechanism encouraging the partnership between a firm and a knowledge-based partner. A KTP is a collaboration program between a knowledge-based partner (i.e. a research institution), a company partner and one or more associates (i.e. recently qualified persons such as graduates). KTP initiatives aim to deliver significant improvement in business partners' profitability as a direct result of the partnership, through enhanced quality and operations, increased sales and access to new markets. At the end of their KTP project, the three actors involved have to prepare a final report that describes how the KTP initiative supported the achievement of the project's innovation goals.

Product platforming

This approach involves developing and introducing a partially completed product, for the purpose of providing a framework or tool-kit for contributors to access, customize, and exploit. The goal is for the contributors to extend the platform product's functionality while increasing the overall value of the product for everyone involved.

Readily available software frameworks, such as a software development kit (SDK) or an application programming interface (API), are common examples of product platforms. This approach is common in markets with strong network effects, where demand for the product implementing the framework (such as a mobile phone or an online application) increases with the number of developers attracted to the platform tool-kit. The high scalability of platforming often results in increased complexity of administration and quality assurance.
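To make the platforming idea concrete, the following is a minimal sketch of how a host organization might expose an extension point so outside contributors can add features on top of a partially completed product. All names (Plugin, Platform, CoreProduct, register) are hypothetical illustrations, not any specific vendor's SDK or API.

```typescript
// Hypothetical sketch of a product platform with a contributor-facing extension point.

interface Plugin {
  name: string;
  // Each contributor supplies a feature that extends the core product.
  extend(core: CoreProduct): void;
}

class CoreProduct {
  features: string[] = ["base-functionality"];
  addFeature(feature: string): void {
    this.features.push(feature);
  }
}

class Platform {
  private plugins: Plugin[] = [];
  constructor(private core: CoreProduct) {}

  // The host organization keeps control of registration and the core,
  // while contributors add value on top of it.
  register(plugin: Plugin): void {
    this.plugins.push(plugin);
    plugin.extend(this.core);
  }

  listFeatures(): string[] {
    return [...this.core.features];
  }
}

// Usage: two external contributors extend the same partially completed product.
const platform = new Platform(new CoreProduct());
platform.register({ name: "maps", extend: c => c.addFeature("maps-integration") });
platform.register({ name: "payments", extend: c => c.addFeature("payment-gateway") });
console.log(platform.listFeatures());
// ["base-functionality", "maps-integration", "payment-gateway"]
```

The design choice this illustrates is the one described above: the host keeps the core and the registration mechanism under its control, while the value of the overall product grows with each contributor who builds on the tool-kit.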

Idea competitions

This model entails implementing a system that encourages competitiveness among contributors by rewarding successful submissions. Developer competitions such as hackathon events and many crowdsourcing initiatives fall under this category of open innovation. This method provides organizations with inexpensive access to a large quantity of innovative ideas, while also providing a deeper insight into the needs of their customers and contributors.

Customer immersion

While mostly oriented toward the end of the product development cycle, this technique involves extensive customer interaction through employees of the host organization. Companies are thus able to accurately incorporate customer input, while also allowing customers to be more closely involved in the design process and product management cycle.

Collaborative product design and development

Similarly to product platforming, an organization incorporates its contributors into the development of the product. This differs from platforming in that, in addition to providing the framework on which contributors develop, the hosting organization still controls and maintains the eventual products developed in collaboration with its contributors. This method gives organizations more control by ensuring that the correct product is developed as fast as possible, while reducing the overall cost of development. Dr. Henry Chesbrough recently supported this model for open innovation in the optics and photonics industry.

Innovation networks

Similarly to idea competitions, an organization leverages a network of contributors in the design process by offering a reward in the form of an incentive. The difference is that the network of contributors is used to develop solutions to identified problems within the development process, as opposed to new products. Emphasis needs to be placed on assessing organizational capabilities to ensure value creation in open innovation.

In science

In Austria the Ludwig Boltzmann Gesellschaft started a project named "Tell us!" about mental health issues and used the concept of open innovation to crowdsource research questions. The institute also launched the first "Lab for Open Innovation in Science" to teach 20 selected scientists the concept of open innovation over the course of one year.

Innovation intermediaries

Innovation intermediaries are persons or organizations that facilitate innovation by linking multiple independent players in order to encourage collaboration and open innovation, thus strengthening the innovation capacity of companies, industries, regions, or nations. As such, they may be key players for the transformation from closed to open modes of innovation.

Versus closed innovation

The paradigm of closed innovation holds that successful innovation requires control. In particular, a company should control the generation of its own ideas, as well as production, marketing, distribution, servicing, financing, and supporting. What drove this idea is that, in the early twentieth century, academic and government institutions were not involved in the commercial application of science. As a result, it was left to corporations to take the new product development cycle into their own hands. There simply was not time to wait for the scientific community to become more involved in the practical application of science, nor to wait for other companies to start producing the components required in the final product. These companies became relatively self-sufficient, with little communication directed outwards to other companies or universities.

Throughout the years several factors emerged that paved the way for open innovation paradigms:

  • The increasing availability and mobility of skilled workers
  • The growth of the venture capital market
  • External options for ideas sitting on the shelf
  • The increasing capability of external suppliers

These four factors have resulted in a new market for knowledge. Knowledge is no longer proprietary to the company; it resides in employees, suppliers, customers, competitors and universities. If companies do not use the knowledge they have inside, someone else will. Innovation can be generated either by means of closed innovation or by open innovation paradigms. There is an ongoing debate on which paradigm will dominate in the future.

Terminology

Modern research on open innovation is divided into two groups, which have several names but are similar in essence (discovery and exploitation; outside-in and inside-out; inbound and outbound). The common factor among the different names is the direction of innovation: whether from outside the company in, or from inside the company out:

Revealing (non-pecuniary outbound innovation)

This type of open innovation is when a company freely shares its resources with other partners, without an instant financial reward. The source of profit has an indirect nature and is manifested as a new type of business model.

Selling (pecuniary outbound innovation)

In this type of open innovation a company commercialises its inventions and technology through selling or licensing technology to a third party.

Sourcing (non-pecuniary inbound innovation)

This type of open innovation is when companies use freely available external knowledge as a source of internal innovation. Before starting any internal R&D project, a company should monitor the external environment in search of existing solutions; in this case, internal R&D becomes a tool for absorbing external ideas for internal needs.

Acquiring (pecuniary inbound innovation)

In this type of open innovation a company buys innovation from its partners through licensing or other procedures involving monetary reward for external knowledge.

Versus open source

Open source and open innovation might conflict on patent issues. This conflict is particularly apparent when considering technologies that may save lives, or other open-source-appropriate technologies that may assist in poverty reduction or sustainable development. However, open source and open innovation are not mutually exclusive, because participating companies can donate their patents to an independent organization, put them in a common pool, or grant unlimited license use to anybody. Hence some open-source initiatives can merge these two concepts: this is the case for instance for IBM with its Eclipse platform, which the company presents as a case of open innovation, where competing companies are invited to cooperate inside an open-innovation network.

In 1997, Eric Raymond, writing about the open-source software movement, coined the term the cathedral and the bazaar. The cathedral represented the conventional method of employing a group of experts to design and develop software (though it could apply to any large-scale creative or innovative work). The bazaar represented the open-source approach. This idea has been amplified by many writers, notably Don Tapscott and Anthony D. Williams in their book Wikinomics. Eric Raymond himself is also quoted as saying that 'one cannot code from the ground up in bazaar style. One can test, debug, and improve in bazaar style, but it would be very hard to originate a project in bazaar mode'. In the same vein, Raymond is also quoted as saying 'The individual wizard is where successful bazaar projects generally start'.

The next level

In 2014, Chesbrough and Bogers described open innovation as a distributed innovation process based on purposefully managed knowledge flows across enterprise boundaries. Open innovation is hardly aligned with ecosystem theory and is not a linear process. Fasnacht's adaptation for financial services uses open innovation as a basis and includes alternative forms of mass collaboration; this makes it complex, iterative, non-linear, and barely controllable. The increasing interactions between business partners, competitors, suppliers, customers, and communities create constant growth of data and cognitive tools. Open innovation ecosystems bring together the symbiotic forces of all supportive firms from various sectors and businesses that collectively seek to create differentiated offerings. Accordingly, the value captured from a network of multiple actors, combined with the linear value chain of individual firms, creates the new delivery model that Fasnacht calls a "value constellation".

Open innovation ecosystem

The term open innovation ecosystem combines three parts that describe its foundations: open innovation, innovation systems, and business ecosystems.

While James F. Moore researched business ecosystems in manufacturing around a specific business or branch, the open model of innovation combined with ecosystem theory has recently been studied in various industries. Traitler et al. researched it in 2010 and applied it to R&D, stating that global innovation needs alliances based on compatible differences. Innovation partnerships based on sharing knowledge represent a paradigm shift toward accelerating co-development of sustainable innovation. West researched open innovation ecosystems in the software industry, following studies in the food industry that show how a small firm thrived and became a business success by building an ecosystem that shares knowledge, encourages individuals' growth, and embeds trust among participants such as suppliers, alumni chefs and staff, and food writers. Other adoptions include the telecom industry and smart cities.

Ecosystems foster collaboration and accelerate the dissemination of knowledge through the network effect; value creation increases with each actor in the ecosystem, which in turn nurtures the ecosystem as such.
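One common, simplified way to illustrate this network-effect claim (not asserted in the text itself) is Metcalfe's law, under which the number of potential pairwise connections among n actors grows as n(n-1)/2, so each additional actor contributes more new links than the one before. A minimal sketch under that assumption:

```typescript
// Illustrative only: Metcalfe-style model of how potential knowledge-sharing
// links grow with each actor joining an ecosystem. The article does not
// commit to this particular formula.

function potentialConnections(actors: number): number {
  return (actors * (actors - 1)) / 2;
}

for (const n of [2, 5, 10, 20]) {
  console.log(`${n} actors -> ${potentialConnections(n)} potential knowledge-sharing links`);
}
// 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190:
// each new actor adds more links than the previous one did.
```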

A digital platform is essential to make the innovation ecosystem work, as it aligns various actors to achieve a mutually beneficial purpose. Parker explained this in Platform Revolution, describing how networked markets are transforming the economy. Three dimensions increasingly converge, namely e-commerce, social media, and logistics and finance, termed by Daniel Fasnacht the golden triangle of ecosystems.

Business ecosystems are increasingly used to drive digital growth; pioneering firms in China use their technological capabilities and link client data to historical transactions and social behaviour to offer tailored financial services alongside luxury goods or health services. Such an open collaborative environment changes the client experience and adds value for consumers. The drawback is that it also threatens incumbent banks in the U.S. and Europe, owing to their legacy systems and lack of agility and flexibility.

Cooperative

From Wikipedia, the free encyclopedia ...