Parity violation in weak interactions was first postulated by Tsung-Dao Lee and Chen-Ning Yang in 1956 as a solution to the τ-θ puzzle.
They suggested a number of experiments to test if the weak interaction
is invariant under parity. These experiments were performed half a year
later and they confirmed that the weak interactions of the known
particles violate parity.
However, parity symmetry can be restored as a fundamental
symmetry of nature if the particle content is enlarged so that every
particle has a mirror partner. The theory in its modern form was
described in 1991, although the basic idea dates back further.
Mirror particles interact amongst themselves in the same way as
ordinary particles, except where ordinary particles have left-handed
interactions, mirror particles have right-handed interactions. In this
way, it turns out that mirror reflection symmetry can exist as an exact
symmetry of nature, provided that a "mirror" particle exists for every
ordinary particle. Parity can also be spontaneously broken depending on
the Higgs potential.
If parity symmetry is unbroken, particles have the same masses as their mirror partners; if it is spontaneously broken, the mirror partners are lighter or heavier.
Mirror matter, if it exists, would interact with ordinary matter primarily through gravity. This is because the forces between mirror particles are mediated by mirror bosons and, with the exception of the graviton, none of the known bosons can be identical to their mirror partners. The only ways mirror matter can interact with ordinary matter via forces other than gravity are through kinetic mixing of mirror bosons with ordinary bosons or through the exchange of Holdom particles. These interactions can only be very weak. Mirror particles have therefore been suggested as candidates for the inferred dark matter in the universe.
In another context, mirror matter has been proposed to give rise to an effective Higgs mechanism responsible for electroweak symmetry breaking. In such a scenario, mirror fermions have masses on the order of 1 TeV, since they are subject to an additional interaction, while some of the mirror bosons are identical to the ordinary gauge bosons. To emphasize the distinction between this model and the ones above, these mirror particles are usually called katoptrons.
Observational effects
Abundance
Mirror matter could have been diluted to unobservably low densities during the inflation epoch.

Sheldon Glashow has shown that if, at some high energy scale, particles exist that interact strongly with both ordinary and mirror particles, radiative corrections will lead to a mixing between photons and mirror photons.
This mixing has the effect of giving mirror charged particles a very small ordinary electric charge. Another effect of photon–mirror photon mixing is that it induces oscillations between positronium and mirror positronium: positronium could turn into mirror positronium and then decay into mirror photons.
The mixing between photons and mirror photons could be present at tree level in Feynman diagrams, or could arise from quantum corrections due to the presence of particles that carry both ordinary and mirror charges. In the latter case, the quantum corrections have to vanish at the one- and two-loop level, otherwise the predicted value of the kinetic mixing parameter would be larger than experimentally allowed.
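The photon–mirror photon mixing described above is usually written as a kinetic cross term between the two electromagnetic field-strength tensors. A minimal sketch of the term, using conventional symbols from the literature (ε for the dimensionless mixing parameter; these symbols are standard conventions, not taken from this article):

```latex
% Kinetic mixing between the ordinary photon field strength F_{\mu\nu}
% and the mirror photon field strength F'_{\mu\nu}; \epsilon is the
% small dimensionless mixing parameter constrained by experiment.
\mathcal{L}_{\text{mix}} = \frac{\epsilon}{2}\, F^{\mu\nu} F'_{\mu\nu}
```

Diagonalizing the kinetic terms then gives mirror charged particles an effective ordinary electric charge proportional to ε, which is why this parameter is so tightly constrained by experiment.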
An experiment to measure this effect is currently being planned.
Dark matter
If
mirror matter does exist in large abundances in the universe and if it
interacts with ordinary matter via photon-mirror photon mixing, then
this could be detected in dark matter direct detection experiments such
as DAMA/NaI and its successor DAMA/LIBRA.
In fact, it is one of the few dark matter candidates which can explain
the positive DAMA/NaI dark matter signal whilst still being consistent
with the null results of other dark matter experiments.
Electromagnetic effects
Mirror matter may also be detected in electromagnetic field penetration experiments and there would also be consequences for planetary science and astrophysics.
GZK puzzle
Mirror matter could also be responsible for the GZK puzzle. Topological defects in the mirror sector could produce mirror neutrinos which can oscillate to ordinary neutrinos. Another possible way to evade the GZK bound is via neutron–mirror neutron oscillations.
Gravitational effects
If
mirror matter is present in the universe with sufficient abundance then
its gravitational effects can be detected. Because mirror matter is
analogous to ordinary matter, it is then to be expected that a fraction
of the mirror matter exists in the form of mirror galaxies, mirror
stars, mirror planets etc. These objects can be detected using
gravitational microlensing.
One would also expect that some fraction of stars have mirror objects
as their companion. In such cases one should be able to detect periodic Doppler shifts in the spectrum of the star. There are some hints that such effects may already have been observed.
Open science data, or open research data, is a type of open data focused on making observations and results of scientific activities available for anyone to analyze and reuse. A major purpose of the drive for open data is to allow the verification of scientific claims, by allowing others to examine the reproducibility of results, and to allow data from many sources to be integrated to yield new knowledge. While the idea of open science data has been actively promoted since the 1950s, the rise of the Internet has significantly lowered the cost and time required to publish or obtain data.
History
The concept of open access to scientific data was institutionally established with the formation of the World Data Center system (now the World Data System), in preparation for the International Geophysical Year of 1957–1958. The International Council of Scientific Unions (now the International Council for Science)
established several World Data Centers to minimize the risk of data
loss and to maximize data accessibility, further recommending in 1955
that data be made available in machine-readable form.
The first initiative to create an electronic bibliographic database of open access data was the Educational Resources Information Center (ERIC) in 1966. In the same year MEDLINE was created – a free-access online database managed by the National Library of Medicine and the National Institutes of Health (USA) with bibliographical citations from journals in the biomedical area, which would later become PubMed, currently with over 14 million citations.
In 1995 the US Global Change Data and Information System (GCDIS) put its position clearly in
On the Full and Open Exchange of Scientific Data (A publication of the Committee on Geophysical and Environmental Data - National Research Council):
"The Earth's atmosphere, oceans,
and biosphere form an integrated system that transcends national
boundaries. To understand the elements of the system, the way they
interact, and how they have changed with time, it is necessary to
collect and analyze environmental data from all parts of the world.
Studies of the global environment require international collaboration
for many reasons:
to address global issues, it is essential to have global data sets and products derived from these data sets;
it is more efficient and cost-effective for each nation to share its
data and information than to collect everything it needs independently;
and
the implementation of effective policies addressing issues of the
global environment requires the involvement from the outset of nearly
all nations of the world.
International programs for global change research and environmental
monitoring crucially depend on the principle of full and open data
exchange (i.e., data and information are made available without
restriction, on a non-discriminatory basis, for no more than the cost of
reproduction and distribution)."
The last phrase highlights the traditional cost of disseminating information by print and post. It is the removal of this cost through the Internet that has made data vastly easier to disseminate technically. It has also become correspondingly cheaper to create, sell, and control many data resources, which has led to the current concerns over non-open data.
More recent uses of the term include:
SAFARI 2000 (South Africa, 2001) used a license informed by ICSU and NASA policies
The human genome (Kent, 2002)
An Open Data Consortium on geospatial data (2003)
Manifesto for Open Chemistry (Murray-Rust and Rzepa, 2004)
Presentations to JISC and OAI under the title "open data" (Murray-Rust, 2005)
Science Commons launch (2004)
First Open Knowledge Forums (London, UK) run by the Open Knowledge Foundation (London UK) on open data in relation to civic information and geodata (February and April 2005)
The Petition for Open Data in Crystallography is launched by the Crystallography Open Database Advisory Board (2005)
XML Conference & Exposition 2005 (Connolly 2005)
SPARC Open Data mailing list (2005)
First draft of the Open Knowledge Definition explicitly references "Open Data" (2005)
XTech (Dumbill, 2005), (Bray and O'Reilly 2006)
In 2004, the Science Ministers of all nations of the OECD
(Organisation for Economic Co-operation and Development), which
includes most developed countries of the world, signed a declaration
which essentially states that all publicly funded archive data should be
made publicly available.
Following a request and an intense discussion with data-producing
institutions in member states, the OECD published in 2007 the OECD Principles and Guidelines for Access to Research Data from Public Funding as a soft-law recommendation.
In 2005 Edd Dumbill introduced an "Open Data" theme at XTech.
In 2006 Science Commons ran a two-day conference in Washington whose primary topic could be described as open data. It was reported that the amount of micro-protection of data (e.g. by license) in areas such as biotechnology was creating a tragedy of the anticommons, in which the costs of obtaining licenses from a large number of owners made it uneconomic to do research in the area.
In 2007 SPARC and Science Commons announced a consolidation and enhancement of their author addenda.
In 2007 the OECD
(Organisation for Economic Co-operation and Development) published the
Principles and Guidelines for Access to Research Data from Public
Funding. The Principles state that:
Access to research data increases the returns from public
investment in this area; reinforces open scientific inquiry; encourages
diversity of studies and opinion; promotes new areas of work and
enables the exploration of topics not envisioned by the initial
investigators.
In 2010 the Panton Principles launched, advocating open data in science and setting out four principles with which providers must comply for their data to be considered open.
In 2011 LinkedScience.org was launched to realize the approach of the Linked Open Science to openly share and interconnect scientific assets like datasets, methods, tools and vocabularies.
In 2012, the Royal Society published a major report, "Science as an Open Enterprise", advocating open scientific data and considering its benefits and requirements.
In 2013 the G8 Science Ministers released a statement supporting a set of principles for open scientific research data.
In 2015 the World Data System of the International Council for Science adopted a new set of Data Sharing Principles
to embody the spirit of 'open science'. These Principles are in line
with data policies of national and international initiatives and they
express core ethical commitments operationalized in the WDS
Certification of trusted data repositories and services.
The definition given by the Budapest Open Access Initiative (2002) reads:
By "open access" to this literature, we mean its free availability on
the public internet, permitting any users to read, download, copy,
distribute, print, search, or link to the full texts of these articles,
crawl them for indexing, pass them as data to software, or use them for
any other lawful purpose, without financial, legal, or technical
barriers other than those inseparable from gaining access to the
internet itself. The only constraint on reproduction and distribution,
and the only role for copyright in this domain, should be to give
authors control over the integrity of their work and the right to be
properly acknowledged and cited.
The logic of the declaration permits re-use of the data although the
term "literature" has connotations of human-readable text and can imply a
scholarly publication process. In Open Access discourse the term
"full-text" is often used which does not emphasize the data contained
within or accompanying the publication.
Some Open Access publishers do not require the authors to assign
copyright and the data associated with these publications can normally
be regarded as Open Data. Some publishers have Open Access strategies
where the publisher requires assignment of the copyright and where it is
unclear that the data in publications can be truly regarded as Open
Data.
The ALPSP and STM publishers have issued a statement about the desirability of making data freely available:
Publishers recognise that in many disciplines data itself, in various
forms, is now a key output of research. Data searching and mining tools
permit increasingly sophisticated use of raw data. Of course, journal
articles provide one ‘view’ of the significance and interpretation of
that data – and conference presentations and informal exchanges may
provide other ‘views’ – but data itself is an increasingly important
community resource. Science is best advanced by allowing as many
scientists as possible to have access to as much prior data as possible;
this avoids costly repetition of work, and allows creative new
integration and reworking of existing data.
and
We believe that, as a general principle, data sets, the raw data
outputs of research, and sets or sub-sets of that data which are
submitted with a paper to a journal, should wherever possible be made
freely accessible to other scholars. We believe that the best practice
for scholarly journal publishers is to separate supporting data from the
article itself, and not to require any transfer of or ownership in such
data or data sets as a condition of publication of the article in
question.
This statement has, however, had little effect on the open availability of primary data related to publications in journals of the ALPSP and STM members: data tables provided by the authors as supplements to a paper are often still available to subscribers only.
Relation to peer review
In
an effort to address issues with the reproducibility of research
results, some scholars are asking that authors agree to share their raw
data as part of the scholarly peer review process.
As far back as 1962, for example, a number of psychologists have
attempted to obtain raw data sets from other researchers, with mixed
results, in order to reanalyze them. A recent attempt resulted in only
seven data sets out of fifty requests. The notion of obtaining, let
alone requiring, open data as a condition of peer review remains
controversial.
To make sense of scientific data, they must be analysed. In all but the simplest cases, this is done by software. The extensive use of software poses problems for the reproducibility of research. To keep research reproducible, it is necessary to publish not only all data, but also the source code of all software used, and all the parametrization used in running this software. Presently, these requirements are rarely met. Ways to come closer to reproducible scientific computation are discussed under the catchword "open research computation".
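As a concrete illustration of publishing "all the parametrization" alongside data and code, the sketch below (standard-library Python; the function name and record fields are illustrative, not any standard) bundles the input-data hash, the analysis parameters, and the software version into a machine-readable provenance record that could be published next to the results:

```python
import hashlib
import json
import platform

def make_provenance_record(data_bytes, parameters, software_version):
    """Bundle what a reader needs to re-run an analysis: the exact
    parameters, the software version, the interpreter version, and a
    hash that identifies the input data set."""
    return {
        "data_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "parameters": parameters,
        "software_version": software_version,
        "python_version": platform.python_version(),
    }

# Example: record the settings used for a (hypothetical) analysis run.
record = make_provenance_record(
    data_bytes=b"temperature,anomaly\n2000,0.39\n2001,0.54\n",
    parameters={"smoothing_window": 5, "baseline": "1951-1980"},
    software_version="analysis-tool 1.2.0",
)
print(json.dumps(record, indent=2))
```

Publishing such a sidecar file with each result does not make the computation reproducible by itself, but it removes the guesswork about which settings produced which output.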
Clear labeling of the licensing terms is a key component of open data, and license icons are used for that purpose.
Open Data is the idea that some data should be freely
available to everyone to use and republish as they wish, without
restrictions from copyright, patents or other mechanisms of control.
The goals of the open-source data movement are similar to those of
other "open(-source)" movements such as open-source software, hardware, open content, open specifications, open education, open educational resources, open government, open knowledge, open access, open science, and the open web. Paradoxically, the growth of the open data movement is paralleled by a rise in intellectual property rights. The philosophy behind open data has been long established (for example in the Mertonian tradition of science), but the term "open data" itself is recent, gaining popularity with the rise of the Internet and World Wide Web and, especially, with the launch of open-data government initiatives such as Data.gov, Data.gov.uk and Data.gov.in.
Open data can also be linked data; when it is, it is linked open data.
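As an illustrative sketch of what linked open data can look like in practice, the snippet below builds a minimal JSON-LD dataset description (the dataset name and URLs are hypothetical; schema.org is a real, widely used vocabulary). Linking out to shared vocabularies and to other resources by URL is what turns open data into linked open data:

```python
import json

# A minimal, hypothetical JSON-LD record: the dataset description links
# out to a shared vocabulary (@context), a machine-readable license URL,
# and another representation of the same dataset (sameAs).
record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "City tree inventory",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "sameAs": "https://example.org/datasets/tree-inventory",
}
print(json.dumps(record, indent=2))
```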
One of the most important forms of open data is open government data
(OGD), which is a form of open data created by ruling government
institutions. Open government data's importance stems from its being a part of citizens' everyday lives, down to the most routine and mundane tasks that are seemingly far removed from government.
The abbreviation FAIR/O data is sometimes used to indicate that the dataset or database in question complies with the principles of FAIR data and also carries an explicit data‑capable open license.
Overview
The
concept of open data is not new, but a formalized definition is
relatively new. Conceptually, open data as a phenomenon denotes that
governmental data should be available to anyone with a possibility of
redistribution in any form without any copyright restriction.
Another definition is the Open Definition, which can be summarized in the statement that "A piece of data is open if anyone is free to use, reuse, and redistribute it – subject only, at most, to the requirement to attribute and/or share-alike." Other definitions, including the Open Data Institute's "open data is data that anyone can access, use or share", have an accessible short form but refer back to the formal definition.
Open data may include non-textual material such as maps, genomes, connectomes, chemical compounds, mathematical and scientific formulae, medical data and practice, bioscience and biodiversity. Problems often arise because these are
commercially valuable or can be aggregated into works of value. Access
to, or re-use of, the data is controlled by organisations, both public
and private. Control may be through access restrictions, licenses, copyright, patents
and charges for access or re-use. Advocates of open data argue that
these restrictions are against the common good and that these data
should be made available without restriction or fee. In addition, it is
important that the data are re-usable without requiring further
permission, though the types of re-use (such as the creation of
derivative works) may be controlled by a license.
A typical depiction of the need for open data:
Numerous scientists have pointed
out the irony that right at the historical moment when we have the
technologies to permit worldwide availability and distributed processing of
scientific data, broadening collaboration and accelerating the pace and
depth of discovery... we are busy locking up that data and preventing the use of correspondingly advanced technologies on knowledge.
— John Wilbanks, VP Science, Creative Commons
Creators of data often do not consider the need to state the
conditions of ownership, licensing and re-use; instead presuming that
not asserting copyright puts the data into the public domain.
For example, many scientists do not regard the published data arising
from their work to be theirs to control and consider the act of
publication in a journal to be an implicit release of data into the commons. However, the lack of a license makes it difficult to determine the status of a data set
and may restrict the use of data offered in an "Open" spirit. Because
of this uncertainty it is also possible for public or private
organizations to aggregate said data, claim that it is protected by copyright and then resell it.
The issue of indigenous knowledge (IK) poses a great challenge in terms of capture, storage and distribution. Many societies in developing countries lack the technical capacity for managing IK.
At his presentation at the XML 2005 conference, Connolly displayed these two quotations regarding open data:
"I want my data back." (Jon Bosak circa 1997)
"I've long believed that customers of any application own the data they enter into it." (Jeffrey Veen, referring to his own heart-rate data)
Major sources
The State of Open Data, a 2019 book from African Minds
Open data can come from any source. This section lists some of the
fields that publish (or at least discuss publishing) a large amount of
open data.
While the open-science-data movement long predates the Internet,
the availability of fast, ubiquitous networking has significantly
changed the context of Open science data, since publishing or obtaining data has become much less expensive and time-consuming.
The Human Genome Project was a major initiative that exemplified the power of open data. It was built upon the so-called Bermuda Principles,
stipulating that: "All human genomic sequence information … should be
freely available and in the public domain in order to encourage research
and development and to maximize its benefit to society'.
More recent initiatives such as the Structural Genomics Consortium have
illustrated that the open data approach can also be used productively
within the context of industrial R&D.
Examples of open data in science:
The Dataverse Network Project – archival repository software promoting data sharing, persistent data citation, and reproducible research
data.uni-muenster.de – Open data about scientific artifacts from the University of Muenster, Germany. Launched in 2011.
linkedscience.org/data – Open scientific datasets encoded as Linked Data. Launched in 2011, ended 2018.
systemanaturae.org – Open scientific datasets related to wildlife classified by animal species. Launched in 2015.
There are a range of different arguments for government open data.
For example, some advocates contend that making government information
available to the public as machine readable open data can facilitate
government transparency, accountability and public participation. "Open
data can be a powerful force for public accountability—it can make
existing information easier to analyze, process, and combine than ever
before, allowing a new level of public scrutiny."
Governments that enable public viewing of data can help citizens engage
within the governmental sectors and "add value to that data."
Some make the case that opening up official information can
support technological innovation and economic growth by enabling third
parties to develop new kinds of digital applications and services.
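As a small illustration of why machine readability matters here, the sketch below (standard-library Python; the agencies and figures are made up) computes per-agency totals directly from a CSV extract of the kind many open-data portals publish:

```python
import csv
import io

# A small, made-up extract in the CSV form many open-data portals publish.
raw = """agency,year,amount_usd
Parks,2022,1250000
Transit,2022,8700000
Parks,2023,1310000
"""

# Because the data is machine readable, totals by agency can be computed
# directly instead of being re-keyed by hand from a PDF report.
totals = {}
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["agency"]] = totals.get(row["agency"], 0) + int(row["amount_usd"])

print(totals)  # {'Parks': 2560000, 'Transit': 8700000}
```

The same pattern scales to real portal downloads: swap the inline string for a file handle and the analysis code is unchanged.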
Several national governments have created websites to distribute a portion of the data they collect. Some municipal governments have also treated open data as a collaborative project, creating and organizing a culture of open government data.
Additionally, other levels of government have established open data websites. There are many government entities pursuing open data in Canada. Data.gov lists the sites of a total of 40 US states and 46 US cities and counties that provide open data, e.g. the states of Maryland and California and New York City.
At the international level, the United Nations has an open data
website that publishes statistical data from member states and UN
agencies, and the World Bank published a range of statistical data relating to developing countries. The European Commission has created two portals for the European Union: the EU Open Data Portal which gives access to open data from the EU institutions, agencies and other bodies and the PublicData portal that provides datasets from local, regional and national public bodies across Europe.
Italy was the first country to release standard processes and guidelines under a Creative Commons license for widespread use in public administration. The open model, called the Open Data Management Cycle (ODMC), was adopted in several regions, such as Veneto and Umbria, and in major cities such as Reggio Calabria and Genoa.
In October 2015, the Open Government Partnership launched the International Open Data Charter,
a set of principles and best practices for the release of governmental
open data formally adopted by seventeen governments of countries, states
and cities during the OGP Global Summit in Mexico.
In non-profit organizations
Many non-profit organizations offer more or less open access to their data, as long as it does not undermine the privacy rights of their users, members or third parties. In comparison to for-profit corporations, they do not seek to monetize their data. OpenNWT launched a website offering open data on elections. CIAT offers open data to anybody who is willing to conduct big-data analytics in order to enhance the benefit of international agricultural research. DBLP, which is owned by the non-profit organization Dagstuhl, offers its database of scientific publications from computer science as open data. Non-profit hospitality exchange services offer trustworthy teams of scientists access to their anonymized data for the publication of insights to the benefit of humanity. Before becoming a for-profit corporation in 2011, Couchsurfing offered four research teams access to its social networking data. In 2015, the non-profit hospitality exchange services Bewelcome and Warm Showers provided their data for public research.
National policies and strategies
Germany launched an official strategy in July 2021.
Arguments for and against
The debate on open data is still evolving. The best open government applications seek to empower citizens, to help small businesses,
or to create value in some other positive, constructive way. Opening
government data is only a waypoint on the road to improving education,
improving government, and building tools to solve other real world
problems. While many arguments have been made categorically,
the following discussion of arguments for and against open data
highlights that these arguments often depend highly on the type of data
and its potential uses.
Arguments made on behalf of open data include the following:
Sponsors of research do not get full value unless the resulting data are freely available.
Restrictions on data re-use create an anticommons.
Data are required for the smooth process of running communal human activities and are an important enabler of socio-economic development (health care, education, economic productivity, etc.).
In scientific research, the rate of discovery is accelerated by better access to data.
Making data open helps combat "data rot" and ensure that scientific research data are preserved over time.
Statistical literacy benefits from open data. Instructors can use
locally relevant data sets to teach statistical concepts to their
students.
It is generally held that factual data cannot be copyrighted.
However, publishers frequently add copyright statements (often
forbidding re-use) to scientific data accompanying publications. It may
be unclear whether the factual data embedded in full text are part of
the copyright.
While the human abstraction of facts from paper publications is
normally accepted as legal there is often an implied restriction on the
machine extraction by robots.
Unlike open access, where groups of publishers have stated their concerns, open data is normally challenged by individual institutions. Their arguments have been discussed less in public discourse and there are fewer quotes to rely on at this time.
Arguments against making all data available as open data include the following:
Government funding may not be used to duplicate or challenge the activities of the private sector (e.g. PubChem).
Governments have to be accountable for the efficient use of
taxpayers' money: if public funds are used to aggregate the data and if
the data will bring commercial (private) benefits to only a small number
of users, the users should reimburse governments for the cost of
providing the data.
Open data may lead to exploitation of, and rapid publication of
results based on, data pertaining to developing countries by rich and
well-equipped research institutes, without any further involvement
and/or benefit to local communities (helicopter research);
similarly to the historical open access to tropical forests that has
led to the disappropriation ("Global Pillage") of plant genetic
resources from developing countries.
The revenue earned by publishing data can be used to cover the costs
of generating and/or disseminating the data, so that the dissemination
can continue indefinitely.
The revenue earned by publishing data permits non-profit
organisations to fund other activities (e.g. learned society publishing
supports the society).
The government gives specific legitimacy for certain organisations to recover costs (NIST in US, Ordnance Survey in UK).
Privacy concerns may require that access to data is limited to specific users or to sub-sets of the data.
Collecting, 'cleaning', managing and disseminating data are
typically labour- and/or cost-intensive processes – whoever provides
these services should receive fair remuneration for providing those
services.
Sponsors do not get full value unless their data is used
appropriately – sometimes this requires quality management,
dissemination and branding efforts that can best be achieved by charging
fees to users.
Often, targeted end-users cannot use the data without additional
processing (analysis, apps etc.) – if anyone has access to the data,
none may have an incentive to invest in the processing required to make
data useful (typical examples include biological, medical, and
environmental data).
There is no control over the secondary use (aggregation) of open data.
Relation to other open activities
The goals of the Open Data movement are similar to those of other "Open" movements.
Open access
is concerned with making scholarly publications freely available on the
internet. In some cases, these articles include open datasets as well.
Open specifications are documents describing file types or protocols, where the documents themselves are openly licensed. Usually these specifications are primarily meant to improve interoperability between different software handling the same file types or protocols, although monopolists forced by law to publish specifications may do so in ways that hinder this.
Open content is concerned with making resources aimed at a human audience (such as prose, photos, or videos) freely available.
Open knowledge. Open Knowledge International argues for openness in a range of issues including, but not limited to, those of open data. It covers (a) information that is scientific, historical, geographic or otherwise, (b) content such as music, films and books, and (c) government and other administrative information. Open data is included within the scope of the Open Knowledge Definition, which is alluded to in Science Commons' Protocol for Implementing Open Access Data.
Open notebook science
refers to the application of the Open Data concept to as much of the
scientific process as possible, including failed experiments and raw
experimental data.
Open-source software is concerned with the open-source licenses under which computer programs can be distributed and is not normally concerned primarily with data.
Open educational resources
are freely accessible, openly licensed documents and media that are
useful for teaching, learning, and assessing as well as for research
purposes.
Open research/open science/open science data (linked open science) means an approach to open and interconnect scientific assets like data, methods and tools with linked data techniques to enable transparent, reproducible and transdisciplinary research.
Open-GLAM (Galleries, Library, Archives, and Museums) is an initiative and network that supports exchange and collaboration between cultural institutions that support open access to their digitised collections. The GLAM-Wiki Initiative
helps cultural institutions share their openly licensed resources with
the world through collaborative projects with experienced Wikipedia
editors. Open Heritage Data is associated with Open GLAM, as openly
licensed data in the heritage sector is now frequently used in research,
publishing, and programming, particularly in the Digital Humanities.
Funders' mandates
Several
funding bodies which mandate Open Access also mandate Open Data. A good
expression of requirements (truncated in places) is given by the Canadian Institutes of Health Research (CIHR):
to deposit bioinformatics, atomic and molecular coordinate data,
experimental data into the appropriate public database immediately upon
publication of research results.
to retain original data sets for a minimum of five years after the end of the grant. This applies to all data, whether published or not.
Other bodies active in promoting the deposition of data as well as full text include the Wellcome Trust. An academic paper published in 2013 advocated that Horizon 2020
(the science funding mechanism of the EU) should mandate that funded
projects hand in their databases as "deliverables" at the end of the
project, so that they can be checked for third-party usability and then
shared.
Non-open data
Several mechanisms restrict access to or reuse of data (and several reasons for doing this are given above). They include:
making data available for a charge.
compilation in databases or websites to which only registered members or customers can have access.
use of a proprietary or closed technology or encryption which creates a barrier for access.
copyright statements claiming to forbid (or obfuscating) re-use of the data, including the use of "no derivatives" requirements.
patents forbidding re-use of the data (for example, the three-dimensional
coordinates of some experimental protein structures have been patented).
restriction of robots on websites, with preference given to certain search engines.
Open innovation is a term used to promote an information age mindset toward innovation that runs counter to the secrecy and silo mentality
of traditional corporate research labs. The benefits and driving forces
behind increased openness have been noted and discussed as far back as
the 1960s, especially as it pertains to interfirm cooperation in
R&D.
Use of the term 'open innovation' in reference to the increasing
embrace of external cooperation in a complex world has been promoted in
particular by Henry Chesbrough, adjunct professor and faculty director of the Center for Open Innovation of the Haas School of Business at the University of California, and Maire Tecnimont Chair of Open Innovation at Luiss.
The term was originally defined as "a paradigm that assumes
that firms can and should use external ideas as well as internal ideas,
and internal and external paths to market, as the firms look to advance
their technology".
More recently, it has been defined as "a distributed innovation process based
on purposively managed knowledge flows across organizational
boundaries, using pecuniary and non-pecuniary mechanisms in line with
the organization's business model". This more recent definition acknowledges that open innovation is not solely firm-centric: it also includes creative consumers and communities of user innovators.
The boundaries between a firm and its environment have become more
permeable; innovations can easily transfer inward and outward between
firms and other firms and between firms and creative consumers,
resulting in impacts at the level of the consumer, the firm, an
industry, and society.
Because innovations tend to be produced by outsiders and founders in startups,
rather than existing organizations, the central idea behind open
innovation is that, in a world of widely distributed knowledge,
companies cannot afford to rely entirely on their own research, but
should instead buy or license processes or inventions (i.e. patents)
from other companies. This is termed inbound open innovation.
In addition, internal inventions not being used in a firm's business
should be taken outside the company (e.g. through licensing, joint
ventures or spin-offs). This is called outbound open innovation.
The open innovation paradigm can be interpreted to go beyond just
using external sources of innovation such as customers, rival
companies, and academic institutions, and can be as much a change in the
use, management, and employment of intellectual property as it is in the technical and research driven generation of intellectual property.
In this sense, it is understood as the systematic encouragement and
exploration of a wide range of internal and external sources for
innovative opportunities, the integration of this exploration with firm
capabilities and resources, and the exploitation of these opportunities
through multiple channels.
In addition, because open innovation draws on a wide range of internal
and external sources, it can be analyzed not only at the level of the
company but also at the inter-organizational, intra-organizational, and
extra-organizational levels, as well as at the industrial, regional, and
societal levels (Bogers et al., 2017).
Advantages
Open innovation offers several benefits to companies operating on a program of global collaboration:
Reduced cost of conducting research and development
Potential for improvement in development productivity
Incorporation of customers early in the development process
Increase in accuracy for market research and customer targeting
Improved performance in planning and delivering projects
Potential for synergism between internal and external innovations
Potential for viral marketing
Enhanced digital transformation
Potential for completely new business models
Leveraging of innovation ecosystems
Disadvantages
Implementing a model of open innovation is naturally associated with a number of risks and challenges, including:
Possibility of revealing information not intended for sharing
Potential for the hosting organization to lose their competitive advantage as a consequence of revealing intellectual property
Increased complexity of controlling innovation and regulating how contributors affect a project
Devising a means to properly identify and incorporate external innovation
Realigning innovation strategies to extend beyond the firm in order to maximize the return from external innovation
Models
Government driven
In
the UK, knowledge transfer partnerships (KTP) are a funding mechanism
encouraging the partnership between a firm and a knowledge-based
partner.
A KTP is a collaboration program between a knowledge-based partner
(i.e. a research institution), a company partner and one or more
associates (i.e. recently qualified persons such as graduates). KTP
initiatives aim to deliver significant improvement in business partners’
profitability as a direct result of the partnership through enhanced
quality and operations, increased sales and access to new markets. At
the end of their KTP project, the three actors involved have to prepare a
final report that describes how the KTP initiative supported the
achievement of the project's innovation goals.
Product platforming
This
approach involves developing and introducing a partially completed
product, for the purpose of providing a framework or tool-kit for
contributors to access, customize, and exploit. The goal is for the
contributors to extend the platform product's functionality while
increasing the overall value of the product for everyone involved.
Readily available software frameworks such as a software development kit (SDK), or an application programming interface (API) are common examples of product platforms. This approach is common in markets with strong network effects
where demand for the product implementing the framework (such as a
mobile phone, or an online application) increases with the number of
developers that are attracted to use the platform tool-kit. The high
scalability of platforming often results in an increased complexity of
administration and quality assurance.
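The platform/contributor relationship described above can be illustrated with a toy sketch (all names here are hypothetical, not from any real SDK): the host product exposes a small extension point, and third-party contributors register plugins that extend its functionality without touching the core.

```python
# Toy sketch of product platforming (hypothetical API, for illustration only).
# The host exposes an extension point; contributors plug in new capabilities.

class Platform:
    """Minimal host product with a registry of contributor plugins."""

    def __init__(self):
        self._plugins = {}

    def register(self, name, handler):
        # Contributors extend the product without modifying its core.
        self._plugins[name] = handler

    def run(self, name, data):
        # The host dispatches work to whichever plugin was registered.
        return self._plugins[name](data)


platform = Platform()

# A third-party contributor adds a capability the host never shipped.
platform.register("shout", lambda text: text.upper() + "!")

print(platform.run("shout", "open innovation"))  # OPEN INNOVATION!
```

Each new plugin makes the platform more valuable to every user, which is the network effect the paragraph above refers to.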
Idea competitions
This
model entails implementing a system that encourages competitiveness
among contributors by rewarding successful submissions. Developer
competitions such as hackathon events and many crowdsourcing
initiatives fall under this category of open innovation. This method
provides organizations with inexpensive access to a large quantity of
innovative ideas, while also providing a deeper insight into the needs
of their customers and contributors.
Customer immersion
While
mostly oriented toward the end of the product development cycle, this
technique involves extensive customer interaction through employees of
the host organization. Companies are thus able to accurately incorporate
customer input, while also allowing them to be more closely involved in
the design process and product management cycle.
Collaborative product design and development
Similar
to product platforming, an organization incorporates its contributors
into the development of the product. This differs from platforming in
the sense that, in addition to the provision of the framework on which
contributors develop, the hosting organization still controls and
maintains the eventual products developed in collaboration with their
contributors. This method gives organizations more control by ensuring
that the correct product is developed as fast as possible, while
reducing the overall cost of development. Dr. Henry Chesbrough recently supported this model for open innovation in the optics and photonics industry.
Innovation networks
Similar
to idea competitions, an organization leverages a network of
contributors in the design process by offering a reward as an
incentive.
The difference is that the network of contributors is
used to develop solutions to identified problems within the development
process, as opposed to new products. Emphasis needs to be placed on assessing organisational capabilities to ensure value creation in open innovation.
In science
In Austria the Ludwig Boltzmann Gesellschaft started a project named "Tell us!" about mental health issues and used the concept of open innovation to crowdsource research questions.
The institute also launched the first "Lab for Open Innovation in
Science" to teach 20 selected scientists the concept of open innovation
over the course of one year.
Innovation intermediaries
Innovation
intermediaries are persons or organizations that facilitate innovation
by linking multiple independent players in order to encourage
collaboration and open innovation, thus strengthening the innovation
capacity of companies, industries, regions, or nations. As such, they may be key players for the transformation from closed to open modes of innovation.
Versus closed innovation
The paradigm of closed innovation
holds that successful innovation requires control. Particularly, a
company should control the generation of their own ideas, as well as
production, marketing, distribution, servicing, financing, and
supporting. What drove this idea is that, in the early twentieth
century, academic and government institutions were not involved in the
commercial application of science. As a result, it was left up to other
corporations to take the new product development
cycle into their own hands. There just was not the time to wait for the
scientific community to become more involved in the practical
application of science. There also was not enough time to wait for other
companies to start producing some of the components that were required
in their final product. These companies became relatively
self-sufficient, with little communication directed outwards to other
companies or universities.
Throughout the years several factors emerged that paved the way for open innovation paradigms:
The increasing availability and mobility of skilled workers
The growth of the venture capital market
External options for ideas sitting on the shelf
The increasing capability of external suppliers
These four factors have resulted in a new market of knowledge.
Knowledge is no longer proprietary to the company. It resides in
employees, suppliers, customers, competitors and universities. If
companies do not use the knowledge they have inside, someone else will.
Innovation can be generated either by means of closed innovation or by
open innovation paradigms. There is an ongoing debate on which paradigm will dominate in the future.
Terminology
Modern
research on open innovation is divided into two groups, which go by
several names but are similar in essence (discovery and
exploitation; outside-in and inside-out; inbound and
outbound). The common factor for different names is the direction of
innovation, whether from outside the company in, or from inside the
company out:
Revealing (non-pecuniary outbound innovation)
This type of open innovation is when a company freely shares its
resources with other partners, without an instant financial reward. The
source of profit has an indirect nature and is manifested as a new type
of business model.
Selling (pecuniary outbound innovation)
In this type of open innovation a company commercialises its
inventions and technology through selling or licensing technology to a
third party.
Sourcing (non-pecuniary inbound innovation)
This type of open innovation is when companies use freely available
external knowledge as a source of internal innovation. Before starting
any internal R&D project, a company should monitor the external
environment in search of existing solutions; thus, in this case,
internal R&D becomes a tool to absorb external ideas for internal
needs.
Acquiring (pecuniary inbound innovation)
In this type of open innovation a company buys innovation from
its partners through licensing or other procedures involving monetary
reward for external knowledge.
Versus open source
Open source
and open innovation might conflict on patent issues. This conflict is
particularly apparent when considering technologies that may save lives,
or other open-source-appropriate technologies that may assist in poverty reduction or sustainable development. However, open source
and open innovation are not mutually exclusive, because participating
companies can donate their patents to an independent organization, put
them in a common pool, or grant unlimited license use to anybody. Hence
some open-source initiatives can merge these two concepts: this is the
case for instance for IBM with its Eclipse
platform, which the company presents as a case of open innovation,
where competing companies are invited to cooperate inside an
open-innovation network.
In 1997, Eric Raymond, writing about the open-source software movement, coined the term the cathedral and the bazaar.
The cathedral represented the conventional method of employing a group
of experts to design and develop software (though it could apply to any
large-scale creative or innovative work). The bazaar represented the
open-source approach. This idea has been amplified by a lot of people,
notably Don Tapscott and Anthony D. Williams in their book Wikinomics.
Eric Raymond himself is also quoted as saying that 'one cannot code
from the ground up in bazaar style. One can test, debug, and improve in
bazaar style, but it would be very hard to originate a project in bazaar
mode'. In the same vein, Raymond is also quoted as saying 'The
individual wizard is where successful bazaar projects generally start'.
The next level
In
2014, Chesbrough and Bogers describe open innovation as a distributed
innovation process that is based on purposefully managed knowledge flows
across enterprise boundaries.
Open innovation here is aligned with ecosystem theory and is not a
linear process. Fasnacht's adaptation for financial services uses open
innovation as a basis and includes alternative forms of mass
collaboration, which makes it complex, iterative, non-linear, and
barely controllable.
The increasing interactions between business partners, competitors,
suppliers, customers, and communities create a constant growth of data
and cognitive tools. Open innovation ecosystems bring together the
symbiotic forces of all supportive firms from various sectors and
businesses that collectively seek to create differentiated offerings.
Accordingly, the value captured from a network of multiple actors,
combined with the linear value chain of individual firms, creates the
new delivery model that Fasnacht calls a "value constellation".
Open innovation ecosystem
The term Open Innovation Ecosystem
consists of three parts, describing its foundations in the approaches
of open innovation, innovation systems, and business ecosystems.
While James F. Moore
researched business ecosystems in manufacturing around a specific
business or branch, the open model of innovation with the ecosystem
theory was recently studied in various industries. Traitler et al.
researched it in 2010 and applied it to R&D,
stating that global innovation needs alliances based on compatible
differences. Innovation partnerships based on sharing knowledge
represent a paradigm shift toward accelerating co-development of
sustainable innovation. West researched open innovation ecosystems in the software industry,
following studies in the food industry that show how a small firm
thrived and became a business success based on building an ecosystem
that shares knowledge, encourages individuals' growth, and embeds trust
among participants such as suppliers, alumni chef and staff, and food
writers. Other adoptions include the telecom industry or smart cities.
Ecosystems foster collaboration and accelerate the dissemination of knowledge through the network effect; value creation increases with each actor in the ecosystem, which in turn nurtures the ecosystem itself.
A digital platform is essential to make the innovation ecosystem
work as it aligns various actors to achieve a mutually beneficial
purpose. Parker explained this with the platform revolution and described
how networked markets are transforming the economy.
Basically, there are three dimensions that increasingly converge:
e-commerce, social media, and logistics and finance, termed by Daniel
Fasnacht the golden triangle of ecosystems.
Business ecosystems are increasingly used and drive digital
growth, and pioneering firms in China use their technological
capabilities and link client data to historical transactions and social
behaviour to offer tailored financial services alongside luxury goods or
health services. Such an open collaborative environment changes the client
experience and adds value for consumers. The drawback is that it also
threatens incumbent banks in the U.S. and Europe, given their legacy
systems and lack of agility and flexibility.
Biobased economy, bioeconomy or biotechonomy refers to economic activity involving the use of biotechnology and biomass
in the production of goods, services, or energy. The terms are widely
used by regional development agencies, national and international
organizations, and biotechnology companies. They are closely linked to
the evolution of the biotechnology industry and the capacity to study,
understand, and manipulate genetic material that has been possible due
to scientific research and technological development. This includes the
application of scientific and technological developments to agriculture,
health, chemical, and energy industries.
[Video: New Harvest / Xprize explaining the development of cultured meat and a "post-animal bio-economy, driven by lab grown protein (meat, eggs, milk)". Runtime 3:09.]
The terms bioeconomy (BE) and bio-based
economy (BBE) are sometimes used interchangeably. However, it is worth
distinguishing them: the bio-based economy takes into consideration the
production of non-food goods, whilst the bioeconomy covers both the
bio-based economy and the production and use of food and feed.
Origins and definitions
Bioeconomy
has a large variety of definitions. The bioeconomy comprises those parts
of the economy that use renewable biological resources from land and sea
– such as crops, forests, fish, animals and micro-organisms – to
produce food, health, materials, products, textiles and energy.
In 2010 it was defined in the report "The Knowledge Based
Bio-Economy (KBBE) in Europe: Achievements and Challenges" by Albrecht
et al. as follows: "The bio-economy is the sustainable production
and conversion of biomass, for a range of food, health, fibre and
industrial products and energy, where renewable biomass encompasses any
biological material to be used as raw material."
The First Global Bioeconomy Summit
in Berlin in November 2015 defined bioeconomy as “knowledge-based
production and utilization of biological resources, biological processes
and principles to sustainably provide goods and services across all
economic sectors”. According to the summit, bioeconomy involves three
elements: renewable biomass, enabling and converging technologies, and
integration across applications concerning primary production (i.e. all
living natural resources), health (i.e. pharmaceuticals and medical
devices), and industry (i.e. chemicals, plastics, enzymes, pulp and
paper, bioenergy).
The term 'biotechonomy' was used by Juan Enríquez and Rodrigo Martinez at the Genomics Seminar in the 1997 AAAS meeting. An excerpt of this paper was published in Science.
An important aspect of the bioeconomy is understanding mechanisms
and processes at the genetic, molecular, and genomic levels, and
applying this understanding to creating or improving industrial
processes, developing new products and services, and producing new
energy. Bioeconomy aims to reduce our dependence on fossil natural
resources, to prevent biodiversity loss and to create new economic growth and jobs that are in line with the principles of sustainable development.
History
Enríquez
and Martinez' 2002 Harvard Business School working paper, "Biotechonomy
1.0: A Rough Map of Biodata Flow", showed the global flow of genetic
material into and out of the three largest public genetic databases: GenBank, EMBL and DDBJ.
The authors then hypothesized about the economic impact that such data
flows might have on patent creation, evolution of biotech startups and
licensing fees. An adaptation of this paper was published in Wired magazine in 2003.
The term 'bioeconomy' became popular from the mid-2000s with its adoption by the European Union and Organisation for Economic Co-operation and Development as a policy agenda and framework to promote the use of biotechnology to develop new products, markets, and uses of biomass.
Since then, both the EU (2012) and OECD (2006) have created dedicated
bioeconomy strategies, as have an increasing number of countries around
the world.
Often these strategies conflate the bioeconomy with the term 'bio-based
economy'. For example, since 2005 the Netherlands has sought to promote
the creation of a biobased economy.
Pilot plants have been started, e.g. in Lelystad (Zeafuels), and a
centralised organisation exists (Interdepartementaal programma biobased
economy), with supporting research (Food & Biobased Research) being
conducted. Other European countries have also developed and implemented bioeconomy or bio-based economy policy strategies and frameworks.
In 2012, US President Barack Obama announced intentions to encourage biological manufacturing methods, with a National Bioeconomy Blueprint.
Aims
Global
population growth and overconsumption of many resources are causing
increasing environmental pressure and climate change. The bioeconomy
tackles these challenges. It aims to ensure food security and to promote
more sustainable natural resource use, as well as to reduce
dependence on non-renewable resources, e.g. fossil natural resources and
minerals. To some extent, the bioeconomy also helps the economy reduce
greenhouse gas emissions and assists in mitigating and adapting to
climate change.
Organisms, ranging from bacteria through yeasts to plants, are used in enzymatic catalysis. Genetically modified bacteria have been used to produce insulin, and artemisinic acid has been made in engineered yeast. Some bioplastics (based on polyhydroxybutyrate or polyhydroxyalkanoates) are produced from sugar using genetically modified microbes.
Genetically modified organisms are also used for the production of biofuels, a type of carbon-neutral fuel.
Research is also being done towards CO2 fixation using a synthetic metabolic pathway. By genetically modifying E. coli bacteria so as to allow them to consume CO2, the bacterium may provide the infrastructure for the future renewable production of food and green fuels.
One of the organisms (Ideonella sakaiensis) that is able to break down PET (a plastic) into other substances has been genetically modified
to break down PET even faster and also to break down PEF. Once plastics
(which are normally non-biodegradable) are broken down and recycled into
other substances (e.g. biomatter in the case of Tenebrio molitor larvae), they can be used as inputs for other animals.
Genetically modified crops are also used. Genetically modified energy crops,
for instance, may provide additional advantages such as reduced
associated costs (i.e. costs during the manufacturing process) and less
water use. One example is trees that have been genetically
modified either to have less lignin, or to express lignin with
chemically labile bonds.
With genetically modified crops however, there are still some challenges involved (hurdles to regulatory approvals, market adoption and public acceptance).
Fields
According
to the European Union Bioeconomy Strategy, updated in 2018, the bioeconomy
covers all sectors and systems that rely on biological resources
(animals, plants, micro-organisms and derived biomass, including organic
waste), their functions and principles. It covers all primary
production and economic and industrial sectors that are based on the use,
production or processing of biological resources from agriculture, forestry, fisheries and aquaculture.
The products of the bioeconomy are typically food, feed and other bio-based
products, bioenergy and services based on biological resources. The
bioeconomy aims to drive towards sustainability and circularity as well as the protection of the environment, and to enhance biodiversity.
In some definitions, the bioeconomy also comprises ecosystem services,
that is, services offered by the environment, including the binding of
carbon dioxide and opportunities for recreation. Another key aspect of the
bioeconomy is not wasting natural resources but using and recycling them
efficiently.
According to EU Bioeconomy Report 2016,
the bioeconomy brings together various sectors of the economy that
produce, process and reuse renewable biological resources (agriculture,
forestry, fisheries, food, bio-based chemicals and materials and
bioenergy).
Not all synthetic nutrition products are animal food products; for instance, as of 2021 there are also synthetic coffee products that are reported to be close to commercialization. Similar fields of research and production based on bioeconomy agriculture are:
Microbial food cultures and genetically engineered microbial production (e.g. of spider silk or solar-energy-based protein powder)
Controlled self-assembly of plant proteins (e.g. plastics alternatives based on plant proteins similar to spider silk)
One example of a widely available product highly specific to the bioeconomy is algae oil, a dietary supplement that could substitute for fish oil supplements.
Biobased applications, research and development of waste management may form a part of the bioeconomy. Bio-based recycling (e-waste, plastics recycling,
etc.) is linked to waste management and relevant standards and
requirements of production and products. Some of the recycling of waste
may be biomining and some biomining could be applied beyond recycling.
For example, in 2020, biotechnologists reported the genetically engineered refinement and mechanical description of synergistic enzymes – PETase, first discovered in 2016, and MHETase of Ideonella sakaiensis – for faster depolymerization of PET and also of PEF, which may be useful for depollution, recycling and upcycling of mixed plastics along with other approaches.
Such approaches may be more environment-friendly as well as
cost-effective than mechanical and chemical PET-recycling, enabling
circular plastic bio-economy solutions via systems based on engineered
strains. Moreover, microorganisms could be employed to mine useful elements from basalt rocks via bioleaching.
Medicine, nutritional science and the health economy
In 2020, the global industry for dietary supplements was valued at $140.3 billion by a "Grand View Research" analysis. Certain parts of the health economy may overlap with the bioeconomy, including anti-aging- and life extension-related products and activities, hygiene/beauty products, functional food, sports performance related products and bio-based tests (such as of one's microbiota) and banks (such as stool banks and DNA databases), all of which can in turn be used for individualized interventions,
monitoring as well as for the development of new products. The
pharmaceutical sector, including the research and development of new antibiotics, can also be considered to be a bioeconomy sector.
The forest bioeconomy is based on forests
and their natural resources, and covers a variety of different industry
and production processes. Forest bioeconomy includes, for example, the
processing of forest biomass
to provide products relating to energy, chemistry, or the food
industry. Thus, forest bioeconomy covers a variety of different
manufacturing processes that are based on wood material and the range of
end products is wide.
Besides different wood-based products, recreation, nature tourism and game are a crucial part of the forest bioeconomy. Carbon sequestration and ecosystem services are also included in the concept of forest bioeconomy.
Pulp, paper, packaging materials and sawn timber are the traditional products of the forest industry.
Wood is also traditionally used in furniture and construction
industries. But in addition to these, as a renewable natural resource,
ingredients from wood can be valorised into innovative bioproducts,
alongside a range of conventional forest industry products. Thus,
traditional mill sites of large forest industry companies, for example
in Finland, are in the process of becoming biorefineries.
In different processes, forest biomass is used to produce, for example,
textiles, chemicals, cosmetics, fuels, medicine, intelligent packaging,
coatings, glues, plastics, food and feed.
The blue bioeconomy covers businesses that are based on the
sustainable use of renewable aquatic resources, as well as water-related
areas of expertise. It covers the development and marketing of blue
bioeconomy products and services. In that respect, the key sectors
include business activities based on water expertise and technology,
water-based tourism, making use of aquatic biomass, and the value chain
of fisheries. Furthermore, the immaterial value of aquatic natural
resources is also very high. Water areas have other values beyond
serving as a platform for economic activities: they provide human
well-being, recreation and health.
According to the European Union, the blue bioeconomy focuses
on aquatic and marine environments, especially on novel aquaculture
applications, including food, feed and non-food products.
According to the World Bioenergy Association,
17.8% of gross final energy consumption was covered by
renewable energy. Among renewable energy sources, bioenergy (energy
from bio-based sources) is the largest; in 2017,
bioenergy accounted for 70% of renewable energy consumption (Global Bioenergy Statistics 2019).
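Multiplying the two shares quoted above gives bioenergy's approximate share of gross final energy consumption; a quick sketch of the arithmetic:

```python
# Bioenergy's share of gross final energy consumption, derived from the
# two figures quoted above: renewables cover 17.8% of gross final
# consumption, and bioenergy is 70% of renewable energy consumption.
renewables_share = 0.178          # renewables, of gross final consumption
bioenergy_of_renewables = 0.70    # bioenergy, of renewable consumption

bioenergy_share = renewables_share * bioenergy_of_renewables
print(f"{bioenergy_share:.1%}")   # roughly 12.5% of gross final consumption
```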
The role of bioenergy varies between countries and
continents. In Africa it is the most important energy source, with a
share of 96%. Bioenergy also holds significant shares of energy production
in the Americas (59%), Asia (65%) and Europe (59%). Bioenergy is produced
from a large variety of biomass
from forestry, agriculture, and the waste and side streams of industries,
yielding useful end products (pellets, wood chips, bioethanol, biogas and
biodiesel) that end up as electricity, heat and transportation fuels
around the world.
Biomass is a renewable natural resource, but it is still a limited
one. Globally there are huge resources, but environmental,
social and economic aspects limit their use. Biomass,
however, can play an important role as a source of products for
low-carbon solutions in the fields of consumer supplies, energy, food and
feed. In practice, there are many competing uses.
The biobased economy uses first-generation biomass
(crops), second-generation biomass (crop residues), and third-generation
biomass (seaweed, algae). Several methods of processing are then used
(in biorefineries) to get the most out of the biomass.
Anaerobic digestion is generally used to produce biogas, fermentation of sugars produces ethanol, pyrolysis is used to produce pyrolysis oil (a liquid bio-oil), and torrefaction is used to create biomass coal. Biomass coal
and biogas are then burnt for energy production, while ethanol can be used as a
vehicle fuel, as well as for other purposes, such as skincare products.
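The conversion routes just described can be restated as a small mapping from processing technique to its primary product; this is a plain restatement of the text above, not an exhaustive taxonomy:

```python
# Biomass-conversion routes described above, mapped from processing
# technique to primary product.
CONVERSIONS = {
    "anaerobic digestion": "biogas",          # burnt for energy
    "fermentation (of sugars)": "ethanol",    # vehicle fuel, other uses
    "pyrolysis": "pyrolysis oil",             # a liquid bio-oil
    "torrefaction": "biomass coal",           # burnt for energy
}

for technique, product in CONVERSIONS.items():
    print(f"{technique} -> {product}")
```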
Getting the most out of the biomass
For
economic reasons, the processing of the biomass is done according to a
specific pattern (a process called cascading). This pattern depends on
the types of biomass used. The process of finding the most suitable
pattern is known as biorefining.
A general list runs from the products with the highest added value and
lowest volume of biomass to those with the lowest added value and
highest volume of biomass:
fine chemicals/medicines
food
chemicals/bioplastics
transport fuels
electricity and heat
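The cascading order above can be sketched as a simple allocation routine that fills the highest-value uses first; the product names follow the list above, while the helper name and the demand figures in the usage example are purely hypothetical:

```python
# Illustrative sketch of cascading: allocate a limited biomass supply to
# uses in descending order of added value. The ordering follows the list
# above; all quantities are hypothetical.
CASCADE = [  # highest added value first
    "fine chemicals/medicines",
    "food",
    "chemicals/bioplastics",
    "transport fuels",
    "electricity and heat",
]

def allocate(supply: float, demand: dict[str, float]) -> dict[str, float]:
    """Fill demand in cascade order until the biomass supply runs out."""
    allocation = {}
    for product in CASCADE:
        used = min(demand.get(product, 0.0), supply)
        allocation[product] = used
        supply -= used
    return allocation

# Hypothetical example: 100 units of biomass, demands exceed supply, so
# the lowest-value uses (electricity and heat) go partly or fully unmet.
result = allocate(100.0, {
    "fine chemicals/medicines": 5.0,
    "food": 30.0,
    "chemicals/bioplastics": 40.0,
    "transport fuels": 50.0,
    "electricity and heat": 100.0,
})
print(result)
```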
Other fields and applications
Bioproducts or bio-based products are products that are made from biomass.
The term “bioproduct” refers to a wide array of industrial and
commercial products that are characterized by a variety of properties,
compositions and processes, as well as different benefits and risks.
Bio-based products are developed in order to reduce dependency on
fossil fuels and non-renewable resources. To achieve this, the key is
to develop new biorefining technologies to sustainably transform
renewable natural resources into bio-based products, materials and
fuels, for example:
Nanoparticles, artificial cells and micro-droplets
Synthetic biology can be used to create nanoparticles for drug delivery as well as for other purposes. Complementary research and development has created synthetic cells that mimic functions of biological cells. Applications include medicine, such as designer nanoparticles that make blood cells eat away, from the inside out, portions of the atherosclerotic plaque that causes heart attacks. Synthetic micro-droplets for algal cells, or synergistic algal-bacterial multicellular spheroid microbial reactors, could for example be used to produce hydrogen for a hydrogen economy.
There
is potential for bio-based production of building materials
(insulation, surface materials, etc.) as well as new materials in
general (polymers, plastics, composites, etc.). Photosynthetic microbial cells have been used as a step towards synthetic production of spider silk.
Bioplastics
Bioplastics
are not just one single material. They comprise a whole family of
materials with different properties and applications. According to
European Bioplastics, a plastic material is defined as a bioplastic if
it is either bio-based, biodegradable,
or has both properties. Bioplastics have the same
properties as conventional plastics and offer additional advantages,
such as a reduced carbon footprint or additional waste management
options, such as composting.
Bioplastics are divided into three main groups:
Bio-based or partially bio-based non-biodegradable plastics such
as bio-based PE, PP, or PET (so-called drop-ins) and bio-based
technical performance polymers such as PTT or TPC-ET
Plastics that are both bio-based and biodegradable, such as PLA and PHA or PBS
Plastics that are based on fossil resources and are biodegradable, such as PBAT
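The three-group taxonomy above is keyed on just two properties, bio-based and biodegradable, so it can be sketched as a small lookup; the helper name is ours, and the example materials are those named in the list:

```python
# Minimal sketch of the three-group bioplastics taxonomy described above,
# keyed on the two defining properties. Group labels and example
# materials follow the list in the text.
def bioplastic_group(bio_based: bool, biodegradable: bool) -> str:
    """Map the two defining properties to one of the three groups."""
    if bio_based and not biodegradable:
        return "drop-ins / technical polymers (e.g. bio-based PE, PP, PET, PTT)"
    if bio_based and biodegradable:
        return "bio-based and biodegradable (e.g. PLA, PHA, PBS)"
    if biodegradable:
        return "fossil-based but biodegradable (e.g. PBAT)"
    return "conventional plastic (not a bioplastic)"

print(bioplastic_group(True, True))
```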
Additionally, new materials such as PLA, PHA, cellulose or starch-based materials offer solutions with completely new functionalities such as biodegradability
and compostability, and in some cases optimized barrier properties.
Along with the growing variety of bioplastic materials, properties
such as flexibility, durability, printability, transparency, barrier
performance, heat resistance and gloss have been significantly enhanced.
Bioplastics have also been produced by bacteria using sugar beet as a feedstock.
Examples of bioplastics
Paptic:
some packaging materials combine the qualities of paper and
plastic. For example, Paptic is produced from wood-based fibre and
contains more than 70% wood. The material is formed with a foam-forming
technology that saves raw material and improves the qualities of the
material. It can be produced as reels, which enables it to be
delivered using existing mills. The material resists splashes but
decomposes when submerged in water. It is more durable than paper,
maintains its shape better than plastic, and is recycled together with
cardboard.
Examples of bio-composites
Sulapac
tins are made from wood chips and a biodegradable natural binder, and they
have features similar to plastic. These packaging products tolerate
water and fats, and they do not allow oxygen to pass. Sulapac products
combine ecology and luxury without design limitations.
Sulapac can compete with traditional plastic tins on cost and is
suitable for the same packing machinery.
Woodio produces wood-composite sinks and other bathroom
furniture. The composite is produced by moulding a mixture of wood chips
and a crystal-clear binder. Woodio has developed a solid wood composite
that is entirely waterproof. The material has features similar to
ceramic but, unlike ceramic waste, can be used for energy after use.
Solid wood composite is hard and can be shaped with woodworking tools.
Woodcast is a renewable and biodegradable casting material.
It is produced from wood chips and biodegradable plastic. It is hard and
durable at room temperature, but when heated it becomes flexible and
self-adhesive. Woodcast can be applied to all casting and splinting
purposes. The material is breathable and X-ray transparent. It is used
in casting and in occupational therapy and can be moulded to any
anatomical shape. Excess pieces can be reused, and used casts can be
disposed of either as energy or as biowaste. The composite differs from
a traditional plaster cast in that it needs no water and is non-toxic;
gas masks, protective gloves and suction fans are therefore not
required when handling the cast.
Plastic packages or plastic components are sometimes part of a valid
environmental solution. In other cases, alternatives to plastic are
desirable.
Plastic-free packaging materials have been developed or adopted,
especially for use cases in which packaging cannot be phased out
(for example under national grocery store requirements) because it is
needed to preserve food products or for other purposes.
A plant-protein-based biodegradable packaging alternative to plastic was developed based on research into spider silk, which is known for its high strength and which the material resembles at the molecular level.
Researchers at the Agricultural Research Service are looking into using dairy-based films as an alternative to petroleum-based packaging. Instead of being made of synthetic polymers, these dairy-based films would be composed of proteins such as casein and whey, which are found in milk. The films would be biodegradable
and offer better oxygen barriers than synthetic, chemical-based films.
More research must be done to improve the water barrier quality of the
dairy-based film, but advances in sustainable packaging are actively
being pursued.
Sustainable packaging policy cannot be tailored to a single
product. Effective legislation would need to include
alternatives to many products, not just a select few; otherwise, the
positive impact of sustainable packaging will be too limited to drive
a significant reduction in plastic packaging.
Finding alternatives can reduce greenhouse gas emissions from
unsustainable packaging production and reduce dangerous chemical
by-products of unsustainable packaging practices.
Bio-Based Plastics
Another
alternative to commonly used petroleum plastics is bio-based plastics.
Examples of bio-based plastics include natural biopolymers and polymers
synthesized from natural feedstock monomers, which can be extracted
from plants, animals, or microorganisms. A polymer that is bio-based and
used to make plastic materials is not necessarily compostable or
biodegradable: natural biopolymers can often be biodegraded in the
natural environment, while only a few plastics made from bio-based
monomers can be. Bio-based plastics are a more sustainable option than
their petroleum-based counterparts, yet they accounted for only 1% of
plastics produced annually as of 2020.
Chitosan
Chitosan
is a well-studied biopolymer that can be used as a packaging alternative
that increases shelf life and reduces the use of synthetic plastics.
Chitosan is a polysaccharide obtained through the deacetylation of chitin, the second most abundant polysaccharide on Earth, derived from the non-edible portions of marine invertebrates.
The increased use of chitosan could reduce both food waste
and the waste from food packaging. Chitosan combines antimicrobial
activity with film-forming properties, which make it biodegradable and
help deter the growth of spoilage organisms. Whereas synthetic plastics
may take years to degrade, biopolymers such as chitosan can degrade in weeks.
Antimicrobial packaging includes techniques, such as modified atmosphere packaging,
that reduce microbial activity and bacterial growth. As an
alternative, chitosan promotes less food waste and less reliance on
non-degradable plastic materials.
The textile industry,
or certain activities and elements of it, could be considered to be a
strong global bioeconomy sector. Textiles are produced from natural
fibres, regenerated fibres and synthetic fibres (Sinclair 2014). The
natural fibre textile industry is based on cotton, linen, bamboo, hemp,
wool, silk, angora, mohair and cashmere.
Activities related to textile production and processing that more
clearly fall under the domain of the bioeconomy are developments such
as the biofabrication of leather-like material using fungi.
Textile fibres can be formed in chemical processes from bio-based
materials. These fibres are called bio-based regenerated fibres. The
oldest regenerated fibres are viscose and rayon, produced in the 19th
century. The first industrial processes used a large amount of wood as
raw material, as well as harmful chemicals and water. Later,
fibre-regeneration processes were developed that reduce the use of raw
materials, chemicals, water and energy.
In the 1990s the first more sustainable regenerated fibres, e.g.
Lyocell, entered the market under the commercial name Tencel. The
production process uses wood cellulose and produces the fibre
without harmful chemicals.
The next generation of regenerated fibres are under development.
The production processes use less or no chemicals, and the water
consumption is also diminished.
The bioeconomy has largely been associated with visions of "green growth".
A study found that a "circular bioeconomy" may be "necessary to build a
carbon neutral future in line with the climate objectives of the Paris Agreement".
However, some are concerned that with a focus or reliance on
technological progress a fundamentally unsustainable socioeconomic model
might be maintained rather than be changed.
Some are concerned that it may lead not to an ecologization of the
economy but to an economization of the biological, "the living", and
caution that the potential of non-bio-based techniques to achieve
greater sustainability also needs to be considered.
A study found that the EU's interpretation of the
bioeconomy, as of 2019, is "diametrically opposite to the original
narrative of Baranoff and Georgescu-Roegen that told us that expanding
the share of activities based on renewable resources in the economy
would slow down economic growth and set strict limits on the overall
expansion of the economy". Furthermore, some caution that "Silicon Valley and food corporations" could use bioeconomy technologies for greenwashing and monopoly concentration.
The bioeconomy, its potentials, disruptive new modes of production and
innovations may distract from the need for systemic structural
socioeconomic changes and provide a false illusion of technocapitalist utopianism/optimism, suggesting that technological fixes can sustain contemporary patterns and structures.
Many farmers depend on conventional methods of producing crops and many of them live in developing economies. Cellular agriculture for products such as synthetic coffee could, if the contemporary socioeconomic context (the socioeconomic system's
mechanisms such as incentives and resource distribution mechanisms like
markets) remains unaltered (e.g. in nature, purposes, scopes, limits
and degrees), threaten their employment and livelihoods as well as the
respective nation's economy and social stability. A study concluded that
"given the expertise required and the high investment costs of the
innovation, it seems unlikely that cultured meat immediately benefits
the poor in developing countries" and emphasized that animal agriculture
is often essential to the subsistence of farmers in poor countries. However, not only developing countries may be affected.
Patents, intellectual property and monopolies
Some
observers worry that the bioeconomy will become as opaque and
accountability-free as the industry it aims to replace (e.g. the current
food system). Its core products may be mass-produced, nutritionally dubious meat sold at homogeneous fast-food joints.
The medical community has warned that gene patents can inhibit the practice of medicine and progress of science.
This can also apply to other areas where patents and private
intellectual property licenses are being used, often entirely preventing
the use and continued development of knowledge and techniques for many
years or decades. On the other hand, some worry that without
intellectual property protection as an R&D incentive,
particularly at current degrees and extents, companies would no longer
have the resources or incentives to perform competitive, viable
biotech research, as they might not be able to generate
sufficient returns from the initial R&D investment, or might generate
lower returns than from other possible expenditures. "Biopiracy"
refers to "the use of intellectual property systems to legitimize the
exclusive ownership and control over biological resources and biological
products that have been used over centuries in non-industrialized
cultures".
Rather than leading to sustainable, healthy, inexpensive, safe,
accessible food being produced locally with little labor (after knowledge and technology transfer and timely, efficient innovation), the bioeconomy may lead to aggressive monopoly formation and exacerbated inequality. For instance, while production costs may be minimal, prices, including those of medicines, may be high.
Innovation management, public spending and governance
It has been argued that public investment would be a tool governments
should use to regulate and license cellular agriculture. Private firms
and venture capital would likely seek to maximise investor value rather
than social welfare.
Moreover, radical innovation is considered to be more risky, "and
likely involves more information asymmetry, so that private financial
markets may imperfectly manage these frictions". Governments may also
help to coordinate "since several innovators may be needed to push the
knowledge frontier and make the market profitable, but no single company
wants to make the early necessary investments". They could also help
innovators that lack the network "to naturally obtain the visibility and
political influence necessary to obtain public funds" and could help
determine relevant laws.
In popular media
Biopunk is a genre of science fiction, so called due to its similarity with cyberpunk, that often thematizes the bioeconomy as well as its issues and technologies. The novel The Windup Girl portrays a society driven by a ruthless bioeconomy and ailing under climate change. In the more recent novel Change Agent
prevalent black market clinics offer wealthy people unauthorized
genetic human enhancement services and custom narcotics are 3D-printed
locally or smuggled with soft robots. Solarpunk
is another emerging genre that focuses on the relationship between
human societies and the environment and also addresses many of the
bioeconomy's issues and technologies such as genetic engineering,
synthetic meat and commodification.