Science 2.0 is a suggested new approach to science that uses information-sharing and collaboration made possible by network technologies. It is similar to the open research and open science movements and is inspired by Web 2.0 technologies. Science 2.0 stresses the benefits of increased collaboration between scientists, using collaborative tools such as wikis, blogs and video journals to share findings, raw data and "nascent theories" online. It benefits from the openness and sharing of papers, research ideas and partial solutions.
A general view is that Science 2.0 is gaining traction with websites beginning to proliferate,
yet at the same time there is considerable resistance within the
scientific community about aspects of the transition as well as
discussion about what, exactly, the term means. Several observers see a "sea change" happening in the status quo of scientific publishing, as well as substantive change in how scientists share research data.
There is considerable discussion in the scientific community about
whether scientists should embrace the model and exactly how Science 2.0
might work, as well as several reports that many scientists are slow to
embrace collaborative methods and are somewhat "inhibited and slow to adopt a lot of online tools."
Definitions

Current model: Research done privately; then submitted to journals; then peer-reviewed by gatekeepers in major journals; published.
Emerging model: Research data shared during discovery stages; ideas shared; scientists collaborate; then findings are disseminated online.

Current model: Scientific literature behind paywalls online.
Emerging model: Scientific discoveries free online.

Current model: Credit established by journal name or journal impact factor.
Emerging model: Credit established by citation count, number of views or downloads.

Current model: Data is private until publication.
Emerging model: Data is shared before publication.

Current model: Papers generally protected by copyright.
Emerging model: Many different licenses possible: copyright, public domain, Creative Commons 3.0, etc.

Current model: Publishers raise funds by charging for access to content.
Emerging model: Publishers seek alternative funding models.

Current model: Journal article summaries available online after publication.
Emerging model: Methods, data and findings shared via blogs, social networking sites, wikis, computer networking, the Internet and video journals.
The term has many meanings and continues to evolve in scientific parlance. It not only describes what is currently happening in science, but also a direction in which proponents believe science should move, as well as a growing number of websites which promote free scientific collaboration.
The term Science 2.0 suggests a contrast between traditional ways of doing science, often denoted Science 1.0, and more collaborative approaches, and suggests that the new forms of science will work with Web 2.0 technologies. One description from Science
is that Science 2.0 uses the "networking power of the internet to
tackle problems with multiple interacting variables - the problems, in
other words, of everyday life." A different and somewhat controversial view is that of Ben Shneiderman,
who suggested that Science 2.0 combines hypothesis-based inquiries with
social science methods, partially for the purpose of improving those
new networks.
While the term describes websites for sharing scientific
knowledge, it also includes efforts by existing science publishers to
embrace new digital tools, such as offering areas for discussions
following published online articles. Sometimes it denotes open access
which, according to one view, means that the author continues to hold
the copyright but that others can read it and use it for reasonable
purposes, provided that the attribution is maintained. Most online scientific literature is behind paywalls, meaning that a person can find the title of an article on Google but cannot read the article itself. People who can access these articles are generally those affiliated with a university, secondary school, library or other educational institution, or those who pay on a per-article or subscription basis.
Traditional scientific journals are
part of this social evolution too, innovating ways to engage scientists
online and enable global collaboration and conversation. Even the
187-year-old Annals of the New York Academy of Sciences has joined the
digital age. The Academy now permits free public access to selected
online content and has digitized every volume dating back to 1823.
— Adrienne J. Burke in Seed Magazine, 2012
One view is that Science 2.0 should include an effort by scientists to offer papers in non-technical language, as a way of reaching out to non-scientists. For others, it includes building vast databases of case histories. There is a sense in which Science 2.0
indicates a general direction for scientific collaboration, although
there is little clarity about how exactly this might happen. One aim is
to "make scientific collaboration as easy as sharing videos of trips
home from the dentist," according to one view.
Closely related terms are "cyberscience", focusing on scientists communicating in cyberspace, and "cyberscience 2.0", which extends the notion to the emerging trend of academics using Web 2.0 tools.
History and background
The rise of the Internet has transformed many activities such as retailing and information searching. In journalism, Internet technologies such as blogging, tagging and social networking
have caused many existing media sources such as newspapers to "adopt
whole new ways of thinking and operating," according to a report in Scientific American in 2008.
The idea is that while the Internet has transformed many aspects of
life, it has not changed scientific research as much as it could. While
firms such as eBay, Amazon and Netflix have changed consumer retailing,
and online patient-centered medical data has enabled better health care,
Science 2.0 advocate Ben Shneiderman said:
It's time for researchers in
science to take network collaboration like this to the next phase and
reap the potential intellectual and societal payoffs.
— Ben Shneiderman, 2008
According to one view, a similar web-inspired transformation that has happened to other areas is now happening to science.
The general view is that science has been slower than other areas to
embrace the web technology, but that it is beginning to catch up.
Before the Internet, scientific publishing was described as a "highly integrated and controlled process."
Research was done in private. Next, it was submitted to scientific
publications and reviewed by editors and gatekeepers and other
scientists. Last, it was published. This has been the traditional pathway of scientific advancement, sometimes dubbed Science 1.0.
Established journals provided a "critical service", according to one view. Publications such as Science and Nature
have large editorial staffs to manage the peer-review process as well
as have hired fact-checkers and screeners to look over submissions.
These publications get revenue from subscriptions, including online
ones, as well as advertising revenue and fees paid by authors. According to advocates of Science 2.0, however, this process of paper submission and review was rather long. Detractors complained that the system is "hidebound, expensive and elitist", sometimes "reductionist", as well as slow and "prohibitively costly". A select group of gatekeepers (those in charge of the traditional publications) limited the flow of information. Proponents of open science claimed that scientists could learn more and learn faster if there were "friction-free collaboration over the Internet."
Yet there is considerable resistance within the scientific
community to a change of approach. The act of publishing a new finding
in a major journal has been at the "heart of the career of scientists,"
according to one view, which elaborated that many scientists would be
reluctant to sacrifice the "emotional reward" of having their
discoveries published in the usual, traditional way. Established scientists are often loath to switch to an open-source model, according to one view.
Timo Hannay explained that the traditional publish-a-paper model, sometimes described as "Science 1.0", was a workable one but there need to be other ways for scientists to make contributions and get credit for their work:
The unit of contribution to the
scientific knowledge base has become the paper. Journals grew up as a
means for scientists to be able to share their discoveries and ideas.
The incentive for doing so was that by publishing in journals their
contributions would be recognized by citation and other means. So you
have this pact: be open with your ideas and share them through journals
and you will get credit... There are all kinds of ways in which
scientists can contribute to the global endeavor. ... The incentive
structure has not caught up with what we really want scientists to do.
— Timo Hannay, 2012
In 2008, a scientist at the University of Maryland named Ben Shneiderman wrote an editorial entitled Science 2.0.
Shneiderman argued that Science 2.0 was about studying social
interactions in the "real world" with study of e-commerce, online
communities and so forth. A writer in Wired Magazine
criticized Shneiderman's view, suggesting that Shneiderman's call for
more collaboration, more real-world tests, and more progress should not
be called "Science 2.0" or "Science 1.0" but simply science.
There are reports that established journals are moving towards
greater openness. Some help readers network online; others enable
commenters to post links to websites; others make papers accessible
after a certain period of time has elapsed.
But it remains a "hotly debated question", according to one view,
whether the business of scientific research can move away from the model
of "peer-vetted, high-quality content without requiring payment for
access." The topic has been discussed in a lecture series at the California Institute of Technology. Proponent Adam Bly thinks that the key elements needed to transform Science 2.0 are "vision" and "infrastructure":
Open science is not this maverick
idea; it’s becoming reality. About 35 percent of scientists are using
things like blogs to consume and produce content. There is an explosion
of online tools and platforms available to scientists, ranging from Web
2.0 tools modified or created for the scientific world to Web sites that
are doing amazing things with video, lab notebooks, and social
networking. There are thousands of scientific software programs freely
available online and tens of millions of science, technology, and math
journal articles online. What’s missing is the vision and infrastructure
to bring together all of the various changes and new players across
this Science 2.0 landscape so that it’s simple, scalable, and
sustainable—so that it makes research better.
Concerns: risk that others will copy preliminary work to get credit, patents or money; unclear how reviewers and editors will get paid; unclear how exactly Science 2.0 will work; infrastructure is needed.

Benefits: faster development; wider access; diverse applications (homeland security, medical care, environment, etc.); easier; lets other scientists see results instantly and comment.
There are numerous examples of websites offering opportunities for scientific collaboration.
Public Library of Science. This project, sometimes termed PLoS, is a nonprofit open-access scientific publishing project aimed at creating a library of open access journals and other scientific literature under an open content license. By 2012, it published seven peer-reviewed journals.
It makes scientific papers immediately available online without charges
for access or restrictions on passing them along, provided that the
authors and sources are properly cited with the Creative Commons Attribution License.
According to one report, PLoS has gained "pretty wide acceptance", although many researchers in biomedicine still hope to be published in established journals such as Nature, Cell, and Science. As of 2012, PLoS published 600 articles a month.
arXiv, pronounced archive, is an online-accessible archive for electronic preprints of scientific papers in the fields of mathematics, physics, astronomy, computer science, quantitative biology, statistics, and quantitative finance.
Galaxy Zoo is an online astronomy project which invites members of the public to assist in the morphological classification of large numbers of galaxies. It has been termed a citizen science project. The information has led to a substantial increase in scientific papers, according to one account.
A website entitled Science 2.0 lets scientists share
information. It has been cited by numerous publications, many of which have written stories with links to Science 2.0 articles, including USA Today, CNN, the Wall Street Journal, the New York Times, and others. Science 2.0 topics have included neutrino interactions, cosmic rays, the human eye's evolution, the relation between sex and marital happiness for elderly couples, human evolution, hearing loss, and other topics.
Some examples of pioneering use of Science 2.0 to foster biodiversity surveys were popularized by Robert Dunn, including urban arthropods and human body bacteria.
OpenWorm
is a collaborative research project with several publications that aims
to simulate the nervous system, body mechanics, and environment of the C. elegans worm.
Creative Commons (CC) is an American non-profit organization devoted to expanding the range of creative works available for others to build upon legally and to share. The organization has released several copyright-licenses known as Creative Commons licenses free of charge to the public. These licenses allow creators to communicate which rights they reserve, and which rights they waive
for the benefit of recipients or other creators. An easy-to-understand
one-page explanation of rights, with associated visual symbols, explains
the specifics of each Creative Commons license. Creative Commons
licenses do not replace copyright, but are based upon it. They replace
individual negotiations for specific rights between copyright owner
(licensor) and licensee,
which are necessary under an "all rights reserved" copyright
management, with a "some rights reserved" management employing
standardized licenses for re-use cases where no commercial compensation
is sought by the copyright owner. The result is an agile, low-overhead
and low-cost copyright-management regime, benefiting both copyright
owners and licensees.
The organization was founded in 2001 by Lawrence Lessig, Hal Abelson, and Eric Eldred with the support of the Center for the Public Domain. The first article in a general interest publication about Creative Commons, written by Hal Plotkin, was published in February 2002. The first set of copyright licenses was released in December 2002.
The founding management team that developed the licenses and built the
Creative Commons infrastructure as we know it today included Molly Shaffer Van Houweling, Glenn Otis Brown, Neeru Paharia, and Ben Adida.
In 2002, the Open Content Project, a 1998 precursor project by David A. Wiley, announced Creative Commons as its successor project, and Wiley joined as CC director. Aaron Swartz played a role in the early stages of Creative Commons, as did Matthew Haughey.
As of May 2018 there were an estimated 1.4 billion works licensed under the various Creative Commons licenses. Wikipedia uses one of these licenses. As of May 2018, Flickr alone hosts over 415 million Creative Commons licensed photos.
Creative Commons is governed by a board of directors. Their
licenses have been embraced by many as a way for creators to take
control of how they choose to share their copyrighted works.
A sign in a pub in Granada notifies customers that the music they are listening to is freely distributable under a Creative Commons license.
Made with Creative Commons, a 2017 book describing the value of CC licenses.
Creative Commons has been described as being at the forefront of the copyleft movement, which seeks to support the building of a richer public domain by providing an alternative to the automatic "all rights reserved" copyright, and has been dubbed "some rights reserved". David Berry and Giles Moss have credited Creative Commons with generating interest in the issue of intellectual property and contributing to the re-thinking of the role of the "commons" in the "information age".
Beyond that, Creative Commons has provided "institutional, practical
and legal support for individuals and groups wishing to experiment and
communicate with culture more freely."
Creative Commons attempts to counter what Lawrence Lessig, founder of Creative Commons, considers to be a dominant and increasingly restrictive permission culture.
Lessig describes this as "a culture in which creators get to create
only with the permission of the powerful, or of creators from the past."
Lessig maintains that modern culture is dominated by traditional
content distributors in order to maintain and strengthen their
monopolies on cultural products such as popular music and popular
cinema, and that Creative Commons can provide alternatives to these
restrictions.
Creative Commons Network
Until April 2018, Creative Commons had over 100 affiliates working in over 75 jurisdictions to support and promote CC activities around the world. In 2018, this affiliate network was restructured into a network organisation. The network no longer relies on affiliate organisations but on individual members organised in chapters.
South Korea
Creative Commons Korea (CC Korea)
is the affiliated network of Creative Commons in South Korea. In March
2005, CC Korea was initiated by Jongsoo Yoon (in Korean: 윤종수), a
Presiding Judge of Incheon District Court, as a project of Korea
Association for Infomedia Law (KAFIL). The major Korean portal sites,
including Daum and Naver, have been participating in the use of Creative
Commons licences. In January 2009, the Creative Commons Korea
Association was consequently founded as a non-profit incorporated
association. Since then, CC Korea has been actively promoting a liberal and open culture of creation, as well as leading the diffusion of Creative Commons in the country.
Bassel Khartabil
Bassel Khartabil
was a Palestinian-Syrian open-source software developer who served as project lead and public affiliate for Creative Commons Syria. From March 15, 2012, he was detained by the Syrian government at Adra Prison in Damascus. On October 17, 2015, the Creative Commons Board of Directors approved a resolution calling for Bassel Khartabil's release. In 2017, Bassel's wife received confirmation that Bassel had been executed shortly after she lost contact with him in 2015.
Criticism
All
current CC licenses (except the CC0 Public Domain Dedication tool)
require attribution, which can be inconvenient for works based on
multiple other works. Critics feared that Creative Commons could erode the copyright system over time
or allow "some of our most precious resources – the creativity of
individuals – to be simply tossed into the commons to be exploited by
whomever has spare time and a magic marker."
Critics also worried that the lack of rewards for content
producers will dissuade artists from publishing their work, and
questioned whether Creative Commons is the commons that it purports to be.
Creative Commons founder Lawrence Lessig countered that copyright
laws have not always offered the strong and seemingly indefinite
protection that today's law provides. Rather, the duration of copyright
used to be limited to much shorter terms of years, and some works never
gained protection because they did not follow the now-abandoned
compulsory format.
The maintainers of Debian, a GNU and Linux distribution known for its rigid adherence to a particular definition of software freedom, rejected the Creative Commons Attribution License prior to version 3 as incompatible with the Debian Free Software Guidelines (DFSG) due to the license's anti-DRM
provisions (which might, due to ambiguity, be covering more than DRM)
and its requirement that downstream users remove an author's credit upon
request from the author. Version 3.0 of the Creative Commons licenses addressed these concerns and, except for the non-commercial variants, is considered compatible with the DFSG.
Kent Anderson, writing for The Scholarly Kitchen, a blog of the Society for Scholarly Publishing,
criticizes CC as being dependent on copyright and not really departing from it, and as being more complex and complicated than copyright itself – so that the public does not scrutinize CC, reflexively accepting it as one would a software license – while at the same time weakening the rights provided by copyright.
Anderson ends up concluding that this is the point, and that "Creative
Commons receives significant funding from large information companies
like Google, Nature Publishing Group, and RedHat",
and that Google money is especially linked to CC's history; for him, CC
is "an organization designed to promulgate the interests of technology
companies and Silicon Valley generally".
License proliferation and incompatibility
Mako Hill
asserted that Creative Commons fails to establish a "base level of
freedom" that all Creative Commons licenses must meet, and with which
all licensors and users must comply. "By failing to take any firm
ethical position and draw any line in the sand, CC is a missed
opportunity. … CC has replaced what could have been a call for a world
where 'essential rights are unreservable' with the relatively hollow
call for 'some rights reserved.'" He also argued that Creative Commons worsens license proliferation, by providing multiple licenses that are incompatible.
The Creative Commons website states, "Since each of the six CC
licenses functions differently, resources placed under different
licenses may not necessarily be combined with one another without
violating the license terms." Works licensed under incompatible licenses may not be recombined in a derivative work without obtaining permission from the copyright owner.
Richard Stallman of the FSF
stated in 2005 that he couldn’t support Creative Commons as an activity
because "it adopted some additional licenses which do not give everyone
that minimum freedom", that freedom being "the freedom to share,
noncommercially, any published work". Those licenses have since been retired by Creative Commons.
Creative Commons is only a service provider for standardized license
text, not a party in any agreement. Abusive users can brand the
copyrighted works of legitimate copyright holders with Creative Commons
licenses and re-upload these works to the internet. No central database of Creative Commons works controls all licensed works, and the responsibility of the Creative Commons system rests entirely with those using the licences.
This situation is, however, not specific to Creative Commons. All
copyright owners must individually defend their rights and no central
database of copyrighted works or existing license agreements exists. The
United States Copyright Office does keep a database of all works registered with it, but absence of registration does not imply absence of copyright.
Although Creative Commons offers multiple licenses for different
uses, some critics suggested that the licenses still do not address the
differences among the media or among the various concerns that different
authors have.
Lessig wrote that the point of Creative Commons is to provide a
middle ground between two extreme views of copyright protection – one
demanding that all rights be controlled, and the other arguing that none
should be controlled. Creative Commons provides a third option that
allows authors to pick and choose which rights they want to control and
which they want to grant to others. The multitude of licenses reflects
the multitude of rights that can be passed on to subsequent creators.
Criticism of the non-commercial license
Erik Möller
raised concerns about the use of Creative Commons' non-commercial
license. Works distributed under the Creative Commons Non-Commercial
license are not compatible with many open-content sites, including
Wikipedia, which explicitly allow and encourage some commercial uses.
Möller explained that "the people who are likely to be hurt by an -NC
license are not large corporations, but small publications like weblogs,
advertising-funded radio stations, or local newspapers."
Lessig responded that the current copyright regime also harms
compatibility and that authors can lessen this incompatibility by
choosing the least restrictive license.
Additionally, the non-commercial license is useful for preventing
someone else from capitalizing on an author's work when the author still
plans to do so in the future.
The non-commercial licenses have also been criticized for being too
vague about which uses count as "commercial" and "non-commercial".
Great Minds, a non-profit educational publisher that released works under an -NC license, sued FedEx
for violating the license because a school had used its services to
mass-produce photocopies of the work, thus commercially exploiting the
works. A U.S. judge dismissed the case in February 2017, ruling that
FedEx was an intermediary, and that the provision of the license "does
not limit a licensee's ability to use third parties in exercising the
rights granted [by the licensor]." Great Minds appealed the decision to the United States Court of Appeals for the Second Circuit later that year. The 2nd Circuit upheld the lower court's decision in March 2018, concluding that FedEx neither infringed copyrights nor violated the license. Circuit Judge Susan L. Carney wrote in the court's opinion:
We
hold that, in view of the absence of any clear license language to the
contrary, licensees may use third‐party agents such as commercial
reproduction services in furtherance of their own permitted
noncommercial uses. Because FedEx acted as the mere agent of licensee
school districts when it reproduced Great Minds’ materials, and because
there is no dispute that the school districts themselves sought to use
Great Minds’ materials for permissible purposes, we conclude that
FedEx’s activities did not breach the license or violate Great Minds'
copyright.
Personality rights
In 2007, Virgin Mobile Australia
launched a bus stop advertising campaign which promoted its mobile
phone text messaging service using the work of amateur photographers who
uploaded their work to the photo-sharing site Flickr under a Creative Commons Attribution license. Users licensing their images this way freed their work for use
by any other entity, as long as the original creator was attributed
credit, without any other compensation being required. Virgin upheld
this single restriction by printing a URL, leading to the photographer's
Flickr page, on each of their ads. However, one picture depicted
15-year-old Alison Chang posing for a photo at her church's fund-raising
carwash, with the superimposed, mocking slogan "Dump Your Pen Friend".
Chang sued Virgin Mobile and Creative Commons. The photo was taken by
Chang's church youth counsellor, Justin Ho-Wee Wong, who uploaded the
image to Flickr under the Creative Commons license.
The case hinges on privacy, the
right of people not to have their likeness used in an ad without
permission. So, while Mr. Wong may have given away his rights as a
photographer, he did not, and could not, give away Alison's rights.
In the lawsuit, which Mr. Wong is also a party to, there is an argument
that Virgin did not honor all the terms of the nonrestrictive license.
On 27 November 2007, Chang filed for a voluntary dismissal of the lawsuit against Creative Commons, focusing the lawsuit on Virgin Mobile.
The case was thrown out of court for lack of jurisdiction, and Virgin Mobile consequently did not pay any damages to the plaintiff.
You’ve
read the headlines: quantum computers are going to cure disease by
discovering new pharmaceuticals! They’re going to pore through all the
world’s data and find solutions to problems like poverty and inequality!
Alternatively, they might not do any of that. We’re really not sure what a quantum computer will even look like, but boy are we excited.
It often feels like quantum computers are in their own quantum state —
they’re revolutionizing the world, but are still a distant pipe dream.
Now, though, the National Science Foundation has plans to pluck
quantum computers from the realm of the fantastic and drop them squarely
in its research labs. And it’s willing to pay an awful lot to do so.
In August, the federal agency announced the Software-Tailored Architecture for Quantum co-design (STAQ) project.
Physicists, engineers, computer scientists, and other researchers from
Duke and six other universities (including MIT and University of
California-Berkeley) will band together to embark on the five-year, $15
million mission.
The goal is to create the world’s first practical quantum computer —
one that goes beyond a proof-of-concept and actually outperforms the
best classical computers out there — from the ground up.
A little background: there are a few key differences
between a classical computer and a quantum computer. Where a classic
computer uses bits that are either in a 0 or 1 state, quantum bits, or
qubits, can also be both 1 and 0 at the same time. The quantum circuits that use these qubits
to transfer information or carry out a calculation are called quantum
logic gates; just as a classic circuit controls the flow of electricity
within a computer’s circuitry, these gates steer the individual qubits
via photons or trapped ions.
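To make the state-and-gate picture concrete, here is a minimal illustrative sketch (our addition, not part of the STAQ project): a single qubit represented as a two-element complex state vector, with a Hadamard gate placing it into an equal superposition of 0 and 1.

```python
# Minimal single-qubit sketch: a qubit is a 2-element complex state vector,
# and a gate is a 2x2 unitary matrix applied to it. Illustrative only.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)               # |0>, the "classical 0" state
hadamard = np.array([[1, 1],
                     [1, -1]], dtype=complex) / np.sqrt(2)

state = hadamard @ ket0                              # now an equal mix of 0 and 1
probabilities = np.abs(state) ** 2                   # Born rule: odds of measuring 0 or 1
print(probabilities)                                 # -> [0.5 0.5]
```

Trapped-ion and superconducting hardware realize this same linear algebra physically rather than numerically.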
In order to develop quantum computers that are actually useful,
scientists need to figure out how to improve both hardware we use to
build the physical devices, and the software we run on them. That means
figuring out how to build systems with more qubits that are less
error-prone, and determining how to sort out the correct responses to
our queries when we get lots of noise back with them. It’s likely that
part of the answer is building automated tools that can optimize how
certain algorithms are mapped onto the specific hardware, ultimately
tackling both problems at once.
To better understand what this program might produce, Futurism caught up with Kenneth Brown,
the Duke University engineer in charge of STAQ. Here’s our
conversation, which has been lightly edited and condensed for clarity.
We’ve supplemented Brown’s answers with hyperlinks.
Futurism: A lot of what we hear about quantum computing is very abstract and theoretical — there’s lots of research that might
lead towards quantum computers, but doesn’t show any clear path on how
to get there. What will your team be able to do that others haven’t been
able to do in the past?
Kenneth Brown: I think it’s important to remember that quantum computers can be made out of a wide variety of things.
I usually make an analogy to classical computers. The first classical
computers were just gears, pretty much because that was the best
technology we had. And then there was this vacuum tube phase of
classical computers that was quite useful and good. And then the first
silicon transistor first appeared. And it’s important to remember that
when the silicon transistor first appeared, it couldn’t quite compete
with vacuum tubes. Sometimes I think people forget it was such an
amazing discovery.
Quantum computing is the same thing. There are lots of ways to
represent quantum information. Right now, the two technologies that have
demonstrated the most useful applications are superconducting qubits
and trapped ion qubits. They’re different and they have pluses and
minuses, but in our group, we’ve been collectively focused on these
trapped ion qubits.
With trapped ion qubits, what’s nice is that on a small scale of tens
of ions, all the qubits are directly connected. That’s very different
from a superconducting system or a solid-state system, in which you have
to talk to the qubits that are nearby. So I think we have very concrete
plans to get to 30, 32 qubits. That’s clear. We would like to extend
that to something closer to 64 or so, and that is going to require some
new research.
F: What makes this a “practical” computer compared to all of the other people working on quantum computers?
KB: I do think there are industrial efforts
pushing towards building exactly these practical devices. The thing
which really distinguishes us is being on the academic side. I think it
allows for more exploration, with the goal of making a device which
enables people to test wildly different ideas on how the architecture
should be and what applications should be on it, these sort of things.
Just to pick an example, the guys at IBM
have their quantum device. I actually collaborate with them through
other projects, and I think they’re pretty open. But right now, the way
you interact with it, you’re already at a level of abstraction [in that
people can ask things of the computer online but can’t change how it’s
programmed]. If you were thinking about totally optimizing this thing,
you can’t. They have a tradeoff: they have their computer totally open
for access on the web, but to make it stable like that, you have to turn
off some knobs. [The IBM computer, because it’s sequestered off and
intended for many researchers to use, can’t be customized to do
everything an individual might want it to.]
So our goal is to make a device reaching this practical scale where researchers can play with all the knobs.
F: How would quantum computing change things for the average person?
KB: I think in the long term, quantum computing and communication will change how we deal with encoded
information on the internet. In Google Chrome, in fact, you can already
change your cryptography to a possible post-quantum cryptography setup.
The second thing is I think people don’t think about all the ways
that molecular design impacts materials — from boring things like water
bottles to fancy things like specific new medicines. So what’s
interesting is if the quantum computer fulfills its promise to
efficiently and accurately calculate those molecular properties, that
could really change the materials and medicines we see in the future.
But on what you’re going to do on your home computer, the way I think
about it is most people use their computer to watch Netflix and
occasionally write a letter or email or whatever. Those are not places
where quantum computers really help you.
So it’s sort of funny — I don’t know what the user base would be. But
when computers were first built, people had the same impression. They
said that computers would just be for scientists doing lab work. And,
clearly, that’s no longer the case.
F: What sort of person will be able to use a quantum
computer? How do you train someone to use it, and what might the quantum
computing degree of the future look like?
KB: When I try to explain quantum computing to
someone, if they know the physics or chemistry of quantum mechanics,
then I can usually start there to explain how to do the computing side.
And the other side is also true: if people understand computing pretty
well, I can explain the extra modules that quantum computing gives you.
In the future, we probably need people trained from both of those
disciplines. We need people who have a physical sciences background who
we get up to speed on the computer science side and the opposite.
The specific thing we’re going to try to do is have this quantum
summer school, with the idea to bring in people from industry who are
maybe excellent microwave engineers or software engineers, and try to
give them enough tools so they can start to think about the extra rules
you have to think about with quantum.
F: What sorts of new research will you need to sort out before this thing can be built? What will that take?
KB: We have some ideas. In a classic computer you
work with voltage, but in quantum computing, I need to somehow carry
information from one place to another. Do messenger qubits that carry
information to other parts of the computer have to be the same type of
qubit that the rest of the computer is made of? We’re not sure yet.
A common way to think about scaling up the complexity of quantum computers is called the CCD architecture. The idea is to shuttle manageable chains of ions from point to point. That’s one possibility.
There’s been some theoretical work looking at whether you can have
photons interconnect between ion chains. The idea across all kinds of
supercomputers is to use photons as these messenger qubits. And by doing
that, you can basically have a bunch of small quantum computers wired
up by all these photons that collectively act like a larger computer.
But that’s farther out. I think getting that to work at the bandwidth
we need in the next five years would be pretty challenging. If it
happens, that would be great, but it’s probably farther out.
F: Along the way, how will you know that you’re making
tangible progress? Do you have benchmarks for knowing that you’re, say,
halfway there? How can you test to know for sure that it’s working?
KB: On the hardware side, we can increase the number
of qubits and get the gates [these are, if you recall, the things that
move ions or photons to transfer information] better and call it
tangible progress. We have a sense that we have to get, even though the
number moves, somewhere above fifty qubits to have a fighting chance.
[As of March, Google holds the record with a 72-qubit system]
At the same time, we’re going to take algorithms and applications
that we know, and we’re going to map them onto the hardware. We’re going
to try to optimize the algorithm as we map it in a way that makes the
overall application less vulnerable to noise.
Before we run these applications, we have a rule of thumb about how
often they should fail in tests and general use. But after this software
optimization that my team is working on, ideally it will fail much less
often. That helps us explore more in the algorithm space because it
gives you confidence that you can push quantum computers towards
more complicated systems. I think it’s important to note that we have
the space to be very exploratory, to look at problems people aren’t
thinking about.
F: What’s the worst misconception about quantum computers that you run into? What do people always seem to get wrong about them?
KB: The one misconception is that it’s magic. Quantum computers aren’t magic; they don’t allow you to solve all problems.
Here’s the thing — in classical computing, we have the sense that
there are some problems that are easy and some problems which are really
hard, which means we can't solve them in polynomial time [a computer science term for tasks a computer can complete in a reasonable amount of time, even as the problem grows].
It turns out we spend a lot of our computing power trying to solve
the problems that we can’t solve in polynomial time, and we just have
approximations.
Quantum computers do allow you to solve some of the problems which
are intractable on a classic computer, but they don’t solve them all.
Usually, the thing which drives me crazy is when a quantum computer
article says they can solve all problems instantly because they do
infinite parallel calculations at once.
I’m really excited for when we have large scale quantum computers. With some problems — the famous example is the Traveling Salesman Problem — we know we can’t solve it for all
possible routes of salesmen, but we have to solve it anyway. The
classical computer does the best it can, and then when it blows it,
nobody’s upset. You’re like, ‘oh okay, well it’s going to get it wrong
some of the time.’
When we have large-scale quantum computers, we can test algorithms
like that more accurately. We’ll know we can solve the classical
problem, just occasionally the new computer gets bogged down.
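[To make the combinatorial growth Brown describes concrete, here is a short illustrative sketch we added (not from Brown or the STAQ project): a brute-force traveling-salesman search in Python that checks every ordering of a handful of made-up cities, an approach that stops being feasible long before real-world problem sizes.]

```python
# Brute-force Traveling Salesman sketch: try every ordering of a handful of
# made-up cities. The number of orderings grows factorially, which is why
# classical computers fall back on approximations for larger instances.
from itertools import permutations
from math import dist, factorial

cities = [(0, 0), (2, 1), (5, 3), (1, 4), (4, 0)]    # 5 hypothetical city coordinates

def tour_length(order):
    # Total length of the closed tour visiting the cities in this order.
    return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

best = min(permutations(cities), key=tour_length)
print("shortest tour found:", round(tour_length(best), 2))
print("orderings checked for 5 cities:", factorial(5))    # 120
print("orderings for 20 cities:", factorial(20))          # ~2.4 quintillion
```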
I’m a big optimist. I guess that’s how you end up working this kind of field.
Open science is the movement to make scientific research, data
and dissemination accessible to all levels of an inquiring society,
amateur or professional. Open science is transparent and accessible
knowledge that is shared and developed through collaborative networks. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge.
Open science began in the 17th century with the advent of the academic journal,
when the societal demand for access to scientific knowledge reached a
point where it became necessary for groups of scientists to share
resources with each other so that they could collectively do their work. In modern times there is debate about the extent to which scientific information should be shared.
The conflict is between the desire of scientists to have access to
shared resources versus the desire of individual entities to profit when
other entities partake of their resources. Additionally, the status of open access and resources that are available for its promotion are likely to differ from one field of academic inquiry to another.
Background
Science
is broadly understood as collecting, analyzing, publishing,
reanalyzing, critiquing, and reusing data. Proponents of open science
identify a number of barriers that impede or dissuade the broad
dissemination of scientific data.
These include financial paywalls
of for-profit research publishers, restrictions on usage applied by
publishers of data, poor formatting of data or use of proprietary
software that makes it difficult to re-purpose, and cultural reluctance
to publish data for fears of losing control of how the information is
used.
Open Science Taxonomy
According to the FOSTER taxonomy, open science can often include aspects of open access, open data and the open-source movement, since modern science requires software to process data and information. Open research computation also addresses the problem of reproducibility of scientific results. The FOSTER Open Science taxonomy is available in RDF/XML and as a high-resolution image.
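Because the taxonomy is distributed as RDF/XML, it can be explored programmatically. The sketch below is a hypothetical example using the rdflib library; the local filename and the assumption that the taxonomy exposes SKOS preferred labels are illustrative assumptions, not details given by FOSTER.

```python
# Hypothetical sketch: parse a locally saved RDF/XML copy of the FOSTER
# Open Science taxonomy and print any SKOS preferred labels it contains.
# The filename and the use of SKOS labels are assumptions for illustration.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("foster-open-science-taxonomy.rdf", format="xml")  # assumed local file

for concept, _, label in g.triples((None, SKOS.prefLabel, None)):
    print(concept, "->", label)
```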
Types
The term
"open science" does not have any one fixed definition or
operationalization. On the one hand, it has been referred to as a
"puzzling phenomenon".
On the other hand, the term has been used to encapsulate a series of
principles that aim to foster scientific growth and its complementary
access to the public. Two influential sociologists, Benedikt Fecher and
Sascha Friesike, have created multiple "schools of thought" that
describe the different interpretations of the term.
According to Fecher and Friesike ‘Open Science’ is an umbrella
term for various assumptions about the development and dissemination of
knowledge. To show the term’s multitudinous perceptions, they
differentiate between five Open Science schools of thought:
Infrastructure School
The
infrastructure school is founded on the assumption that "efficient"
research depends on the availability of tools and applications.
Therefore, the "goal" of the school is to promote the creation of openly
available platforms, tools, and services for scientists. Hence, the
infrastructure school is concerned with the technical infrastructure
that promotes the development of emerging and developing research
practices through the use of the internet, including the use of software
and applications, in addition to conventional computing networks. In
that sense, the infrastructure school regards open science as a
technological challenge. The infrastructure school is tied closely to the notion of "cyberscience", which describes the application of information and communication technologies to scientific research and which has fostered the development of the infrastructure school. Specific elements of this development include increasing collaboration and interaction between scientists, as well as the development of "open-source science" practices. The sociologists discuss two central trends in the Infrastructure school:
1. Distributed computing: This trend encapsulates practices that
outsource complex, process-heavy scientific computing to a network of
volunteer computers around the world. The example that the sociologists cite in their paper is that of the Open Science Grid, which
enables the development of large-scale projects that require
high-volume data management and processing, which is accomplished
through a distributed computer network. Moreover, the grid provides the
necessary tools that the scientists can use to facilitate this process.
2. Social and Collaboration Networks of Scientists: This trend
encapsulates the development of software that makes interaction with
other researchers and scientific collaborations much easier than
traditional, non-digital practices. Specifically, the trend is focused
on implementing newer Web 2.0 tools to facilitate research-related activities on the internet. De Roure and colleagues (2008) list a series of four key capabilities which they believe compose a Social Virtual Research Environment (SVRE):
The SVRE should primarily aid the management and sharing of
research objects. The authors define these to be a variety of digital
commodities that are used repeatedly by researchers.
Second, the SVRE should have inbuilt incentives for researchers to make their research objects available on the online platform.
Third, the SVRE should be "open" as well as "extensible", implying
that different types of digital artifacts composing the SVRE can be
easily integrated.
Fourth, the authors propose that the SVRE is more than a simple
storage tool for research information. Instead, the researchers propose
that the platform should be "actionable". That is, the platform should
be built in such a way that research objects can be used in the conduct
of research as opposed to simply being stored.
Measurement School
The
measurement school, in the view of the authors, deals with developing
alternative methods to determine scientific impact. This school
acknowledges that measurements of scientific impact are crucial to a
researcher's reputation, funding opportunities, and career development.
Hence, the authors argue, that any discourse about Open Science is
pivoted around developing a robust measure of scientific impact in the
digital age. The authors then discuss other research indicating support
for the measurement school. The three key currents of previous
literature discussed by the authors are:
Peer review is described as being time-consuming.
The impact of an article, tied to the name of the authors of the
article, is related more to the circulation of the journal rather than
the overall quality of the article itself.
New publishing formats that are closely aligned with the philosophy
of Open Science are rarely found in the format of a journal that allows
for the assignment of the impact factor.
Hence, this school argues that there are faster impact measurement
technologies that can account for a range of publication types as well
as social media web coverage of a scientific contribution to arrive at a
complete evaluation of how impactful the science contribution was. The
gist of the argument for this school is that hidden uses like reading,
bookmarking, sharing, discussing and rating are traceable activities,
and these traces can and should be used to develop a newer measure of
scientific impact. The umbrella term for this new type of impact measurement is altmetrics, coined in a 2011 article by Priem et al.
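As a toy illustration only (not a formula proposed by Priem et al. or any altmetrics provider), such traces could be folded into a single score by weighting each type of traceable event; the event names and weights below are invented for the example.

```python
# Toy altmetric score: weight and sum traceable online events for one paper.
# Event types and weights are illustrative assumptions, not a standard metric.
events = {"tweets": 42, "blog_mentions": 3, "bookmarks": 17, "downloads": 850}
weights = {"tweets": 0.5, "blog_mentions": 5.0, "bookmarks": 1.0, "downloads": 0.05}

score = sum(weights[event] * count for event, count in events.items())
print(f"toy altmetric score: {score:.1f}")   # 42*0.5 + 3*5.0 + 17*1.0 + 850*0.05 = 95.5
```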
Notably, the authors discuss evidence that altmetrics differ from
traditional webometrics which are slow and unstructured. Altmetrics are
proposed to rely upon a greater set of measures that account for tweets,
blogs, discussions, and bookmarks. The authors claim that the existing
literature has often proposed that altmetrics should also encapsulate
the scientific process, and measure the process of research and
collaboration to create an overall metric. However, the authors are
explicit in their assessment that few papers offer methodological
details as to how to accomplish this. The authors use this and the
general dearth of evidence to conclude that research in the area of
altmetrics is still in its infancy.
Public School
According
to the authors, the central concern of the school is to make science
accessible to a wider audience. The inherent assumption of this school,
as described by the authors, is that the newer communication
technologies such as Web 2.0 allow scientists to open up the research
process and also allow scientist to better prepare their "products of
research" for interested non-experts. Hence, the school is characterized
by two broad streams: one argues for the access of the research process
to the masses, whereas the other argues for increased access to the
scientific product to the public.
Accessibility to the Research Process: Communication technology
allows not only for the constant documentation of research but also
promotes the inclusion of many different external individuals in the
process itself. The authors cite citizen science, the participation of non-scientists and amateurs in research. The authors
discuss instances in which gaming tools allow scientists to harness the
brain power of a volunteer workforce to run through several permutations
of protein-folded structures. This allows for scientists to eliminate
many more plausible protein structures while also "enriching" the
citizens about science. The authors also discuss a common criticism of
this approach: the amateur nature of the participants threatens to undermine the scientific rigor of experimentation.
Comprehensibility of the Research Result: This stream of research
concerns itself with making research understandable for a wider
audience. The authors describe a host of authors that promote the use of
specific tools for scientific communication, such as microblogging
services, to direct users to relevant literature. The authors claim that
this school proposes that it is the obligation of every researcher to
make their research accessible to the public. The authors then proceed
to discuss if there is an emerging market for brokers and mediators of
knowledge that is otherwise too complicated for the public to grasp
effortlessly.
Democratic School
The
democratic school concerns itself with the concept of access to
knowledge. As opposed to focusing on the accessibility of research and
its understandability, advocates of this school focus on the access of
products of research to the public. The central concern of the school is
with the legal and other obstacles that hinder the access of research
publications and scientific data to the public. The authors argue that
proponents of this school assert that any research product should be
freely available. The authors argue that the underlying notion of this
school is that everyone has the same, equal right of access to
knowledge, especially in the instances of state-funded experiments and
data. The authors categorize two central currents that characterize
this school: Open Access and Open Data.
Open Data: The authors discuss existing attitudes in the field
that rebel against the notion that publishing journals should claim
copyright over experimental data, which prevents the re-use of data and
therefore lowers the overall efficiency of science in general. The claim
is that journals have no use of the experimental data and that allowing
other researchers to utilize this data will be fruitful. The authors
cite other literature streams that discovered that only a quarter of
researchers agree to share their data with other researchers because of
the effort required for compliance.
Open Access to Research Publication: According to this school, there
is a gap between the creation and sharing of knowledge. Proponents argue, as the authors describe, that even though scientific knowledge doubles every 5 years, access to this knowledge remains limited. These
proponents consider access to knowledge as a necessity for human
development, especially in the economic sense.
Pragmatic School
The
pragmatic school considers Open Science as the possibility to make
knowledge creation and dissemination more efficient by increasing the
collaboration throughout the research process. Proponents argue that
science could be optimized by modularizing the process and opening up
the scientific value chain. ‘Open’ in this sense follows very much the
concept of open innovation.
Open innovation, for instance, transfers the outside-in (including external knowledge in the production process) and inside-out (spillovers from the formerly closed production process) principles to science. Web 2.0 is considered a set of helpful tools that can foster collaboration (sometimes also referred to as Science 2.0). Further, citizen science
is seen as a form of collaboration that includes knowledge and
information from non-scientists. Fecher and Friesike describe data
sharing as an example of the pragmatic school as it enables researchers
to use other researchers’ data to pursue new research questions or to
conduct data-driven replications.
History
The widespread adoption of the institution of the scientific journal
marks the beginning of the modern concept of open science. Before this
time societies pressured scientists into secretive behaviors.
Before journals
Before the advent of scientific journals, scientists had little to gain and much to lose by publicizing scientific discoveries. Many scientists, including Galileo, Kepler, Isaac Newton, Christiaan Huygens, and Robert Hooke,
made claim to their discoveries by describing them in papers coded in
anagrams or cyphers and then distributing the coded text.
Their intent was to develop their discovery into something off which
they could profit, then reveal their discovery to prove ownership when
they were prepared to make a claim on it.
The system of not publicizing discoveries caused problems because
discoveries were not shared quickly and because it sometimes was
difficult for the discoverer to prove priority. Newton and Gottfried Leibniz both claimed priority in discovering calculus. Newton said that he wrote about calculus in the 1660s and 1670s, but did not publish until 1693. Leibniz published "Nova Methodus pro Maximis et Minimis",
a treatise on calculus, in 1684. Debates over priority are inherent in
systems where science is not published openly, and this was problematic
for scientists who wanted to benefit from priority.
These cases are representative of a system of aristocratic
patronage in which scientists received funding to develop either
immediately useful things or to entertain.
In this sense, funding of science gave prestige to the patron in the
same way that funding of artists, writers, architects, and philosophers
did.
Because of this, scientists were under pressure to satisfy the desires
of their patrons, and discouraged from being open with research which
would bring prestige to persons other than their patrons.
Emergence of academies and journals
Eventually the individual patronage system ceased to provide the scientific output which society began to demand. Single patrons could not sufficiently fund scientists, who had unstable careers and needed consistent funding.
The development which changed this was a trend to pool research by
multiple scientists into an academy funded by multiple patrons. In 1660 England established the Royal Society and in 1666 the French established the French Academy of Sciences.
Between the 1660s and 1793, governments gave official recognition to 70
other scientific organizations modeled after those two academies. In 1665, Henry Oldenburg became the editor of Philosophical Transactions of the Royal Society, the first academic journal devoted to science, and the foundation for the growth of scientific publishing. By 1699 there were 30 scientific journals; by 1790 there were 1052. Since then publishing has expanded at even greater rates.
Popular Science Writing
The first popular science periodical of its kind was published in 1872, under a name that is still a modern portal for science journalism: Popular Science. The magazine claims to have
documented the invention of the telephone, the phonograph, the electric
light and the onset of automobile technology. The magazine goes so far
as to claim that the "history of Popular Science is a true reflection of
humankind's progress over the past 129+ years".
Discussions of popular science writing most often center their arguments on some type of "science boom". A recent historiographic account of popular science traces mentions of the term "science boom" to Daniel Greenberg's Science and Government Reports in 1979, which posited that "Scientific magazines are bursting out all over". Similarly, this account discusses the publication Time, and its 1980 cover story on Carl Sagan, as propagating the claim that popular science has "turned into enthusiasm". Crucially, this secondary account asks the important question of what was considered popular "science" to begin with. The paper claims that any account of how popular science writing bridged the gap between the informed masses and the expert scientists must first consider who was considered a scientist to begin with.
Collaboration among academies
In
modern times many academies have pressured researchers at publicly
funded universities and research institutions to engage in a mix of
sharing research and making some technological developments proprietary.
Some research products have the potential to generate commercial
revenue, and in hope of capitalizing on these products, many research
institutions withhold information and technology which otherwise would
lead to overall scientific advancement if other research institutions
had access to these resources.
It is difficult to predict the potential payoffs of a technology or to
assess the costs of withholding it, but there is general agreement that
the benefit to any single institution of hoarding a technology is not as
great as the cost of withholding it from all other research
institutions.
Coining of the phrase "open science"
The exact phrase "Open Science" was coined by Steve Mann
in 1998, at which time he also registered the domain names
openscience.com and openscience.org, which he sold to egruyter.com in
2011.
Politics
In
many countries, governments fund some science research. Scientists often
publish the results of their research by writing articles and donating
them to be published in scholarly journals, which frequently are
commercial. Public entities such as universities and libraries subscribe
to these journals. Michael Eisen, a founder of the Public Library of Science,
has described this system by saying that "taxpayers who already paid
for the research would have to pay again to read the results."
In December 2011, some United States legislators introduced a bill called the Research Works Act,
which would prohibit federal agencies from issuing grants with any
provision requiring that articles reporting on taxpayer-funded research
be published for free to the public online.
Darrell Issa, a co-sponsor of the bill, explained the bill by saying
that "Publicly funded research is and must continue to be absolutely
available to the public. We must also protect the value added to
publicly funded research by the private sector and ensure that there is
still an active commercial and non-profit research community." One response to this bill was protests from various researchers; among them was a boycott of commercial publisher Elsevier called The Cost of Knowledge.
In April 2016, the Dutch Presidency of the Council of the European Union called for action to migrate European Commission funded research to Open Science. European Commissioner Carlos Moedas introduced the Open Science Cloud at the Open Science Conference in Amsterdam on
April 4–5. At the same meeting, the Amsterdam Call for Action on Open Science was presented, a living document outlining concrete actions for the European Community to move to Open Science.
Reaction
Arguments against
The open sharing of research data is not widely practiced
Arguments against open science tend to advance several concerns. These
include the potential for some scholars to capitalize on data that other
scholars have worked hard to collect without collecting data themselves;
the potential for less qualified individuals to misuse open data; and
the argument that novel data are more valuable than reproductions or
replications of older findings.
Too much unsorted information overwhelms scientists
Some scientists find inspiration in their own thoughts by restricting the amount of information they get from others. Alexander Grothendieck
has been cited as a scientist who wanted to learn with restricted
influence when he said that he wanted to "reach out in (his) own way to
the things (he) wished to learn, rather than relying on the notions of
consensus."
Potential misuse
In 2011, Dutch researchers announced their intention to publish a research paper in the journal Science describing the creation of a strain of H5N1 influenza which can be easily passed between ferrets, the mammals which most closely mimic the human response to the flu. The announcement triggered a controversy in both political and scientific circles about the ethical implications of publishing scientific data which could be used to create biological weapons. These events are examples of how science data could potentially be misused. Scientists have collaboratively agreed to limit their own fields of inquiry on occasions such as the Asilomar conference on recombinant DNA in 1975, and a proposed 2015 worldwide moratorium on a human-genome-editing technique.
The public will misunderstand science data
In 2009 NASA launched the Kepler
spacecraft and promised that they would release collected data in June
2010. Later they decided to postpone release so that their scientists
could look at it first. Their rationale was that non-scientists might
unintentionally misinterpret the data, and NASA scientists thought it
would be preferable for them to be familiar with the data in advance so
that they could report on it accurately.
Increasing the scale of science will make verification of any discovery more difficult
When more people report data, it will take longer to consider all of it
before drawing any conclusion, and more of that data may be of lower
quality.
Low-quality science
Post-publication peer review, a staple of open science, has been
criticized as promoting the production of a large volume of
lower-quality papers.
Specifically, critics assert that because preprint servers do not
guarantee quality, the veracity of papers will be difficult for
individual readers to assess. This, they argue, will lead to rippling
effects of false science, akin to the recent epidemic of false news
propagated with ease on social media websites.
A commonly cited remedy is a model in which anything may be published
but a subsequent filter-curator layer ensures that all publications meet
a basic standard of quality.
Arguments in favor
A number of scholars across disciplines have advanced various arguments
in favor of open science. These generally focus on the perceived value
of open science in improving the transparency and validity of research,
as well as on public ownership of science, particularly science that is
publicly funded. For example, in January 2014 J. Christopher
Bare published a comprehensive "Guide to Open Science".
Likewise, in January 2017, a group of scholars known for advocating
open science published a "manifesto" for open science in the journal Nature. In November 2017, a group of early career researchers published their own "manifesto" in the journal Genome Biology,
stating that it is their task to make scientific research open and to
commit to Open Science principles.
Open access publication of research reports and data allows for rigorous peer-review
An article published by a team of NASA astrobiologists in 2010 in Science reported a bacterium known as GFAJ-1 that could purportedly metabolize arsenic (unlike any previously known species of lifeform). This finding, along with NASA's claim that the paper "will impact the search for evidence of extraterrestrial life", met with criticism within the scientific community. Much of the scientific commentary and critique around this issue took place in public forums, most notably on Twitter, where hundreds of scientists and non-scientists created a hashtag community around the hashtag #arseniclife.
University of British Columbia astrobiologist Rosie Redfield, one of
the most vocal critics of the NASA team's research, also submitted a
draft of a research report of a study that she and colleagues conducted
which contradicted the NASA team's findings; the draft report appeared
in arXiv,
an open-research repository, and Redfield called, on her lab's research
blog, for peer review of both their research and the NASA team's
original paper.
Researcher Jeff Rouder defined Open Science as "endeavoring to preserve
the rights of others to reach independent conclusions about your data
and work".
Science is publicly funded so all results of the research should be publicly available
Public funding of research has long been cited as one of the primary reasons for providing Open Access to research articles.
Since there is also significant value in other parts of research, such as
code, data, protocols, and research proposals, a similar argument is
made that because these are publicly funded, they should be publicly
available under a Creative Commons licence.
Open Science will make science more reproducible and transparent
Increasingly the reproducibility of science is being questioned and the term "reproducibility crisis" has been coined.
For example, psychologist Stuart Vyse notes that "(r)ecent research
aimed at previously published psychology studies has
demonstrated--shockingly--that a large number of classic phenomena
cannot be reproduced, and the popularity of p-hacking is thought to be one of the culprits." Open Science approaches are proposed as one way to help increase the reproducibility of work as well as to help mitigate against manipulation of data.
Open Science has more impact
There are several components to impact in research, many of which are hotly debated.
However, under traditional scientific metrics, parts of Open Science such
as Open Access and Open Data have been shown to outperform their
traditional counterparts.
Open Science will help answer uniquely complex questions
Recent arguments in favor of Open Science have maintained that
Open Science is a necessary tool to begin answering immensely complex
questions, such as the neural basis of consciousness.
The typical argument holds that these types of investigations are too
complex to be carried out by any one individual and therefore must rely
on a network of open scientists to be accomplished. By its nature, this
kind of investigation also makes such "open science" into "big science".
Organizations and projects
Big scientific projects are more likely to practice open science than small projects.
Different projects conduct, advocate, develop tools for, or fund open
science, and many organizations run multiple projects. For example, the
Allen Institute for Brain Science conducts numerous open science projects while the Center for Open Science
has projects to conduct, advocate, and create tools for open science.
Open science is stimulating the emergence of sub-branches such as open synthetic biology and open therapeutics.
Organizations have extremely diverse sizes and structures. The Open Knowledge Foundation
(OKF) is a global organization sharing large data catalogs, running
face to face conferences, and supporting open source software projects.
In contrast, Blue Obelisk is an informal group of chemists and associated cheminformatics projects. The tableau of organizations is dynamic with some organizations becoming defunct, e.g., Science Commons, and new organizations trying to grow, e.g., the Self-Journal of Science. Common organizing forces include the knowledge domain, type of service provided, and even geography, e.g., OCSDNet's concentration on the developing world.
Conduct
Many
open science projects focus on gathering and coordinating encyclopedic
collections of large amounts of organized data. The Allen Brain Atlas maps gene expression in human and mouse brains; the Encyclopedia of Life documents all the terrestrial species; the Galaxy Zoo classifies galaxies; the International HapMap Project maps the haplotypes of the human genome; the Monarch Initiative makes available integrated public model organism and clinical data; and the Sloan Digital Sky Survey
regularizes and publishes data sets from many sources. All of these
projects accrete information provided by many different researchers
with different standards of curation and contribution.
Other projects are organized around the completion of goals that require extensive collaboration. For example, OpenWorm, a multidisciplinary project, seeks to build a cellular-level simulation of a roundworm. The Polymath Project
seeks to solve difficult mathematical problems by enabling faster
communications within the discipline of mathematics. The Collaborative
Replications and Education project recruits undergraduate students as citizen scientists by offering funding. Each project defines its needs for contributors and collaboration.
Other advocates concentrate on educating scientists about
appropriate open science software tools. Education is available as
training seminars, e.g., the Software Carpentry project; as domain specific training materials, e.g., the Data Carpentry project; and as materials for teaching graduate classes, e.g., the Open Science Training Initiative. Many organizations also provide education in the general principles of open science.
Within scholarly societies there are also sections and interest groups that promote open science practices. The Ecological Society of America has an Open Science Section. Similarly, the Society for American Archaeology has an Open Science Interest Group.
Publishing
Replacing
the current scientific publishing model is one goal of open science.
High costs of access to the literature gave rise to protests such as The Cost of Knowledge and to the sharing of papers without publisher consent, e.g., via Sci-hub and ICanHazPDF. New organizations are experimenting with the open access model: the Public Library of Science, or PLOS, is creating a library of open access journals and scientific literature; F1000Research provides open publishing and open peer review for the life sciences; figshare archives and shares images, readings, and other data; and arXiv provides electronic preprints across many fields. Many individual journals also experiment with open access. Other publishing experiments include delayed and hybrid models.
Software
A variety of computer resources support open science. These include software like the Open Science Framework from the Center for Open Science to manage project information, data archiving and team coordination; distributed computing services like Ibercivis to utilize unused CPU time for computationally intensive tasks; and services like Experiment.com to provide crowdsourced funding for research projects.
Blockchain platforms for open science have been proposed. The first such platform is the Open Science Organization,
which aims to address the fragmentation of the scientific ecosystem and
the difficulty of producing validated, high-quality science. The
initiatives of the Open Science Organization include the Interplanetary
Idea System (IPIS), the Researcher Index (RR-index), the Unique
Researcher Identity (URI), and the Research Network. The Interplanetary
Idea System is a blockchain-based system that tracks the evolution of
scientific ideas over time. It serves to quantify ideas based on
uniqueness and importance, thus allowing the scientific community to
identify pain points with current scientific topics and preventing
unnecessary re-invention of previously conducted science. The Researcher
Index aims to establish a data-driven statistical metric for
quantifying researcher impact. The Unique Researcher Identity is a
blockchain technology based solution for creating a single unifying
identity for each researcher, which is connected to the researcher's
profile, research activities, and publications. The Research Network is a
social networking platform for researchers.
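To make the idea-tracking concept described above more concrete, the sketch below shows, in Python, how an append-only, hash-chained ledger could record who contributed an idea and when, so that later tampering with the record is detectable. This is a hypothetical illustration of the general technique only; the source does not describe IPIS's actual data structures, and the record fields, class names, and ORCID-style identifiers here are assumptions.

```python
# Minimal sketch of a hash-chained, append-only ledger for idea provenance.
# Illustrative only; not the actual IPIS implementation.
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class IdeaRecord:
    author_id: str    # e.g. an ORCID-style identifier (illustrative)
    summary: str      # short description of the idea
    parent_hash: str  # digest of the previous record in the chain
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        # Hash the canonical JSON form of the record.
        payload = json.dumps(asdict(self), sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()


class IdeaLedger:
    """Append-only chain of idea records; tampering breaks the hash links."""

    def __init__(self):
        self.records: list[IdeaRecord] = []

    def append(self, author_id: str, summary: str) -> str:
        parent = self.records[-1].digest() if self.records else "genesis"
        record = IdeaRecord(author_id, summary, parent)
        self.records.append(record)
        return record.digest()

    def verify(self) -> bool:
        # Recompute the chain: each record must reference its predecessor's hash.
        expected = "genesis"
        for record in self.records:
            if record.parent_hash != expected:
                return False
            expected = record.digest()
        return True


ledger = IdeaLedger()
ledger.append("0000-0002-1825-0097", "Open dataset for replicating study X")
ledger.append("0000-0001-5109-3700", "Refinement: stratify the replication by cohort")
assert ledger.verify()
```

In a real blockchain deployment the chain would be replicated and agreed upon by many nodes rather than held by a single ledger object; the sketch only shows the provenance-preserving data structure itself.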
Preprint servers
Preprint servers come in many varieties, but they share a standard set of
traits: they seek to create a quick, free mode of
communicating scientific knowledge to the public. Preprint servers act
as a venue for rapidly disseminating research, and they vary in their
policies on when articles may be submitted relative to journal acceptance.
Preprint servers also typically lack a formal peer-review process:
most have some type of quality check in place
to ensure a minimum standard of publication, but this mechanism is not
the same as peer review. Some preprint servers have
explicitly partnered with the broader open science movement. Preprint servers can offer services similar to those of journals, and Google Scholar indexes many of them and collects information about citations to preprints. The case for preprint servers is often made on the basis of the slow pace of conventional publication formats.
The motivation for starting SocArXiv, an open-access preprint server for
social science research, is the claim that valuable research submitted
to traditional venues often takes months or even years to be published,
which slows the process of science significantly. Another argument made
in favor of preprint servers like SocArXiv is the quality and speed of
feedback offered to scientists on their pre-published work.
The founders of SocArXiv claim that their platform allows researchers
to gain easy feedback from colleagues on the platform, thereby
allowing scientists to bring their work to the highest possible
quality before formal publication and circulation. The founders of
SocArXiv further claim that their platform affords authors the
greatest level of flexibility in updating and editing their work,
ensuring that the latest version is available for rapid dissemination. The
founders claim that this is not traditionally the case with formal
journals, which impose formal procedures for updating published
articles.
Perhaps the strongest advantage of some preprint servers is their
seamless compatibility with Open Science software such as the Open
Science Framework. The founders of SocArXiv claim that their preprint
server connects all aspects of the research life cycle in OSF with the
article being published on the preprint server. According to the
founders, this allows for greater transparency and minimal work on the
authors' part.
One criticism of preprint servers is their potential to foster a
culture of plagiarism. For example, the popular physics preprint server
arXiv had to withdraw 22 papers when it came to light that they were
plagiarized. In June 2002, a high-energy physicist in Japan was
contacted by a man called Ramy Naboulsi, a non-institutionally
affiliated mathematical physicist. Naboulsi asked the physicist,
Watanabe, to upload his papers to arXiv, as he was unable to do so
himself because he lacked an institutional affiliation. The papers were
later found to have been copied from the proceedings of a physics
conference.
Preprint servers are increasingly developing measures to address
this plagiarism problem. In developing nations such as India and China,
explicit measures are being taken to combat it.
These measures usually involve creating a central repository of all
available preprints, which allows traditional plagiarism-detection
algorithms to be run against new submissions (a simple version of such a
check is sketched below). Nonetheless, this remains a pressing issue in the discussion of preprint servers, and consequently for Open Science.
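As an illustration of the kind of similarity check a central repository could run over new submissions, the sketch below implements a simple word-shingle and Jaccard-similarity comparison in Python. It is a minimal sketch only: the function names and the 0.35 threshold are assumptions, and real services rely on more sophisticated, scalable pipelines (for example, MinHash-based indexing) rather than pairwise comparison of full texts.

```python
# Minimal sketch of shingle-based overlap detection over a preprint repository.
# Illustrative only; not any specific server's implementation.
import re


def shingles(text: str, k: int = 5) -> set:
    """Return the set of k-word shingles (consecutive word k-grams) of a document."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets (0.0 when both are empty)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def flag_overlaps(new_preprint: str, repository: dict, threshold: float = 0.35) -> list:
    """Compare a submission against every archived preprint and flag close matches."""
    new_shingles = shingles(new_preprint)
    hits = []
    for preprint_id, text in repository.items():
        score = jaccard(new_shingles, shingles(text))
        if score >= threshold:
            hits.append((preprint_id, score))
    # Highest-similarity matches first, for a moderator to review.
    return sorted(hits, key=lambda pair: pair[1], reverse=True)
```

Flagged pairs would still require human review, since high textual overlap can also arise legitimately, for example from an author's own earlier preprint or from standard methods sections.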