Wednesday, May 2, 2018

Cloud computing

From Wikipedia, the free encyclopedia

Cloud computing metaphor: the group of networked elements providing services need not be individually addressed or managed by users; instead, the entire provider-managed suite of hardware and software can be thought of as an amorphous cloud.

Cloud computing is an information technology (IT) paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.

Third-party clouds enable organizations to focus on their core businesses instead of expending resources on computer infrastructure and maintenance.[1] Advocates note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand.[1][2][3] Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models.[4]

Since the launch of Amazon EC2 in 2006, the availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing has led to growth in cloud computing.[5][6][7]

History

While the term "cloud computing" was popularized with Amazon.com releasing its Elastic Compute Cloud product in 2006,[8] references to the phrase "cloud computing" appeared as early as 1996, with the first known mention in a Compaq internal document.[9]

The cloud symbol was used to represent networks of computing equipment in the original ARPANET as early as 1977,[10] and in the CSNET by 1981[11] — both predecessors to the Internet itself. The word cloud was used as a metaphor for the Internet, and a standardized cloud-like shape was used to denote a network on telephony schematics. With this simplification, the implication is that the specifics of how the end points of a network are connected are not relevant for the purposes of understanding the diagram.[citation needed]

The term cloud was used to refer to platforms for distributed computing as early as 1993, when Apple spin-off General Magic and AT&T used it in describing their (paired) Telescript and PersonaLink technologies.[12] In Wired's April 1994 feature "Bill and Andy's Excellent Adventure II", Andy Hertzfeld commented on Telescript, General Magic's distributed programming language:
"The beauty of Telescript ... is that now, instead of just having a device to program, we now have the entire Cloud out there, where a single program can go and travel to many different sources of information and create sort of a virtual service. No one had conceived that before. The example Jim White [the designer of Telescript, X.400 and ASN.1] uses now is a date-arranging service where a software agent goes to the flower store and orders flowers and then goes to the ticket shop and gets the tickets for the show, and everything is communicated to both parties."[13]

Early history

During the 1960s, the initial concepts of time-sharing became popularized via RJE (Remote Job Entry);[14] this terminology was mostly associated with large vendors such as IBM and DEC. Full time-sharing solutions were available by the early 1970s on such platforms as Multics (on GE hardware), Cambridge CTSS, and the earliest UNIX ports (on DEC hardware). Yet the "data center" model, where users submitted jobs to operators to run on IBM mainframes, was overwhelmingly predominant.

In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service, but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively.[citation needed] They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extended this boundary to cover all servers as well as the network infrastructure.[15] As computers became more diffused, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing.[citation needed] They experimented with algorithms to optimize the infrastructure, platform, and applications to prioritize CPUs and increase efficiency for end users.[16]

2000s

Cloud computing as it is known today took shape during the 2000s.

In August 2006, Amazon introduced its Elastic Compute Cloud.[8]

In April 2008, Google released Google App Engine in beta.[17]

In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.[18]

By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them"[19] and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."[20]

2010s

In February 2010, Microsoft released Microsoft Azure, which had been announced in October 2008.[21]

In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack. The OpenStack project intended to help organizations offer cloud-computing services running on standard hardware. The early code came from NASA's Nebula platform as well as from Rackspace's Cloud Files platform. As an open-source offering, alongside other open-source solutions such as CloudStack, Ganeti and OpenNebula, it has attracted attention from several key communities. Several studies compare these open-source offerings based on a set of criteria.[22][23][24][25][26][27][28]

On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet.[29] Among the various components of the Smarter Computing foundation, cloud computing is a critical part. On June 7, 2012, Oracle announced the Oracle Cloud.[30] This cloud offering is poised to be the first to provide users with access to an integrated set of IT solutions, including the Applications (SaaS), Platform (PaaS), and Infrastructure (IaaS) layers.[31][32][33]

In May 2012, Google Compute Engine was released in preview, before being rolled out into General Availability in December 2013.[34]

Similar concepts

The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge of or expertise with each one of them. The cloud aims to cut costs, and helps the users focus on their core business instead of being impeded by IT obstacles.[35] The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human errors.[35]

Users routinely face difficult business problems. Cloud computing adopts concepts from Service-oriented Architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.

Cloud computing also leverages concepts from utility computing to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models. In addition, measured services are an essential part of the feedback loop in autonomic computing, allowing services to scale on-demand and to perform automatic failure recovery. Cloud computing is a kind of grid computing; it has evolved by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data/compute intensive parallel applications with much more affordable prices compared to traditional parallel computing techniques.[35]
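
As a toy illustration of such pay-per-use metering, a monthly bill is simply metered usage multiplied by unit rates and summed. The resource names, rates and usage figures below are invented for the example and are not real provider prices.

```python
# Toy sketch of utility-style metered billing; all rates and usage figures
# here are invented for illustration, not actual cloud provider pricing.
USAGE = {"vm_hours": 720, "storage_gb_months": 50, "egress_gb": 120}
RATES = {"vm_hours": 0.023, "storage_gb_months": 0.02, "egress_gb": 0.09}  # USD per unit

total = 0.0
for item, quantity in USAGE.items():
    cost = quantity * RATES[item]
    total += cost
    print(f"{item}: {quantity} x ${RATES[item]:.3f} = ${cost:.2f}")
print(f"total: ${total:.2f}")
```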

Cloud computing shares characteristics with:
  • Client–server model—Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).[36]
  • Computer bureau—A service bureau providing computer services, particularly from the 1960s to 1980s.
  • Grid computing—"A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
  • Fog computing—Distributed computing paradigm that provides data, compute, storage and application services closer to client or near-user edge devices, such as network routers. Furthermore, fog computing handles data at the network level, on smart devices and on the end-user client side (e.g. mobile devices), instead of sending data to a remote location for processing.
  • Mainframe computer—Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as: census; industry and consumer statistics; police and secret intelligence services; enterprise resource planning; and financial transaction processing.
  • Utility computing—The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[37][38]
  • Peer-to-peer—A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).
  • Green computing
  • Cloud sandbox—A live, isolated computer environment in which a program, code or file can run without affecting the application in which it runs.

Characteristics

Cloud computing exhibits the following key characteristics:
  • Agility for organizations may be improved, as cloud computing may increase users' flexibility with re-provisioning, adding, or expanding technological infrastructure resources.
  • Cost reductions are claimed by cloud providers. A public-cloud delivery model converts capital expenditures (e.g., buying servers) to operational expenditure.[39] This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and need not be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is "fine-grained", with usage-based billing options. In addition, fewer in-house IT skills are required for implementation of projects that use cloud computing.[40] The e-FISCAL project's state-of-the-art repository[41] contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
  • Device and location independence[42] enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect to it from anywhere.[40]
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places (e.g., different work locations, while travelling, etc.).
  • Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
    • centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
    • peak-load capacity increases (users need not engineer and pay for the resources and equipment to meet their highest possible load-levels)
    • utilisation and efficiency improvements for systems that are often only 10–20% utilised.[43][44]
  • Performance is monitored by IT experts from the service provider, and consistent and loosely coupled architectures are constructed using web services as the system interface.[40][45][46]
  • Resource pooling means the provider's computing resources are commingled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to user demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources.[1]
  • Productivity may be increased when multiple users can work on the same data simultaneously, rather than waiting for it to be saved and emailed. Time may be saved as information does not need to be re-entered when fields are matched, nor do users need to install application software upgrades to their computer.[47]
  • Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.[48]
  • Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time[49][50] (Note, the VM startup time varies by VM type, location, OS and cloud providers[49]), without users having to engineer for peak loads.[51][52][53] This gives the ability to scale up when the usage need increases or down if resources are not being used.[54]
  • Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because service providers are able to devote resources to solving security issues that many customers cannot afford to tackle or which they lack the technical skills to address.[55] However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
The National Institute of Standards and Technology's definition of cloud computing identifies "five essential characteristics":
On-demand self-service.
A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

Broad network access.
Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling.
The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.

Rapid elasticity.
Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.

Measured service.
Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

— National Institute of Standards and Technology[56]

Service models


Cloud computing service models arranged as layers in a stack

Though service-oriented architecture advocates "everything as a service" (with the acronyms EaaS or XaaS,[57] or simply aas), cloud-computing providers offer their "services" according to different models, of which the three standard models per NIST are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).[56] These models offer increasing abstraction; they are thus often portrayed as layers in a stack: infrastructure-, platform- and software-as-a-service, but these need not be related. For example, one can provide SaaS implemented on physical machines (bare metal), without using underlying PaaS or IaaS layers, and conversely one can run a program on IaaS and access it directly, without wrapping it as SaaS.

Infrastructure as a service (IaaS)

"Infrastructure as a service" (IaaS) refers to online services that provide high-level APIs used to dereference various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup etc. A hypervisor, such as Xen, Oracle VirtualBox, Oracle VM, KVM, VMware ESX/ESXi, or Hyper-V, LXD, runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. Containerisation offers higher performance than virtualization, because there is no hypervisor overhead. Also, container capacity auto-scales dynamically with computing load, which eliminates the problem of over-provisioning and enables usage-based billing.[58] IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[59]

The NIST's definition of cloud computing describes IaaS as "where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)."[56]

IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.[citation needed]
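
A minimal sketch of on-demand IaaS provisioning follows, using AWS EC2 via the boto3 library as one possible example; the image ID, instance type and region are placeholder assumptions rather than values from the text.

```python
# Illustrative sketch only: requesting a virtual machine from an IaaS API
# (here AWS EC2 via boto3). The AMI ID, instance type and region are
# hypothetical placeholders; billing follows the allocated resources.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical operating-system image chosen by the cloud user
    InstanceType="t3.micro",          # size of the on-demand resource being metered
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```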

Platform as a service (PaaS)

The NIST's definition of cloud computing defines Platform as a Service as:[56]
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
PaaS vendors offer a development environment to application developers. The provider typically develops a toolkit and standards for development, and channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, programming-language execution environment, database, and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offerings, such as Microsoft Azure, Oracle Cloud Platform and Google App Engine, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually. The latter has also been proposed in an architecture aiming to facilitate real-time applications in cloud environments.[60][need quotation to verify] Even more specific application types can be provided via PaaS, such as media encoding as provided by services like bitcodin.com[61] or media.io.[62]
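
For illustration, the following is a minimal sketch of the kind of application code a PaaS consumer might supply (Flask is an assumed framework here); the provider, not the developer, supplies the operating system, language runtime, web server and, on some platforms, automatic scaling.

```python
# Sketch of an application a PaaS consumer deploys; the platform manages the
# operating system, runtime, web serving and scaling underneath it.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted app"

if __name__ == "__main__":
    # Run locally for development; on the platform, the provider's
    # application server hosts `app` instead of this call.
    app.run(host="0.0.0.0", port=8080)
```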

Some integration and data management providers have also embraced specialized applications of PaaS as delivery models for data solutions. Examples include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute and govern integration flows.[63] Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware.[64] dPaaS delivers integration and data-management products as a fully managed service.[65] Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of data solutions by building tailored data applications for the customer. dPaaS users retain transparency and control over data through data-visualization tools.[66]

A more recent specialized form of PaaS is Blockchain as a Service (BaaS), which some vendors such as IBM Bluemix and Oracle Cloud Platform have already included in their PaaS offerings.[67][68]

Software as a service (SaaS)

The NIST's definition of cloud computing defines Software as a Service as:[56]
The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee.[69] In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand.[70] Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
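
As a sketch of accessing a SaaS application through a program interface rather than a web browser: the endpoint URL and API token below are hypothetical placeholders, since each real SaaS product defines its own API.

```python
# Sketch of consuming a SaaS application via its program interface.
# The endpoint URL and API token are hypothetical placeholders.
import requests

API_URL = "https://saas.example.com/api/v1/documents"  # hypothetical endpoint
API_TOKEN = "replace-with-your-token"                  # hypothetical credential

resp = requests.get(API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=10)
resp.raise_for_status()
for doc in resp.json():
    print(doc)  # the provider runs the application; the client only consumes it
```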

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user,[71] so prices become scalable and adjustable if users are added or removed at any point.[72] Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and from personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS comes with storing the users' data on the cloud provider's server. As a result,[citation needed] there could be unauthorized access to the data.[citation needed]

Mobile "backend" as a service (MBaaS)

In the mobile "backend" as a service (m) model, also known as backend as a service (BaaS), web app and mobile app developers are provided with a way to link their applications to cloud storage and cloud computing services with application programming interfaces (APIs) exposed to their applications and custom software development kits (SDKs). Services include user management, push notifications, integration with social networking services[73] and more. This is a relatively recent model in cloud computing,[74] with most BaaS startups dating from 2011 or later[75][76][77] but trends indicate that these services are gaining significant mainstream traction with enterprise consumers.[78]

Serverless computing

Serverless computing is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine, per hour.[79] Despite the name, it does not actually involve running code without servers.[79] Serverless computing is so named because the business or person that owns the system does not have to purchase, rent or provision servers or virtual machines for the back-end code to run on.
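
For illustration, here is a serverless function in the AWS Lambda Python handler convention (one common concrete example of this model); the provider starts and stops the execution environment and bills per request, so the owner provisions no servers.

```python
# Serverless function in the AWS Lambda Python handler convention. The cloud
# provider invokes lambda_handler per request and manages all underlying
# execution environments; the owner never provisions a server.
import json

def lambda_handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```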

Deployment models


Cloud computing types

Private cloud

Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third-party, and hosted either internally or externally.[56] Undertaking a private cloud project requires significant engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. It can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Self-run data centers[80] are generally capital intensive. They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management,[81] essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".[82][83]

Public cloud

A cloud is called a "public cloud" when the services are rendered over a network that is open for public use. Public cloud services may be free.[84] Technically there may be little or no difference between public and private cloud architecture; however, security considerations may differ substantially for services (applications, storage, and other resources) that are made available by a service provider for a public audience and when communication is effected over a non-trusted network. Generally, public cloud service providers like Amazon Web Services (AWS), Oracle, Microsoft and Google own and operate the infrastructure at their data centers, and access is generally via the Internet. AWS, Oracle, Microsoft, and Google also offer direct connect services called "AWS Direct Connect", "Oracle FastConnect", "Azure ExpressRoute", and "Cloud Interconnect" respectively; such connections require customers to purchase or lease a private connection to a peering point offered by the cloud provider.[40][85]

Hybrid cloud

Hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources.[56] Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers.[86] A hybrid cloud service crosses isolation and provider boundaries so that it can't be simply put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.

Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service.[87] This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as data security and compliance requirements, level of control needed over data, and the applications an organization uses.[88]

Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that cannot be met by the private cloud.[89] This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.[56] Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed.[90] Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and to use cloud resources from public or private clouds during spikes in processing demands.[91] The specialized model of hybrid cloud, which is built atop heterogeneous hardware, is called "cross-platform hybrid cloud". A cross-platform hybrid cloud is usually powered by different CPU architectures, for example, x86-64 and ARM, underneath. Users can transparently deploy and scale applications without knowledge of the cloud's hardware diversity.[92] This kind of cloud emerges from the rise of ARM-based systems-on-chip for server-class computing.
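
A simplified sketch of the cloud-bursting decision described above follows; the utilization threshold and the scheduling functions are hypothetical stubs standing in for a private data-center scheduler and a public-cloud provisioning API.

```python
# Illustrative sketch of a cloud-bursting decision loop. The threshold and
# the scheduling/provisioning functions are hypothetical stubs.
BURST_THRESHOLD = 0.85  # assumed utilization level at which bursting begins

def private_utilization() -> float:
    """Stub: fraction of private-cloud capacity currently in use."""
    return 0.9

def schedule_on_private(job: str) -> None:
    print(f"running {job} on the private cloud")

def schedule_on_public(job: str) -> None:
    # In practice this would call a public-cloud IaaS API and incur pay-per-use cost.
    print(f"bursting {job} to the public cloud")

def dispatch(job: str) -> None:
    if private_utilization() < BURST_THRESHOLD:
        schedule_on_private(job)
    else:
        schedule_on_public(job)

for job in ["batch-report", "nightly-etl", "peak-web-traffic"]:
    dispatch(job)
```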

Others

Community cloud

Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third-party, and either hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing are realized.[56]

Distributed cloud

A cloud computing platform can be assembled from a distributed set of machines in different locations, connected to a single network or hub service. It is possible to distinguish between two types of distributed clouds: public-resource computing and volunteer cloud.
  • Public-resource computing—This type of distributed cloud results from an expansive definition of cloud computing, because such platforms are more akin to distributed computing than to cloud computing. Nonetheless, it is considered a sub-class of cloud computing, and some examples include distributed computing platforms such as BOINC and Folding@Home.
  • Volunteer cloud—Volunteer cloud computing is characterized as the intersection of public-resource computing and cloud computing, where a cloud computing infrastructure is built using volunteered resources. Many challenges arise from this type of infrastructure, because of the volatility of the resources used to build it and the dynamic environment it operates in. Such clouds are also called peer-to-peer clouds or ad-hoc clouds. An interesting effort in this direction is Cloud@Home, which aims to implement a cloud computing infrastructure using volunteered resources, providing a business model to incentivize contributions through financial restitution.[93]

Multicloud

Multicloud is the use of multiple cloud computing services in a single heterogeneous architecture to reduce reliance on single vendors, increase flexibility through choice, mitigate against disasters, etc. It differs from hybrid cloud in that it refers to multiple cloud services, rather than multiple deployment modes (public, private, legacy).[94][95][96]

Big Data cloud

The issues of transferring large amounts of data to the cloud as well as data security once the data is in the cloud initially hampered adoption of cloud for big data, but now that much data originates in the cloud and with the advent of bare-metal servers, the cloud has become[97] a solution for use cases including business analytics and geospatial analysis.[98] Solutions range from Hadoop hosting in the cloud to end-to-end analytics.[99]

HPC cloud

HPC cloud refers to the use of cloud computing services and infrastructure to execute high-performance computing (HPC) applications.[100] These applications consume a considerable amount of computing power and memory and are traditionally executed on clusters of computers. Various vendors offer servers that can support the execution of these applications.[101][102][103][104] In HPC cloud, the deployment model allows all HPC resources to be inside the cloud provider's infrastructure, or different portions of HPC resources to be shared between the cloud provider and the client's on-premise infrastructure. The adoption of cloud to run HPC applications started mostly with applications composed of independent tasks with no inter-process communication. As cloud providers began to offer high-speed network technologies such as InfiniBand, tightly coupled multiprocessing applications started to benefit from cloud as well.

Architecture


Cloud computing sample architecture

Cloud architecture,[105] the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.

Cloud engineering

Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high-level concerns of commercialization, standardization, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information, security, platform, risk, and quality engineering.

Security and privacy

Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or even delete information.[106] Many cloud providers can share information with third parties if necessary for purposes of law and order even without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services. Solutions to privacy include policy and legislation as well as end users' choices for how data is stored.[106] Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access.[107][106]

According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure—which accounted for 29%, 25% and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities. In a cloud provider platform shared by different users, there may be a possibility that information belonging to different customers resides on the same data server. Additionally, Eugene Schultz, chief technology officer at Emagined Security, said that hackers are spending substantial time and effort looking for ways to penetrate the cloud. "There are some real Achilles' heels in the cloud infrastructure that are making big holes for the bad guys to get into". Because data from hundreds or thousands of companies can be stored on large cloud servers, hackers can theoretically gain control of huge stores of information through a single attack—a process he called "hyperjacking". Some examples of this include the Dropbox security breach and the 2014 iCloud leak.[108] Dropbox had been breached in October 2014, with over 7 million of its users' passwords stolen by hackers seeking to extract monetary value from them in Bitcoin (BTC). With these passwords, they are able to read private data as well as have this data indexed by search engines (making the information public).[108]

There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership.[109] Physical control of the computer equipment (private cloud) is more secure than having the equipment off site and under someone else's control (public cloud). This delivers great incentive to public cloud computing service providers to prioritize building and maintaining strong management of secure services.[110] Some small businesses that don't have expertise in IT security could find that it's more secure for them to use a public cloud. There is the risk that end users do not understand the issues involved when signing on to a cloud service (persons sometimes don't read the many pages of the terms of service agreement, and just click "Accept" without reading). This is important now that cloud computing is becoming popular and required for some services to work, for example for an intelligent personal assistant (Apple's Siri or Google Now). Fundamentally, private cloud is seen as more secure with higher levels of control for the owner, however public cloud is seen to be more flexible and requires less time and money investment from the user.[111]

Limitations and disadvantages

According to Bruce Schneier, "The downside is that you will have limited customization options. Cloud computing is cheaper because of economics of scale, and — like any outsourced task — you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want. Fewer options at a much cheaper price: it's a feature, not a bug." He also suggests that "the cloud provider might not meet your legal needs" and that businesses need to weigh the benefits of cloud computing against the risks.[112] In cloud computing, the control of the back-end infrastructure is limited to the cloud vendor only. Cloud providers often decide on the management policies, which moderates what the cloud users are able to do with their deployment.[113] Cloud users are also limited to the control and management of their applications, data and services.[114] This includes data caps, which cloud vendors place on cloud users by allocating a certain amount of bandwidth to each customer; this bandwidth is often shared among other cloud users.[114]

Privacy and confidentiality are big concerns in some activities. For instance, sworn translators working under the stipulations of an NDA might face problems regarding sensitive data that are not encrypted.[115]

Cloud computing is beneficial to many enterprises; it lowers costs and allows them to focus on core competence instead of on matters of IT and infrastructure. Nevertheless, cloud computing has proven to have some limitations and disadvantages, especially for smaller business operations, particularly regarding security and downtime. Technical outages are inevitable and occur sometimes when cloud service providers become overwhelmed in the process of serving their clients. This may result in temporary business suspension. Since this technology's systems rely on the Internet, individuals cannot access their applications, server, or data from the cloud during an outage.

Emerging trends

Cloud computing is still a subject of research.[116] A driving factor in the evolution of cloud computing has been chief technology officers seeking to minimize risk of internal outages and mitigate the complexity of housing network and computing hardware in-house.[117] Major cloud technology companies invest billions of dollars per year in cloud Research and Development. For example, in 2011 Microsoft committed 90 percent of its $9.6 billion R&D budget to its cloud.[118] Research by investment bank Centaur Partners in late 2015 forecasted that SaaS revenue would grow from $13.5 billion in 2011 to $32.8 billion in 2016.[119]

Digital forensics in the cloud

The issue of carrying out investigations where the cloud storage devices cannot be physically accessed has generated a number of changes to the way that digital evidence is located and collected.[120] New process models have been developed to formalize collection.[121]

In some scenarios existing digital forensics tools can be employed to access cloud storage as networked drives (although this is a slow process generating a large amount of internet traffic).[2]

An alternative approach is to deploy a tool that runs its processing in the cloud itself.[122]

For organizations using Office 365 with an 'E5' subscription, there is the option to use Microsoft's built-in e-discovery resources, although these do not provide all the functionality that is typically required for a forensic process.[123]

Nuclear fission product

From Wikipedia, the free encyclopedia

Nuclear fission products are the atomic fragments left after a large atomic nucleus undergoes nuclear fission. Typically, a large nucleus like that of uranium fissions by splitting into two smaller nuclei, along with a few neutrons, the release of heat energy (kinetic energy of the nuclei), and gamma rays. The two smaller nuclei are the fission products. (See also Fission products (by element)).

About 0.2% to 0.4% of fissions are ternary fissions, producing a third light nucleus such as helium-4 (90%) or tritium (7%).

The fission products themselves are usually unstable and therefore radioactive; due to being relatively neutron-rich for their atomic number, many of them quickly undergo beta decay. This releases additional energy in the form of beta particles, antineutrinos, and gamma rays. Thus, fission events normally result in beta and gamma radiation, even though this radiation is not produced directly by the fission event itself.

The produced radionuclides have varying half-lives, and therefore vary in radioactivity. For instance, strontium-89 and strontium-90 are produced in similar quantities in fission, and each nucleus decays by beta emission. But 90Sr has a 30-year half-life, and 89Sr a 50.5-day half-life. Thus in the 50.5 days it takes half the 89Sr atoms to decay, emitting the same number of beta particles as there were decays, less than 0.4% of the 90Sr atoms have decayed, emitting only 0.4% of the betas. The radioactive emission rate is highest for the shortest lived radionuclides, although they also decay the fastest. Additionally, less stable fission products are less likely to decay to stable nuclides, instead decaying to other radionuclides, which undergo further decay and radiation emission, adding to the radiation output. It is these short lived fission products that are the immediate hazard of spent fuel, and the energy output of the radiation also generates significant heat which must be considered when storing spent fuel. As there are hundreds of different radionuclides created, the initial radioactivity level fades quickly as short lived radionuclides decay, but never ceases completely as longer lived radionuclides make up more and more of the remaining unstable atoms.[1]
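
The strontium comparison above can be checked with the exponential decay law N(t) = N0 · 2^(−t/T½), using the half-lives given in the text. This is a small illustrative calculation only, not a substitute for the cited data.

```python
# Worked check of the strontium comparison, using the exponential decay law
# N(t) = N0 * 2**(-t / t_half). Half-lives are taken from the text above
# (89Sr: 50.5 days; 90Sr: about 30 years).
HALF_LIFE_SR89_DAYS = 50.5
HALF_LIFE_SR90_DAYS = 30 * 365.25

def fraction_decayed(t_days: float, half_life_days: float) -> float:
    """Fraction of an initial population that has decayed after t_days."""
    return 1.0 - 2.0 ** (-t_days / half_life_days)

t = HALF_LIFE_SR89_DAYS  # one 89Sr half-life
print(f"89Sr decayed after {t} days: {fraction_decayed(t, HALF_LIFE_SR89_DAYS):.1%}")  # ~50%
print(f"90Sr decayed after {t} days: {fraction_decayed(t, HALF_LIFE_SR90_DAYS):.2%}")  # ~0.3%, i.e. less than 0.4%
```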

Formation and decay

The sum of the atomic mass of the two atoms produced by the fission of one fissile atom is always less than the atomic mass of the original atom. This is because some of the mass is lost as free neutrons, and once kinetic energy of the fission products has been removed (i.e., the products have been cooled to extract the heat provided by the reaction), then the mass associated with this energy is lost to the system also, and thus appears to be "missing" from the cooled fission products.
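
As a rough back-of-the-envelope estimate of the scale involved (assuming a typical release of about 200 MeV per uranium-235 fission, a commonly quoted figure not taken from this article), the mass-energy relation gives:

```latex
% Assumptions: roughly 200 MeV released per fission of U-235; 1 u = 931.5 MeV/c^2.
\Delta m = \frac{E}{c^{2}} \approx \frac{200\ \mathrm{MeV}}{931.5\ \mathrm{MeV}/u} \approx 0.21\ u,
\qquad
\frac{\Delta m}{m(^{235}\mathrm{U} + n)} \approx \frac{0.21}{236} \approx 0.09\%.
```

So, under these assumptions, only about a tenth of a percent of the original mass appears as released energy; the rest remains in the fission products and the emitted neutrons.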

Since the nuclei that can readily undergo fission are particularly neutron-rich (e.g. 61% of the nucleons in uranium-235 are neutrons), the initial fission products are often more neutron-rich than stable nuclei of the same mass as the fission product (e.g. stable zirconium-90 is 56% neutrons compared to unstable strontium-90 at 58%). The initial fission products therefore may be unstable and typically undergo beta decay to move towards a stable configuration, converting a neutron to a proton with each beta emission. (Fission products do not decay via alpha decay.)

A few neutron-rich and short-lived initial fission products decay by ordinary beta decay (this is the source of perceptible half life, typically a few tenths of a second to a few seconds), followed by immediate emission of a neutron by the excited daughter-product. This process is the source of so-called delayed neutrons, which play an important role in control of a nuclear reactor.

The first beta decays are rapid and may release high energy beta particles or gamma radiation. However, as the fission products approach stable nuclear conditions, the last one or two decays may have a long half-life and release less energy.

Radioactivity over time

Fission products have half-lives of 90 years (samarium-151) or less, except for seven long-lived fission products that have half lives of 211,100 years (technetium-99) and more. Therefore, the total radioactivity of a mixture of pure fission products decreases rapidly for the first several hundred years (controlled by the short-lived products) before stabilizing at a low level that changes little for hundreds of thousands of years (controlled by the seven long-lived products).

This behavior of pure fission products with actinides removed, contrasts with the decay of fuel that still contains actinides. This fuel is produced in the so-called "open" (i.e., no nuclear reprocessing) nuclear fuel cycle. A number of these actinides have half lives in the missing range of about 100 to 200,000 years, causing some difficulty with storage plans in this time-range for open cycle non-reprocessed fuels.

Proponents of nuclear fuel cycles which aim to consume all their actinides by fission, such as the Integral Fast Reactor and molten salt reactor, use this fact to claim that within 200 years, their fuel wastes are no more radioactive than the original uranium ore.[2]

Fission products emit beta radiation, while actinides primarily emit alpha radiation. Many of each also emit gamma radiation.

Yield


Fission product yields by mass for thermal neutron fission of U-235, Pu-239, a combination of the two typical of current nuclear power reactors, and U-233 used in the thorium cycle.

Each fission of a parent atom produces a different set of fission product atoms. However, while an individual fission is not predictable, the fission products are statistically predictable. The amount of any particular isotope produced per fission is called its yield, typically expressed as percent per parent fission; therefore, yields total to just over 200% (because of ternary fissions), not 100%.

While fission products include every element from zinc through the lanthanides, the majority of the fission products occur in two peaks. One peak occurs at about (expressed by atomic number) strontium to ruthenium while the other peak is at about tellurium to neodymium. The yield is somewhat dependent on the parent atom and also on the energy of the initiating neutron.

In general the higher the energy of the state that undergoes nuclear fission, the more likely that the two fission products have similar mass. Hence as the neutron energy increases and/or the energy of the fissile atom increases, the valley between the two peaks becomes more shallow.[3] For instance, the curve of yield against mass for Pu-239 has a more shallow valley than that observed for U-235 when the neutrons are thermal neutrons. The curves for the fission of the later actinides tend to make even more shallow valleys. In extreme cases such as 259Fm, only one peak is seen.

The adjacent figure shows a typical fission product distribution from the fission of uranium. Note that in the calculations used to make this graph, the activation of fission products was ignored and the fission was assumed to occur in a single moment rather than a length of time. In this bar chart results are shown for different cooling times — time after fission. Because of the stability of nuclei with even numbers of protons and/or neutrons, the curve of yield against element is not a smooth curve but tends to alternate. Note that the curve against mass number is smooth.[4]

Production

Small amounts of fission products are naturally formed as the result of either spontaneous fission of natural uranium, which occurs at a low rate, or as a result of neutrons from radioactive decay or reactions with cosmic ray particles. The microscopic tracks left by these fission products in some natural minerals (mainly apatite and zircon) are used in fission track dating to provide the cooling (crystallization) ages of natural rocks. The technique has an effective dating range of 0.1 Ma to >1.0 Ga depending on the mineral used and the concentration of uranium in that mineral.

About 1.5 billion years ago in a uranium ore body in Africa, a natural nuclear fission reactor operated for a few hundred thousand years and produced approximately 5 tonnes of fission products. These fission products were important in providing proof that the natural reactor had occurred. Fission products are produced in nuclear weapon explosions, with the amount depending on the type of weapon. The largest source of fission products is from nuclear reactors. In current nuclear power reactors, about 3% of the uranium in the fuel is converted into fission products as a by-product of energy generation. Most of these fission products remain in the fuel unless there is fuel element failure or a nuclear accident, or the fuel is reprocessed.

Power reactors

In commercial nuclear fission reactors, the system is operated in the otherwise self-extinguishing prompt subcritical state. The reactor-specific physical phenomenon that nonetheless maintains the temperature above the decay-heat level is the predictably delayed,[5] and therefore easily controlled, transformation or movement of a vital class of fission products, or reaction "embers", as they decay,[6] with bromine-87 being one such long-lived ember: it has a half-life of about a minute and emits a delayed neutron upon decay.[7] Operating in this delayed critical state, the dependence on the inherently delayed transformation or movement of fission products/embers to maintain the temperature is a process that occurs slowly enough to permit human feedback on the temperature control. In an analogous manner to fire dampers varying the opening to control the movement of wood embers towards new fuel, control rods are comparatively varied up or down as the nuclear fuel burns up over time.[8][9][10][11]

In a nuclear power reactor, the main sources of radioactivity are fission products, alongside actinides and activation products. Fission products are the largest source of radioactivity for the first several hundred years, while actinides are dominant roughly 10³ to 10⁵ years after fuel use.

Fission occurs in the nuclear fuel, and the fission products are primarily retained within the fuel close to where they are produced. These fission products are important to the operation of the reactor because some fission products contribute delayed neutrons that are useful for reactor control, while others are neutron poisons that tend to inhibit the nuclear reaction. The buildup of the fission-product poisons is a key factor in determining the maximum duration a given fuel element can be kept within the reactor. The decay of short-lived fission products also provides a source of heat within the fuel that continues even after the reactor has been shut down and the fission reactions stopped. It is this decay heat that sets the requirements for cooling of a reactor after shutdown.

If the fuel cladding around the fuel develops holes, then fission products can leak into the primary coolant. Depending on the fission product chemistry, it may settle within the reactor core or travel through the coolant system. Coolant systems include chemistry control systems that tend to remove such fission products. In a well-designed power reactor running under normal conditions, the radioactivity of the coolant is very low.

It is known that the isotope responsible for the majority of the gamma exposure in fuel reprocessing plants (and at the Chernobyl site in 2005) is Cs-137. Iodine-129 is one of the major radioactive elements released from reprocessing plants. In nuclear reactors both Cs-137 and strontium-90 are found in locations remote from the fuel. This is because these isotopes are formed by the beta decay of noble gases (xenon-137, with a half-life of 3.8 minutes, and krypton-90, with a half-life of 32 seconds), which enables these isotopes to be deposited in locations remote from the fuel (e.g. on control rods).

Nuclear reactor poisons

Some fission products decay with the release of a neutron. Since there may be a short delay in time between the original fission event (which releases its own prompt neutrons immediately) and the release of these neutrons, the latter are termed "delayed neutrons". These delayed neutrons are important to nuclear reactor control.

Some of the fission products, such as xenon-135 and samarium-149, have a high neutron absorption cross section. Since a nuclear reactor depends on a balance in the neutron production and absorption rates, those fission products that remove neutrons from the reaction will tend to shut the reactor down or "poison" the reactor. Nuclear fuels and reactors are designed to address this phenomenon through such features as burnable poisons and control rods. Build-up of xenon-135 during shutdown or low-power operation may poison the reactor enough to impede restart or to interfere with normal control of the reaction during restart or restoration of full power, possibly causing or contributing to an accident scenario.

Nuclear weapons

Nuclear weapons use fission as either the partial or the main energy source. Depending on the weapon design and where it is exploded, the relative importance of the fission product radioactivity will vary compared to the activation product radioactivity in the total fallout radioactivity.

The immediate fission products from nuclear weapon fission are essentially the same as those from any other fission source, depending slightly on the particular nuclide that is fissioning. However, the very short time scale for the reaction makes a difference in the particular mix of isotopes produced from an atomic bomb.

For example, the 134Cs/137Cs ratio provides an easy method of distinguishing between fallout from a bomb and the fission products from a power reactor. Almost no Cs-134 is formed directly by nuclear fission (because xenon-134 is stable); instead, 134Cs is formed by neutron activation of stable 133Cs, which is itself produced by the decay of other isotopes in the A = 133 isobar. In a momentary criticality, the neutron flux drops to zero before any significant amount of 133Cs can accumulate, so essentially no 134Cs is produced. In a power reactor, by contrast, there is ample time both for the isobar to decay to 133Cs and for that 133Cs to be activated to 134Cs, because the fuel remains in the neutron flux for a long period.
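
A deliberately simplified toy model shows the scaling behind this: 134Cs requires a two-step process (accumulate 133Cs, then capture a neutron on it), so its inventory grows roughly with the square of the irradiation time, whereas 137Cs grows linearly with the number of fissions. The cross section, flux and yields below are approximate values assumed only for illustration, decay of the A = 133 chain is treated as instantaneous, and burn-up of the caesium is ignored.

# Toy model (illustrative assumptions only): Cs-133 accumulates in proportion to the
# number of fissions, and Cs-134 is then made by neutron capture on that accumulated
# Cs-133.  Only the scaling with irradiation time matters here.

SIGMA_CAPTURE = 29e-24     # Cs-133 thermal capture cross section, cm^2 (approximate)
PHI = 1.0e14               # neutron flux, n/cm^2/s (assumed)
Y_133 = 0.067              # cumulative fission yield of the A = 133 chain (approximate)
Y_137 = 0.062              # cumulative fission yield of the A = 137 chain (approximate)
FISSION_RATE = 1.0         # arbitrary units; it cancels out of the ratio

def cs134_to_cs137_ratio(seconds_irradiated):
    """Atom ratio Cs-134 / Cs-137 after a given irradiation time (toy model)."""
    n_133 = Y_133 * FISSION_RATE * seconds_irradiated
    n_134 = SIGMA_CAPTURE * PHI * n_133 * seconds_irradiated / 2
    n_137 = Y_137 * FISSION_RATE * seconds_irradiated
    return n_134 / n_137

for label, t in [("1 microsecond (bomb-like)", 1e-6),
                 ("1 day", 86400),
                 ("1 year (reactor-like)", 3.15e7)]:
    print(f"{label}: Cs-134/Cs-137 ~ {cs134_to_cs137_ratio(t):.2e}")

Under these toy assumptions the ratio is vanishingly small for a microsecond-scale detonation but reaches the percent level after a year of reactor irradiation, which is the signature the text describes.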

According to Jiri Hala's textbook,[12] the radioactivity in the fission product mixture in an atom bomb is mostly caused by short-lived isotopes such as I-131 and Ba-140. After about four months Ce-141, Zr-95/Nb-95, and Sr-89 represent the largest share of radioactive material. After two to three years, Ce-144/Pr-144, Ru-106/Rh-106, and Promethium-147 are the bulk of the radioactivity. After a few years, the radiation is dominated by strontium-90 and caesium-137, whereas in the period between 10,000 and a million years it is technetium-99 that dominates.

Application

Some fission products (such as Cs-137) are used in medical and industrial radioactive sources. The pertechnetate ion (99TcO4−) can react with steel surfaces to form a corrosion-resistant layer; in this way these metal oxo anions act as anodic corrosion inhibitors, rendering the steel surface passive. The formation of 99TcO2 on steel surfaces is one effect that will retard the release of 99Tc from nuclear waste drums and from nuclear equipment lost before decontamination (e.g. nuclear submarine reactors that have been lost at sea).

In a similar way, the release of radio-iodine in a serious power reactor accident could be retarded by adsorption on metal surfaces within the nuclear plant.[13] A great deal of other work has been done on the iodine chemistry that would occur during a severe accident.[14]

Decay


The external gamma dose for a person in the open near the Chernobyl disaster site.

The portion of the total radiation dose (in air) contributed by each isotope versus time after the Chernobyl disaster, at the site. This image was drawn using data from the OECD report and the second edition of 'The radiochemical manual'.[15]

For fission of uranium-235, the predominant radioactive fission products include isotopes of iodine, caesium, strontium, xenon and barium. The threat becomes smaller with the passage of time. Locations where radiation fields once posed immediate mortal threats, such as much of the Chernobyl Nuclear Power Plant on day one of the accident and the ground zero sites of U.S. atomic bombings in Japan (6 hours after detonation), are now relatively safe because the radioactivity has decayed to a low level. Many of the fission products decay through very short-lived isotopes to form stable isotopes, but a considerable number of the radioisotopes have half-lives longer than a day.

The radioactivity in the fission product mixture is initially mostly caused by short-lived isotopes such as 131I and 140Ba; after about four months 141Ce, 95Zr/95Nb and 89Sr take the largest share, while after about two or three years the largest share is taken by 144Ce/144Pr, 106Ru/106Rh and 147Pm. Later, 90Sr and 137Cs are the main radioisotopes, eventually succeeded by 99Tc. In the case of a release of radioactivity from a power reactor or used fuel, only some elements are released; as a result, the isotopic signature of the radioactivity is very different from that of an open-air nuclear detonation, where all the fission products are dispersed.
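
The shift in which isotopes dominate follows directly from the exponential decay law A(t) = A(0) · (1/2)^(t/T½). The sketch below uses the half-lives listed in the table later in this article and assumes, purely for illustration, equal initial numbers of atoms of each nuclide (their fission yields are broadly similar); it is only meant to show the ordering changing with time, and that 131I, with its 8-day half-life, falls well below 0.1% of its initial activity within about three months.

# Relative activities of a few fission products over time.  Purely for illustration,
# equal initial numbers of atoms are assumed; activity per initial atom is then
# (ln 2 / T_half) * 0.5 ** (t / T_half).

import math

HALF_LIVES_DAYS = {        # approximate physical half-lives (see the table below)
    "I-131": 8.05,
    "Ba-140": 12.8,
    "Zr-95": 65,
    "Ce-144": 285,
    "Sr-90": 28 * 365.25,
    "Cs-137": 30 * 365.25,
}

def relative_activity(half_life_days, t_days):
    """Activity per initial atom, in arbitrary units (1/day)."""
    return (math.log(2) / half_life_days) * 0.5 ** (t_days / half_life_days)

for t in (1, 120, 3 * 365, 30 * 365):
    ranked = sorted(HALF_LIVES_DAYS,
                    key=lambda n: relative_activity(HALF_LIVES_DAYS[n], t),
                    reverse=True)
    print(f"after {t:5d} days the most active of these nuclides are: {', '.join(ranked[:3])}")

# The same decay law shows why iodine-131 ceases to be a hazard within months:
print(f"I-131 remaining after 85 days: {0.5 ** (85 / 8.05):.4%}")

With this crude assumption the ranking moves from iodine and barium in the first days, to the intermediate-lived nuclides after a few months, and finally to 90Sr and 137Cs, in line with the sequence described above.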

Fallout countermeasures

The purpose of radiological emergency preparedness is to protect people from the effects of radiation exposure after a nuclear accident or bomb. Evacuation is the most effective protective measure. However, if evacuation is impossible or even uncertain, then local fallout shelters and other measures provide the best protection.[16]

Iodine


Per capita thyroid doses in the continental United States of iodine-131 resulting from all exposure routes from all atmospheric nuclear tests conducted at the Nevada Test Site. See also Downwinders.

At least three isotopes of iodine are important: 129I, 131I (radioiodine) and 132I. Open-air nuclear testing and the Chernobyl disaster both released iodine-131.

The short-lived isotopes of iodine are particularly harmful because the thyroid collects and concentrates iodide – radioactive as well as stable. Absorption of radioiodine can lead to acute, chronic, and delayed effects. Acute effects from high doses include thyroiditis, while chronic and delayed effects include hypothyroidism, thyroid nodules, and thyroid cancer. It has been shown that the radioactive iodine released from Chernobyl and Mayak[17] has resulted in an increase in the incidence of thyroid cancer in the former Soviet Union.

One measure which protects against the risk from radioiodine is taking a dose of potassium iodide (KI) before exposure to radioiodine. The non-radioactive iodide 'saturates' the thyroid, so that less of the radioiodine is stored in the body. Administering potassium iodide reduces the effects of radioiodine by 99% and is a prudent, inexpensive supplement to fallout shelters. A low-cost alternative to commercially available iodine pills is a saturated solution of potassium iodide. Long-term storage of KI is normally in the form of reagent-grade crystals.[18]

The administration of known goitrogenic substances can also be used as a prophylaxis to reduce the bio-uptake of iodine (whether the nutritional non-radioactive iodine-127 or radioiodine, most commonly iodine-131, as the body cannot discriminate between different iodine isotopes). Perchlorate ions, a common water contaminant in the USA due to the aerospace industry, have been shown to reduce iodine uptake and are thus classified as goitrogens. Perchlorate ions are a competitive inhibitor of the process by which iodide is actively transported into thyroid follicular cells. Studies involving healthy adult volunteers determined that at levels above 0.007 milligrams per kilogram per day (mg/(kg·d)), perchlorate begins to temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition").[19] The reduction of the iodide pool by perchlorate has dual effects: reduction of excess hormone synthesis and hyperthyroidism on the one hand, and reduction of thyroid-inhibitor synthesis and hypothyroidism on the other. Perchlorate remains very useful as a single-dose application in tests measuring the discharge of radioiodide accumulated in the thyroid as a result of many different disruptions in the further metabolism of iodide in the thyroid gland.[20]

Treatment of thyrotoxicosis (including Graves' disease) with 600–2,000 mg potassium perchlorate (430–1,400 mg perchlorate) daily for periods of several months or longer was once common practice, particularly in Europe,[19][21] and perchlorate use at lower doses to treat thyroid problems continues to this day.[22] Although 400 mg of potassium perchlorate divided into four or five daily doses was used initially and found effective, higher doses were introduced when 400 mg/day was found not to control thyrotoxicosis in all subjects.[19][20]

Current regimens for treatment of thyrotoxicosis (including Graves' disease), when a patient is exposed to additional sources of iodine, commonly include 500 mg potassium perchlorate twice per day for 18–40 days.[19][23]

Prophylaxis with perchlorate-containing water at a concentration of 17 ppm, which corresponds to a personal intake of 0.5 mg/kg-day for a 70 kg person consuming 2 litres of water per day, was found to reduce baseline radioiodine uptake by 67%.[19] This is equivalent to ingesting a total of only about 35 mg of perchlorate ions per day. In another related study, where subjects drank 1 litre of perchlorate-containing water per day at a concentration of 10 ppm (i.e. 10 mg of perchlorate ions per day), an average 38% reduction in the uptake of iodine was observed.[24]

However, the average perchlorate absorption in perchlorate plant workers subjected to the highest exposure has been estimated at approximately 0.5 mg/kg-day, which, as in the above paragraph, would be expected to produce a 67% reduction of iodine uptake. Studies of chronically exposed workers have nevertheless thus far failed to detect any abnormalities of thyroid function, including the uptake of iodine.[25] This may well be attributable to sufficient daily intake of non-radioactive iodine-127 among the workers and to the short (about 8 hours) biological half-life of perchlorate in the body.[19]

Completely blocking the uptake of iodine-131 by deliberately adding perchlorate ions to a population's water supply at dosages of 0.5 mg/kg-day (a water concentration of 17 ppm) would therefore be grossly inadequate. Perchlorate ion concentrations in a region's water supply would need to be much higher, at least 7.15 mg/kg of body weight per day (a water concentration of 250 ppm, assuming people drink 2 litres of water per day), to be truly beneficial to the population in preventing bioaccumulation when exposed to a radioiodine environment,[19][23] independent of the availability of iodate or iodide drugs.
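
The relationship between a water concentration and a body-weight dose used in these figures is simple arithmetic: dose (mg/kg-day) = concentration (mg/L) × litres drunk per day ÷ body mass (kg). The helper below roughly reproduces the 17 ppm ≈ 0.5 mg/kg-day and 250 ppm ≈ 7.15 mg/kg-day figures under the 70 kg, 2 L/day assumptions used above; the function name and defaults are illustrative, not taken from the cited studies.

# Convert a perchlorate concentration in drinking water into a daily body-weight dose.
# 1 ppm in water is taken as 1 mg of perchlorate per litre.

def dose_mg_per_kg_day(concentration_ppm, litres_per_day=2.0, body_mass_kg=70.0):
    return concentration_ppm * litres_per_day / body_mass_kg

for ppm in (17, 250):
    dose = dose_mg_per_kg_day(ppm)
    print(f"{ppm:3d} ppm at 2 L/day -> {dose:.2f} mg/kg-day "
          f"(about {dose * 70:.0f} mg of perchlorate per day)")

# The 38% study quoted above used 1 litre per day of 10 ppm water:
print(f"10 ppm at 1 L/day -> about {dose_mg_per_kg_day(10, litres_per_day=1.0) * 70:.0f} mg per day")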

The continual distribution of perchlorate tablets, or the addition of perchlorate to the water supply, would need to continue for no less than 80–90 days, beginning immediately after the initial release of radioiodine was detected. After 80–90 days have passed, the released radioactive iodine-131 would have decayed to less than 0.1% of its initial quantity, at which point the danger from biouptake of iodine-131 is essentially over.[26]

In the event of a radioiodine release, the ingestion of prophylactic potassium iodide, if available, or even iodate, would rightly take precedence over perchlorate administration and would be the first line of defense in protecting the population from a radioiodine release. However, in the event of a radioiodine release too massive and widespread to be controlled by the limited stock of iodide and iodate prophylaxis drugs, the addition of perchlorate ions to the water supply, or the distribution of perchlorate tablets, would serve as a cheap, efficacious second line of defense against carcinogenic radioiodine bioaccumulation.

The ingestion of goitrogenic drugs, much like potassium iodide, is not without its dangers, such as hypothyroidism. In all these cases, however, despite the risks, the prophylactic benefits of intervention with iodide, iodate, or perchlorate outweigh the serious cancer risk from radioiodine bioaccumulation in regions where radioiodine has sufficiently contaminated the environment.

Cesium

The Chernobyl accident released a large amount of cesium isotopes which were dispersed over a wide area. 137Cs is of long-term concern because it remains in the top layers of soil. Plants with shallow root systems tend to absorb it for many years. Hence grass and mushrooms can carry a considerable amount of 137Cs, which can be transferred to humans through the food chain.

One of the best countermeasures in dairy farming against 137Cs is to deeply plough the soil. This puts the 137Cs out of reach of the shallow roots of the grass, so the level of radioactivity in the grass is lowered. Also, removal of the top few centimeters of soil and its burial in a shallow trench will reduce the dose to humans and animals, as the gamma photons from 137Cs are attenuated by their passage through the soil. The deeper and more remote the trench, the better the degree of protection. Fertilizers containing potassium can be used to dilute cesium and limit its uptake by plants.
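
How much a layer of soil attenuates the 662 keV gamma rays from 137Cs can be estimated with the simple exponential attenuation law I/I0 = exp(−μx). The mass attenuation coefficient (roughly 0.078 cm²/g at 662 keV) and soil density (1.6 g/cm³) used below are assumed typical values, and buildup from scattered photons is ignored, so the numbers are only indicative.

# Narrow-beam attenuation of Cs-137 gammas (662 keV) by a layer of soil:
#   I/I0 = exp(-mu * x),  mu = (mu/rho) * rho
# mu/rho ~ 0.078 cm^2/g and rho ~ 1.6 g/cm^3 are assumed typical values;
# scattered-photon buildup is neglected, so real shielding is somewhat less effective.

import math

MU_OVER_RHO = 0.078    # cm^2/g, approximate mass attenuation coefficient at 662 keV
RHO_SOIL = 1.6         # g/cm^3, assumed soil density

def transmitted_fraction(thickness_cm):
    mu = MU_OVER_RHO * RHO_SOIL            # linear attenuation coefficient, 1/cm
    return math.exp(-mu * thickness_cm)

for depth in (5, 10, 20, 50):
    print(f"{depth:2d} cm of soil transmits about {transmitted_fraction(depth):.1%} of the gammas")

Even a modest covering of tens of centimeters of soil therefore removes most of the direct gamma dose, which is why shallow burial of contaminated topsoil is effective.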

In livestock farming, another countermeasure against 137Cs is to feed animals Prussian blue. This compound acts as an ion exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to consume several grams of Prussian blue per day. The Prussian blue reduces the biological half-life (distinct from the nuclear half-life) of the cesium. The physical or nuclear half-life of 137Cs is about 30 years, whereas cesium in humans normally has a biological half-life of between one and four months. An added advantage of the Prussian blue is that the cesium stripped from the animal in the droppings is in a form which is not available to plants, so it prevents the cesium from being recycled. The form of Prussian blue required for the treatment of animals, including humans, is a special grade. Attempts to use the pigment grade used in paints have not been successful.[27]
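
The combined effect of radioactive decay and biological elimination is described by the standard effective half-life relation, 1/T_eff = 1/T_physical + 1/T_biological. The sketch below uses the figures quoted above (a 30-year physical half-life and a biological half-life of one to four months); the "with a binder" value of 30 days is an assumption for illustration, not a figure from this article.

# Effective half-life combines radioactive decay and biological elimination:
#   1/T_eff = 1/T_physical + 1/T_biological
# Physical half-life of Cs-137 ~ 30 years; biological half-life in humans is normally
# one to four months.  The 30-day "with a binder" figure is an illustrative assumption.

def effective_half_life(t_physical_days, t_biological_days):
    return 1.0 / (1.0 / t_physical_days + 1.0 / t_biological_days)

T_PHYS = 30 * 365.25                     # days
for label, t_bio in [("biological half-life 110 days (untreated)", 110),
                     ("biological half-life 30 days (with a binder)", 30)]:
    print(f"{label}: effective half-life ~ {effective_half_life(T_PHYS, t_bio):.0f} days")

Because the biological half-life is so much shorter than the physical one, the effective half-life is essentially equal to the biological half-life, which is why shortening the latter with Prussian blue is worthwhile.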

Strontium

The addition of lime to soils that are poor in calcium can reduce the uptake of strontium by plants. Likewise, in areas where the soil is low in potassium, the addition of a potassium fertilizer can discourage the uptake of cesium into plants. However, such treatments with either lime or potash should not be undertaken lightly, as they can greatly alter the soil chemistry, resulting in a change in the plant ecology of the land.[citation needed]

Health concerns

Ingestion is the most important route by which radionuclides are introduced into the body. Insoluble compounds are not absorbed from the gut and cause only local irradiation before they are excreted. Soluble forms, however, show a wide range of absorption percentages.[28]

Isotope Radiation Half-life GI absorption Notes
Strontium-90/yttrium-90 β 28 years 30%
Cesium-137 β,γ 30 years 100%
Promethium-147 β 2.6 years 0.01%
Cerium-144 β,γ 285 days 0.01%
Ruthenium-106/rhodium-106 β,γ 1.0 years 0.03%
Zirconium-95 β,γ 65 days 0.01%
Strontium-89 β 51 days 30%
Ruthenium-103 β,γ 39.7 days 0.03%
Niobium-95 β,γ 35 days 0.01%
Cerium-141 β,γ 33 days 0.01%
Barium-140/lanthanum-140 β,γ 12.8 days 5%
Iodine-131 β,γ 8.05 days 100%
Tritium β 12.3 years 100% [a]
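
As a simple illustration of how the table is read, the activity reaching the bloodstream after ingestion is just the ingested activity multiplied by the GI absorption fraction. The snippet below encodes a few rows of the table; the 1000 Bq intake is an arbitrary example value, and metabolism after absorption is ignored.

# A few GI absorption fractions from the table above (soluble forms).
GI_ABSORPTION = {
    "Sr-90": 0.30,
    "Cs-137": 1.00,
    "Ce-144": 0.0001,
    "Ru-106": 0.0003,
    "I-131": 1.00,
}

def absorbed_activity(ingested_bq, nuclide):
    """Activity reaching the bloodstream after ingestion (very simplified)."""
    return ingested_bq * GI_ABSORPTION[nuclide]

ingested = 1000.0   # Bq, arbitrary example intake
for nuclide in GI_ABSORPTION:
    print(f"Ingesting {ingested:.0f} Bq of {nuclide}: about "
          f"{absorbed_activity(ingested, nuclide):.1f} Bq absorbed")

This makes clear why the soluble, fully absorbed nuclides such as 137Cs and 131I dominate internal dose after ingestion, while the poorly absorbed ones mostly pass through the gut.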
