
Thursday, May 16, 2024

Public key infrastructure

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Public_key_infrastructure
Diagram of a public key infrastructure

A public key infrastructure (PKI) is a set of roles, policies, hardware, software and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption.

The purpose of a PKI is to facilitate the secure electronic transfer of information for a range of network activities such as e-commerce, internet banking and confidential email. It is required for activities where simple passwords are an inadequate authentication method and more rigorous proof is required to confirm the identity of the parties involved in the communication and to validate the information being transferred.

In cryptography, a PKI is an arrangement that binds public keys with respective identities of entities (like people and organizations). The binding is established through a process of registration and issuance of certificates at and by a certificate authority (CA). Depending on the assurance level of the binding, this may be carried out by an automated process or under human supervision. When done over a network, this requires using a secure certificate enrollment or certificate management protocol such as CMP.

The PKI role that may be delegated by a CA to assure valid and correct registration is called a registration authority (RA). An RA is responsible for accepting requests for digital certificates and authenticating the entity making the request. The Internet Engineering Task Force's RFC 3647 defines an RA as "An entity that is responsible for one or more of the following functions: the identification and authentication of certificate applicants, the approval or rejection of certificate applications, initiating certificate revocations or suspensions under certain circumstances, processing subscriber requests to revoke or suspend their certificates, and approving or rejecting requests by subscribers to renew or re-key their certificates. RAs, however, do not sign or issue certificates (i.e., an RA is delegated certain tasks on behalf of a CA)." While Microsoft may have referred to a subordinate CA as an RA, this is incorrect according to the X.509 PKI standards. RAs do not have the signing authority of a CA and only manage the vetting and provisioning of certificates. So in the Microsoft PKI case, the RA functionality is provided either by the Microsoft Certificate Services web site or through Active Directory Certificate Services, which enforces Microsoft Enterprise CA and certificate policy through certificate templates and manages certificate enrollment (manual or auto-enrollment). In the case of Microsoft Standalone CAs, the function of an RA does not exist, since all of the procedures controlling the CA are based on the administration and access procedures associated with the system hosting the CA and the CA itself, rather than on Active Directory. Most non-Microsoft commercial PKI solutions offer a stand-alone RA component.

An entity must be uniquely identifiable within each CA domain on the basis of information about that entity. A third-party validation authority (VA) can provide this entity information on behalf of the CA.

The X.509 standard defines the most commonly used format for public key certificates.
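
To make the key-to-identity binding concrete, here is a minimal sketch, assuming Python's cryptography package is available, that creates a self-signed X.509 certificate. In a real PKI the issuer would be a CA signing another entity's public key with its own private key; the degenerate self-signed case still shows all the moving parts (names, keys, validity period, signature). The name example.test is a placeholder.

    # Sketch: issuing a self-signed X.509 certificate with the Python
    # "cryptography" package. A real CA signs a subject's public key with
    # its own, separate private key; here issuer == subject.
    import datetime

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    name = x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"example.test"),
    ])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)            # the entity the certificate identifies
        .issuer_name(name)             # self-signed: issuer == subject
        .public_key(key.public_key())  # the public key being bound to the name
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())    # the signature that creates the binding
    )

    pem = cert.public_bytes(serialization.Encoding.PEM)
    print(cert.subject)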

Capabilities

PKI provides "trust services" - in plain terms, trusting the actions or outputs of entities, be they people or computers. Trust service objectives address one or more of the following capabilities: Confidentiality, Integrity and Authenticity (CIA).

Confidentiality: Assurance that no entity can maliciously or unwittingly view a payload in clear text. Data is encrypted to make it secret, such that even if it is read, it appears as gibberish. Perhaps the most common use of PKI for confidentiality purposes is in the context of Transport Layer Security (TLS). TLS is a capability underpinning the security of data in transit, i.e. during transmission. A classic example of TLS for confidentiality is using an internet browser to log on to a service hosted on an internet-based web site by entering a password.
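
As a small illustration of TLS in this role, the sketch below (Python standard library; example.org is a placeholder host) validates the server's certificate chain against the platform's trusted root store before any application data flows:

    # Sketch: a TLS client relying on the platform's trusted root store.
    # The server's certificate, issued under a PKI, authenticates the
    # server and lets the handshake establish an encrypted channel.
    import socket
    import ssl

    context = ssl.create_default_context()  # loads the trusted CA certificates

    with socket.create_connection(("example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.org") as tls:
            # The chain was validated and the hostname checked; traffic on
            # this socket is now encrypted in transit.
            print(tls.version())                # e.g. 'TLSv1.3'
            print(tls.getpeercert()["subject"])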

Integrity: Assurance that if an entity changed (tampered with) transmitted data in even the slightest way, it would be obvious, since the data's integrity would have been compromised. Often it is not of utmost importance to prevent integrity from being compromised (tamper proofing); however, it is of utmost importance that if integrity is compromised there is clear evidence of it (tamper evidence).

Authenticity: Assurance that every entity has certainty of what it is connecting to, or can evidence its legitimacy when connecting to a protected service. The former is termed server-side authentication - typically used when authenticating to a web server using a password. The latter is termed client-side authentication - sometimes used when authenticating using a smart card (hosting a digital certificate and private key).

Design

Public-key cryptography is a cryptographic technique that enables entities to securely communicate on an insecure public network, and reliably verify the identity of an entity via digital signatures.

A public key infrastructure (PKI) is a system for the creation, storage, and distribution of digital certificates which are used to verify that a particular public key belongs to a certain entity. The PKI creates digital certificates which map public keys to entities, securely stores these certificates in a central repository and revokes them if needed.

A PKI consists of:

  • A certificate authority (CA) that stores, issues and signs the digital certificates;
  • A registration authority (RA) which verifies the identity of entities requesting their digital certificates to be stored at the CA;
  • A central directory—i.e., a secure location in which keys are stored and indexed;
  • A certificate management system managing things like the access to stored certificates or the delivery of the certificates to be issued;
  • A certificate policy stating the PKI's requirements concerning its procedures. Its purpose is to allow outsiders to analyze the PKI's trustworthiness.

Methods of certification

Broadly speaking, there have traditionally been three approaches to getting this trust: certificate authorities (CAs), web of trust (WoT), and simple public-key infrastructure (SPKI).

Certificate authorities

The primary role of the CA is to digitally sign and publish the public key bound to a given user. This is done using the CA's own private key, so that trust in the user key relies on one's trust in the validity of the CA's key. The entity that verifies the requester's identity on behalf of the CA is called the registration authority (RA), which may or may not be a separate body from the CA. The key-to-user binding is established, depending on the level of assurance the binding has, by software or under human supervision.

The term trusted third party (TTP) may also be used for certificate authority (CA). Moreover, PKI is itself often used as a synonym for a CA implementation.

Certificate revocation

A certificate may be revoked before it expires, which signals that it is no longer valid. Without revocation, an attacker would be able to exploit such a compromised or mis-issued certificate until expiry. Hence, revocation is an important part of a public key infrastructure. Revocation is performed by the issuing certificate authority, which produces a cryptographically authenticated statement of revocation.

For distributing revocation information to clients, timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns. If revocation information is unavailable (either due to accident or an attack), clients must decide whether to fail-hard and treat a certificate as if it is revoked (and so degrade availability) or to fail-soft and treat it as unrevoked (and allow attackers to sidestep revocation).

Due to the cost of revocation checks and the availability impact from potentially-unreliable remote services, Web browsers limit the revocation checks they will perform, and will fail-soft where they do. Certificate revocation lists are too bandwidth-costly for routine use, and the Online Certificate Status Protocol presents connection latency and privacy issues. Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking.
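
The fail-hard/fail-soft choice can be made concrete with a small sketch, assuming Python's cryptography package; the CRL URL and serial number are hypothetical placeholders, and a production checker would also verify the CRL's own signature and freshness, which this sketch omits:

    # Sketch: checking a certificate's serial number against a CRL and
    # choosing a failure mode when the CRL cannot be fetched.
    import urllib.request

    from cryptography import x509

    CRL_URL = "http://crl.example.test/ca.crl"  # hypothetical distribution point
    FAIL_HARD = True  # treat "revocation status unknown" as revoked

    def is_revoked(serial_number: int) -> bool:
        try:
            der = urllib.request.urlopen(CRL_URL, timeout=5).read()
        except OSError:
            # Revocation information unavailable: failing hard degrades
            # availability; failing soft lets attackers sidestep revocation.
            return FAIL_HARD
        # NOTE: verifying the CRL's signature against the issuer is omitted.
        crl = x509.load_der_x509_crl(der)
        return crl.get_revoked_certificate_by_serial_number(serial_number) is not None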

Issuer market share

In this model of trust relationships, a CA is a trusted third party – trusted both by the subject (owner) of the certificate and by the party relying upon the certificate.

A 2015 Netcraft report, regarded as the industry standard for monitoring active Transport Layer Security (TLS) certificates, states that "Although the global [TLS] ecosystem is competitive, it is dominated by a handful of major CAs — three certificate authorities (Symantec, Sectigo, GoDaddy) account for three-quarters of all issued [TLS] certificates on public-facing web servers. The top spot has been held by Symantec (or VeriSign before it was purchased by Symantec) ever since [our] survey began, with it currently accounting for just under a third of all certificates. To illustrate the effect of differing methodologies, amongst the million busiest sites Symantec issued 44% of the valid, trusted certificates in use — significantly more than its overall market share."

Following major issues in how certificate issuance was managed, all major players gradually distrusted Symantec-issued certificates, a process that began in 2017 and was completed in 2021.

Temporary certificates and single sign-on

This approach involves a server that acts as an offline certificate authority within a single sign-on system. A single sign-on server issues digital certificates to the client system but never stores them. Users can execute programs, etc., with the temporary certificate. Solutions of this kind are commonly found with X.509-based certificates.

Starting in September 2020, the maximum validity period of TLS certificates was reduced to 13 months.

Web of trust

An alternative approach to the problem of public authentication of public key information is the web-of-trust scheme, which uses self-signed certificates and third-party attestations of those certificates. The singular term "web of trust" does not imply the existence of a single web of trust, or common point of trust, but rather one of any number of potentially disjoint "webs of trust". Examples of implementations of this approach are PGP (Pretty Good Privacy) and GnuPG (an implementation of OpenPGP, the standardized specification of PGP). Because PGP and other OpenPGP implementations allow the use of e-mail digital signatures for self-publication of public key information, it is relatively easy to implement one's own web of trust.

One of the benefits of the web of trust, such as in PGP, is that it can interoperate with a PKI CA fully trusted by all parties in a domain (such as an internal CA in a company) that is willing to guarantee certificates, as a trusted introducer. If the "web of trust" is completely trusted then, because of the nature of a web of trust, trusting one certificate is granting trust to all the certificates in that web. A PKI is only as valuable as the standards and practices that control the issuance of certificates, and including PGP or a personally instituted web of trust could significantly degrade the trustworthiness of that enterprise's or domain's implementation of PKI.

The web of trust concept was first put forth by PGP creator Phil Zimmermann in 1992 in the manual for PGP version 2.0:

As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.

Simple public key infrastructure

Another alternative, which does not deal with public authentication of public key information, is the simple public key infrastructure (SPKI), which grew out of three independent efforts to overcome the complexities of X.509 and PGP's web of trust. SPKI does not associate users with persons, since the key is what is trusted, rather than the person. SPKI does not use any notion of trust, as the verifier is also the issuer. This is called an "authorization loop" in SPKI terminology, where authorization is integral to its design. This type of PKI is especially useful for making integrations of PKI that do not rely on third parties for certificate authorization, certificate information, etc.; a good example of this is an air-gapped network in an office.

Decentralized PKI

Decentralized identifiers (DIDs) eliminate dependence on centralized registries for identifiers as well as centralized certificate authorities for key management, which is the standard in hierarchical PKI. In cases where the DID registry is a distributed ledger, each entity can serve as its own root authority. This architecture is referred to as decentralized PKI (DPKI).

History

Developments in PKI occurred in the early 1970s at the British intelligence agency GCHQ, where James Ellis, Clifford Cocks and others made important discoveries related to encryption algorithms and key distribution. Because developments at GCHQ are highly classified, the results of this work were kept secret and not publicly acknowledged until the mid-1990s.

The public disclosure of secure key exchange by Diffie and Hellman in 1976, followed shortly by the RSA asymmetric key algorithm of Rivest, Shamir, and Adleman, changed secure communications entirely. With the further development of high-speed digital electronic communications (the Internet and its predecessors), a need became evident for ways in which users could securely communicate with each other, and as a further consequence of that, for ways in which users could be sure with whom they were actually interacting.

Assorted cryptographic protocols were invented and analyzed within which the new cryptographic primitives could be effectively used. With the invention of the World Wide Web and its rapid spread, the need for authentication and secure communication became still more acute. Commercial reasons alone (e.g., e-commerce, online access to proprietary databases from web browsers) were sufficient. Taher Elgamal and others at Netscape developed the SSL protocol ('https' in Web URLs); it included key establishment, server authentication (prior to v3, one-way only), and so on. A PKI structure was thus created for Web users/sites wishing secure communications.

Vendors and entrepreneurs saw the possibility of a large market, started companies (or new projects at existing companies), and began to agitate for legal recognition and protection from liability. An American Bar Association technology project published an extensive analysis of some of the foreseeable legal aspects of PKI operations (see ABA digital signature guidelines), and shortly thereafter, several U.S. states (Utah being the first in 1995) and other jurisdictions throughout the world began to enact laws and adopt regulations. Consumer groups raised questions about privacy, access, and liability considerations, which were taken into consideration more in some jurisdictions than in others.

The enacted laws and regulations differed; there were technical and operational problems in converting PKI schemes into successful commercial operation; and progress has been much slower than pioneers had imagined it would be.

By the first few years of the 21st century, the underlying cryptographic engineering was clearly not easy to deploy correctly. Operating procedures (manual or automatic) were not easy to design correctly (nor, even if so designed, to execute perfectly, which the engineering required). The standards that existed were insufficient.

PKI vendors have found a market, but it is not quite the market envisioned in the mid-1990s, and it has grown both more slowly and in somewhat different ways than were anticipated. PKIs have not solved some of the problems they were expected to, and several major vendors have gone out of business or been acquired by others. PKI has had the most success in government implementations; the largest PKI implementation to date is the Defense Information Systems Agency (DISA) PKI infrastructure for the Common Access Cards program.

Uses

PKIs of one type or another, and from any of several vendors, have many uses, including providing public keys and bindings to user identities, which are used for:

  • encryption and/or sender authentication of e-mail messages (e.g., using OpenPGP or S/MIME);
  • encryption and/or authentication of documents (e.g., the XML Signature or XML Encryption standards);
  • authentication of users to applications (e.g., smart card logon, client authentication with TLS);
  • bootstrapping secure communication protocols, such as Internet key exchange (IKE) and SSL/TLS;
  • mobile signatures (electronic signatures created using a mobile device).

Open source implementations

  • OpenSSL is the simplest form of CA and tool for PKI. It is a toolkit, developed in C, that is included in all major Linux distributions, and can be used both to build your own (simple) CA and to PKI-enable applications. (Apache licensed)
  • EJBCA is a full-featured, enterprise-grade CA implementation developed in Java. It can be used to set up a CA both for internal use and as a service. (LGPL licensed)
  • XiPKI is a CA and OCSP responder with SHA-3 support, implemented in Java. (Apache licensed)
  • XCA is a graphical interface and certificate database that uses OpenSSL for the underlying PKI operations.
  • DogTag is a full-featured CA developed and maintained as part of the Fedora Project.
  • CFSSL is an open-source toolkit developed by CloudFlare for signing, verifying, and bundling TLS certificates. (BSD 2-clause licensed)
  • Vault is a tool for securely managing secrets (TLS certificates included) developed by HashiCorp. (Mozilla Public License 2.0 licensed)
  • Boulder is an ACME-based CA written in Go; it is the software that runs Let's Encrypt.

Criticism

Some argue that purchasing certificates for securing websites by SSL/TLS and securing software by code signing is a costly venture for small businesses. However, the emergence of free alternatives, such as Let's Encrypt, has changed this. HTTP/2, a newer version of the HTTP protocol, allows unsecured connections in theory; in practice, major browser companies have made it clear that they will support this protocol only over a PKI-secured TLS connection. Web browser implementations of HTTP/2, including Chrome, Firefox, Opera, and Edge, support HTTP/2 only over TLS by using the ALPN extension of the TLS protocol. This means that, to get the speed benefits of HTTP/2, website owners are forced to obtain SSL/TLS certificates controlled by corporations.
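
For illustration, the sketch below (Python standard library; example.org is a placeholder host) requests HTTP/2 through the same ALPN mechanism browsers use. The protocol is negotiated inside the TLS handshake itself, which is why HTTP/2 in browsers is inseparable from a PKI-validated connection:

    # Sketch: negotiating HTTP/2 via the TLS ALPN extension. The client
    # offers protocols in preference order; the server picks one during
    # the handshake, before any HTTP bytes are exchanged.
    import socket
    import ssl

    context = ssl.create_default_context()
    context.set_alpn_protocols(["h2", "http/1.1"])  # preference order

    with socket.create_connection(("example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.org") as tls:
            print(tls.selected_alpn_protocol())  # 'h2' if the server speaks HTTP/2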

Currently the majority of web browsers ship with a pre-installed set of root certificates, which in turn certify the public keys of the intermediate certificates issued and signed by certificate authorities. This means browsers need to trust a large number of different certificate providers, increasing the risk of a key compromise.

When a key is known to be compromised, the problem can be fixed by revoking the certificate, but such a compromise is not easily detectable and can be a huge security breach. Browsers have to issue a security patch to revoke intermediary certificates issued by a compromised root certificate authority.

Digital signature

From Wikipedia, the free encyclopedia
Alice signs a message—"Hello Bob!"—by appending a signature computed from the message and her private key. Bob receives both the message and signature. He uses Alice's public key to verify the authenticity of the signed message.

A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature on a message gives a recipient confidence that the message came from a sender known to the recipient.

Digital signatures are a standard element of most cryptographic protocol suites, and are commonly used for software distribution, financial transactions, contract management software, and in other cases where it is important to detect forgery or tampering.

Digital signatures are often used to implement electronic signatures, which include any electronic data that carries the intent of a signature, but not all electronic signatures use digital signatures. Electronic signatures have legal significance in some countries, including Brazil, Canada, South Africa, the United States, Algeria, Turkey, India, Indonesia, Mexico, Saudi Arabia, Uruguay, Switzerland, Chile and the countries of the European Union.

Digital signatures employ asymmetric cryptography. In many instances, they provide a layer of validation and security to messages sent through a non-secure channel: properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects, but properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes, in the sense used here, are cryptographically based, and must be implemented properly to be effective. They can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message, while also claiming their private key remains secret. Further, some non-repudiation schemes offer a timestamp for the digital signature, so that even if the private key is later exposed, signatures made while it was valid remain trustworthy. Digitally signed messages may be anything representable as a bitstring: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.

Definition

A digital signature scheme typically consists of three algorithms, illustrated by the sketch after this list:

  • A key generation algorithm that selects a private key uniformly at random from a set of possible private keys. The algorithm outputs the private key and a corresponding public key.
  • A signing algorithm that, given a message and a private key, produces a signature.
  • A signature verifying algorithm that, given the message, public key and signature, either accepts or rejects the message's claim to authenticity.
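
These three algorithms map directly onto modern signature APIs. Here is a minimal sketch, assuming Python's cryptography package, with Ed25519 as one concrete scheme among many:

    # Sketch: key generation, signing, and verification with Ed25519.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Key generation: a private key and its corresponding public key.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Signing: message + private key -> signature.
    message = b"Hello Bob!"
    signature = private_key.sign(message)

    # Verification: message + public key + signature -> accept / reject.
    try:
        public_key.verify(signature, message)
        print("accepted")
    except InvalidSignature:
        print("rejected")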

Two main properties are required:

First, the authenticity of a signature generated from a fixed message and fixed private key can be verified by using the corresponding public key.

Secondly, it should be computationally infeasible to generate a valid signature for a party without knowing that party's private key. A digital signature is an authentication mechanism that enables the creator of the message to attach a code that acts as a signature. The Digital Signature Algorithm (DSA), developed by the National Institute of Standards and Technology, is one of many examples of a signing algorithm.

In the following discussion, 1^n refers to the number n written in unary, i.e., a string of n ones.

Formally, a digital signature scheme is a triple of probabilistic polynomial time algorithms, (G, S, V), satisfying:

  • G (key-generator) generates a public key (pk) and a corresponding private key (sk) on input 1^n, where n is the security parameter.
  • S (signing) returns a tag, t, on the inputs: the private key (sk), and a string (x).
  • V (verifying) outputs accepted or rejected on the inputs: the public key (pk), a string (x), and a tag (t).

For correctness, S and V must satisfy

Pr[(pk, sk) ← G(1^n) : V(pk, x, S(sk, x)) = accepted] = 1.

A digital signature scheme is secure if for every non-uniform probabilistic polynomial time adversary, A,

Pr[(pk, sk) ← G(1^n), (x, t) ← A^S(sk, ·)(pk, 1^n) : x ∉ Q, V(pk, x, t) = accepted] < negl(n),

where A^S(sk, ·) denotes that A has access to the oracle S(sk, ·); Q denotes the set of queries on S made by A, which knows the public key pk and the security parameter n; and x ∉ Q denotes that the adversary may not directly query the string x on S.

History

In 1976, Whitfield Diffie and Martin Hellman first described the notion of a digital signature scheme, although they only conjectured that such schemes existed based on functions that are trapdoor one-way permutations. Soon afterwards, Ronald Rivest, Adi Shamir, and Len Adleman invented the RSA algorithm, which could be used to produce primitive digital signatures (although only as a proof-of-concept – "plain" RSA signatures are not secure). The first widely marketed software package to offer digital signatures was Lotus Notes 1.0, released in 1989, which used the RSA algorithm.

Other digital signature schemes were soon developed after RSA, the earliest being Lamport signatures, Merkle signatures (also known as "Merkle trees" or simply "Hash trees"), and Rabin signatures.

In 1988, Shafi Goldwasser, Silvio Micali, and Ronald Rivest became the first to rigorously define the security requirements of digital signature schemes. They described a hierarchy of attack models for signature schemes, and also presented the GMR signature scheme, the first that could be proved to prevent even an existential forgery against a chosen message attack, which is the currently accepted security definition for signature schemes. The first such scheme which is not built on trapdoor functions but rather on a family of functions with a much weaker required property of one-way permutation was presented by Moni Naor and Moti Yung.

Method

One digital signature scheme (of many) is based on RSA. To create signature keys, generate an RSA key pair containing a modulus, N, that is the product of two random secret distinct large primes, along with integers, e and d, such that e·d ≡ 1 (mod φ(N)), where φ is Euler's totient function. The signer's public key consists of N and e, and the signer's secret key contains d.
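
A toy worked example with the familiar textbook primes makes the arithmetic visible (insecure, purely illustrative; real keys are thousands of bits):

    # Toy worked example of "plain" RSA signing with tiny primes.
    p, q = 61, 53
    N = p * q                # 3233
    phi = (p - 1) * (q - 1)  # 3120
    e = 17                   # public exponent, coprime to phi
    d = pow(e, -1, phi)      # 2753, since 17 * 2753 = 46801 = 1 (mod 3120)

    m = 42                    # "message", already reduced mod N
    s = pow(m, d, N)          # sign with the secret exponent d
    assert pow(s, e, N) == m  # anyone can verify with the public (N, e)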

Used directly, this type of signature scheme is vulnerable to a key-only existential forgery attack. To create a forgery, the attacker picks a random signature σ and uses the verification procedure to determine the message, m, corresponding to that signature. In practice, however, this type of signature is not used directly; rather, the message to be signed is first hashed to produce a short digest, which is then padded to a larger width comparable to N, and then signed with the reverse trapdoor function. This forgery attack, then, only produces the padded hash function output that corresponds to σ, but not a message that leads to that value, so it does not amount to a useful attack. In the random oracle model, hash-then-sign (an idealized version of that practice where hash and padding combined have close to N possible outputs), this form of signature is existentially unforgeable, even against a chosen-message attack.
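
Here is a sketch of the hash-then-sign practice described above, assuming Python's cryptography package and using RSA-PSS as one concrete hashing-and-padding choice:

    # Sketch: hash-then-sign with RSA-PSS. The message is hashed and
    # padded before the private-key operation is applied.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"document to be signed"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    signature = private_key.sign(message, pss, hashes.SHA256())
    # verify() raises InvalidSignature if message or signature was altered.
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())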

There are several reasons to sign such a hash (or message digest) instead of the whole document.

For efficiency
The signature will be much shorter and thus save time since hashing is generally much faster than signing in practice.
For compatibility
Messages are typically bit strings, but some signature schemes operate on other domains (such as, in the case of RSA, numbers modulo a composite number N). A hash function can be used to convert an arbitrary input into the proper format.
For integrity
Without the hash function, the text "to be signed" may have to be split (separated) into blocks small enough for the signature scheme to act on them directly. However, the receiver of the signed blocks is not able to recognize whether all the blocks are present and in the appropriate order.

Applications

As organizations move away from paper documents with ink signatures or authenticity stamps, digital signatures can provide added assurance of the provenance, identity, and status of an electronic document, as well as acknowledging informed consent and approval by a signatory. The United States Government Printing Office (GPO) publishes electronic versions of the budget, public and private laws, and congressional bills with digital signatures. Universities including Penn State, University of Chicago, and Stanford are publishing electronic student transcripts with digital signatures.

Below are some common reasons for applying a digital signature to communications:

Authentication

A message may have letterhead or a handwritten signature identifying its sender, but letterheads and handwritten signatures can be copied and pasted onto forged messages. Even legitimate messages may be modified in transit.

If a bank's central office receives a letter claiming to be from a branch office with instructions to change the balance of an account, the central bankers need to be sure, before acting on the instructions, that they were actually sent by a branch banker, and not forged—whether a forger fabricated the whole letter, or just modified an existing letter in transit by adding some digits.

With a digital signature scheme, the central office can arrange beforehand to have a public key on file whose private key is known only to the branch office. The branch office can later sign a message and the central office can use the public key to verify the signed message was not a forgery before acting on it. A forger who doesn't know the sender's private key can't sign a different message, or even change a single digit in an existing message without making the recipient's signature verification fail.

Encryption can hide the content of the message from an eavesdropper, but encryption on its own may not let the recipient verify the message's authenticity, or even detect selective modifications like changing a digit—if the bank's offices simply encrypted the messages they exchange, they could still be vulnerable to forgery. In other applications, such as software updates, the messages are not secret—when a software author publishes a patch for all existing installations of the software to apply, the patch itself is not secret, but computers running the software must verify the authenticity of the patch before applying it, lest they become victims of malware.

Limitations

Replays. A digital signature scheme on its own does not prevent a valid signed message from being recorded and then maliciously reused in a replay attack. For example, the branch office may legitimately request that a bank transfer be issued once in a signed message. If the bank doesn't use a system of transaction IDs in its messages to detect which transfers have already happened, someone could illegitimately reuse the same signed message many times to drain an account.

Uniqueness and malleability of signatures. A signature itself cannot be used to uniquely identify the message it signs—in some signature schemes, every message has a large number of possible valid signatures from the same signer, and it may be easy, even without knowledge of the private key, to transform one valid signature into another. If signatures are misused as transaction IDs in an attempt by a bank-like system such as a Bitcoin exchange to detect replays, this can be exploited to replay transactions.

Authenticating a public key. Prior knowledge of a public key can be used to verify authenticity of a signed message, but not the other way around—prior knowledge of a signed message cannot be used to verify authenticity of a public key. In some signature schemes, given a signed message, it is easy to construct a public key under which the signed message will pass verification, even without knowledge of the private key that was used to make the signed message in the first place.

Non-repudiation

Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property, an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key only does not enable a fraudulent party to fake a valid signature.

Note that these properties (authentication, non-repudiation, etc.) rely on the secret key not having been revoked prior to its usage. Public revocation of a key-pair is a required ability, else leaked secret keys would continue to implicate the claimed owner of the key-pair. Checking revocation status requires an "online" check; e.g., checking a certificate revocation list or via the Online Certificate Status Protocol. Very roughly this is analogous to a vendor who receives credit cards first checking online with the credit-card issuer to find if a given card has been reported lost or stolen. Of course, with stolen key pairs, the theft is often discovered only after the secret key's use, e.g., to sign a bogus certificate for espionage purposes.

Notions of security

In their foundational paper, Goldwasser, Micali, and Rivest lay out a hierarchy of attack models against digital signatures:

  1. In a key-only attack, the attacker is only given the public verification key.
  2. In a known message attack, the attacker is given valid signatures for a variety of messages known by the attacker but not chosen by the attacker.
  3. In an adaptive chosen message attack, the attacker first learns signatures on arbitrary messages of the attacker's choice.

They also describe a hierarchy of attack results:

  1. A total break results in the recovery of the signing key.
  2. A universal forgery attack results in the ability to forge signatures for any message.
  3. A selective forgery attack results in a signature on a message of the adversary's choice.
  4. An existential forgery merely results in some valid message/signature pair not already known to the adversary.

The strongest notion of security, therefore, is security against existential forgery under an adaptive chosen message attack.

Additional security precautions

Putting the private key on a smart card

All public key / private key cryptosystems depend entirely on keeping the private key secret. A private key can be stored on a user's computer, and protected by a local password, but this has two disadvantages:

  • the user can only sign documents on that particular computer
  • the security of the private key depends entirely on the security of the computer

A more secure alternative is to store the private key on a smart card. Many smart cards are designed to be tamper-resistant (although some designs have been broken, notably by Ross Anderson and his students). In a typical digital signature implementation, the hash calculated from the document is sent to the smart card, whose CPU signs the hash using the stored private key of the user, and then returns the signed hash. Typically, a user must activate their smart card by entering a personal identification number or PIN code (thus providing two-factor authentication). It can be arranged that the private key never leaves the smart card, although this is not always implemented. If the smart card is stolen, the thief will still need the PIN code to generate a digital signature. This reduces the security of the scheme to that of the PIN system, although it still requires an attacker to possess the card. A mitigating factor is that private keys, if generated and stored on smart cards, are usually regarded as difficult to copy, and are assumed to exist in exactly one copy. Thus, the loss of the smart card may be detected by the owner and the corresponding certificate can be immediately revoked. Private keys that are protected by software only may be easier to copy, and such compromises are far more difficult to detect.

Using smart card readers with a separate keyboard

Entering a PIN code to activate the smart card commonly requires a numeric keypad. Some card readers have their own numeric keypad. This is safer than using a card reader integrated into a PC, and then entering the PIN using that computer's keyboard. Readers with a numeric keypad are meant to circumvent the eavesdropping threat where the computer might be running a keystroke logger, potentially compromising the PIN code. Specialized card readers are also less vulnerable to tampering with their software or hardware and are often EAL3 certified.

Other smart card designs

Smart card design is an active field, and there are smart card schemes which are intended to avoid these particular problems, despite having few security proofs so far.

Using digital signatures only with trusted applications

One of the main differences between a digital signature and a written signature is that the user does not "see" what they sign. The user application presents a hash code to be signed by the digital signing algorithm using the private key. An attacker who gains control of the user's PC can possibly replace the user application with a foreign substitute, in effect replacing the user's own communications with those of the attacker. This could allow a malicious application to trick a user into signing any document by displaying the user's original on-screen, but presenting the attacker's own documents to the signing application.

To protect against this scenario, an authentication system can be set up between the user's application (word processor, email client, etc.) and the signing application. The general idea is to provide some means for both the user application and signing application to verify each other's integrity. For example, the signing application may require all requests to come from digitally signed binaries.

Using a network attached hardware security module

One of the main differences between a cloud-based digital signature service and a locally provided one is risk. Many risk-averse companies, including governments, financial and medical institutions, and payment processors require more secure standards, like FIPS 140-2 level 3 and FIPS 201 certification, to ensure the signature is validated and secure.

WYSIWYS

Technically speaking, a digital signature applies to a string of bits, whereas humans and applications "believe" that they sign the semantic interpretation of those bits. In order to be semantically interpreted, the bit string must be transformed into a form that is meaningful for humans and applications, and this is done through a combination of hardware and software based processes on a computer system. The problem is that the semantic interpretation of bits can change as a function of the processes used to transform the bits into semantic content. It is relatively easy to change the interpretation of a digital document by implementing changes on the computer system where the document is being processed. From a semantic perspective this creates uncertainty about what exactly has been signed. WYSIWYS (What You See Is What You Sign) means that the semantic interpretation of a signed message cannot be changed. In particular this also means that a message cannot contain hidden information that the signer is unaware of, and that can be revealed after the signature has been applied. WYSIWYS is a requirement for the validity of digital signatures, but this requirement is difficult to guarantee because of the increasing complexity of modern computer systems. The term WYSIWYS was coined by Peter Landrock and Torben Pedersen to describe some of the principles in delivering secure and legally binding digital signatures for Pan-European projects.

Digital signatures versus ink on paper signatures

An ink signature could be replicated from one document to another by copying the image manually or digitally, but producing credible signature copies that can resist some scrutiny requires significant manual or technical skill, and producing ink signature copies that resist professional scrutiny is very difficult.

Digital signatures cryptographically bind an electronic identity to an electronic document and the digital signature cannot be copied to another document. Paper contracts sometimes have the ink signature block on the last page, and the previous pages may be replaced after a signature is applied. Digital signatures can be applied to an entire document, such that the digital signature on the last page will indicate tampering if any data on any of the pages have been altered, but this can also be achieved by signing with ink and numbering all pages of the contract.

Some digital signature algorithms

  • RSA and its padded variants (e.g., RSA-PSS)
  • DSA and its elliptic-curve variant, ECDSA
  • EdDSA (e.g., Ed25519)
  • Rabin signatures
  • Schnorr signatures

The current state of use – legal and practical

Most digital signature schemes share the following goals regardless of cryptographic theory or legal provision:

  1. Quality algorithms: Some public-key algorithms are known to be insecure, as practical attacks against them have been discovered.
  2. Quality implementations: An implementation of a good algorithm (or protocol) with mistake(s) will not work.
  3. Users (and their software) must carry out the signature protocol properly.
  4. The private key must remain private: If the private key becomes known to any other party, that party can produce perfect digital signatures of anything.
  5. The public key owner must be verifiable: A public key associated with Bob actually came from Bob. This is commonly done using a public key infrastructure (PKI) and the public key↔user association is attested by the operator of the PKI (called a certificate authority). For 'open' PKIs in which anyone can request such an attestation (universally embodied in a cryptographically protected public key certificate), the possibility of mistaken attestation is non-trivial. Commercial PKI operators have suffered several publicly known problems. Such mistakes could lead to falsely signed, and thus wrongly attributed, documents. 'Closed' PKI systems are more expensive, but less easily subverted in this way.

Only if all of these conditions are met will a digital signature actually be any evidence of who sent the message, and therefore of their assent to its contents. Legal enactment cannot change this reality of the existing engineering possibilities, though some such enactments have not reflected this reality.

Legislatures, being importuned by businesses expecting to profit from operating a PKI, or by the technological avant-garde advocating new solutions to old problems, have enacted statutes and/or regulations in many jurisdictions authorizing, endorsing, encouraging, or permitting digital signatures and providing for (or limiting) their legal effect. The first appears to have been in Utah in the United States, followed closely by Massachusetts and California. Other countries have also passed statutes or issued regulations in this area, and the UN has had an active model law project for some time. These enactments (or proposed enactments) vary from place to place, have typically embodied expectations at variance (optimistically or pessimistically) with the state of the underlying cryptographic engineering, and have had the net effect of confusing potential users and specifiers, nearly all of whom are not cryptographically knowledgeable.

Adoption of technical standards for digital signatures has lagged behind much of the legislation, delaying a more or less unified engineering position on interoperability, algorithm choice, key lengths, and so on - that is, on what the engineering is attempting to provide.

Industry standards

Some industries have established common interoperability standards for the use of digital signatures between members of the industry and with regulators. These include the Automotive Network Exchange for the automobile industry and the SAFE-BioPharma Association for the healthcare industry.

Using separate key pairs for signing and encryption

In several countries, a digital signature has a status somewhat like that of a traditional pen and paper signature, as in the 1999 EU digital signature directive and 2014 EU follow-on legislation. Generally, these provisions mean that anything digitally signed legally binds the signer of the document to the terms therein. For that reason, it is often thought best to use separate key pairs for encrypting and signing. Using the encryption key pair, a person can engage in an encrypted conversation (e.g., regarding a real estate transaction), but the encryption does not legally sign every message he or she sends. Only when both parties come to an agreement do they sign a contract with their signing keys, and only then are they legally bound by the terms of a specific document. After signing, the document can be sent over the encrypted link. If a signing key is lost or compromised, it can be revoked to mitigate any future transactions. If an encryption key is lost, a backup or key escrow should be utilized to continue viewing encrypted content. Signing keys should never be backed up or escrowed unless the backup destination is securely encrypted.
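
A minimal sketch of this separation, assuming Python's cryptography package and arbitrarily choosing Ed25519 for signing and RSA-OAEP for encryption:

    # Sketch: keeping signing and encryption duties on separate key pairs,
    # so routine encrypted correspondence never involves the signing key.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ed25519, padding, rsa

    signing_key = ed25519.Ed25519PrivateKey.generate()  # for legally binding acts
    encryption_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Confidential exchange: encrypt to the encryption public key only.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = encryption_key.public_key().encrypt(b"draft terms", oaep)

    # Binding act: sign the final contract with the signing key only.
    contract = b"final agreed contract"
    signature = signing_key.sign(contract)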

Wednesday, May 15, 2024

Green development

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Green_development

Green development is a real estate development concept that considers social and environmental impacts of development. It is defined by three sub-categories: environmental responsiveness, resource efficiency, and community and cultural sensitivity. Environmental responsiveness respects the intrinsic value of nature and minimizes damage to an ecosystem. Resource efficiency refers to the use of fewer resources to conserve energy and the environment. Community and cultural sensitivity recognizes the unique cultural values that each community hosts and considers them in real estate development, unlike more discernible signs of sustainability such as solar energy (solar panels are more visibly "green" than the use of local materials). Green development manifests itself in various forms; however, it is generally based on solution multipliers: features of a project that provide additional benefits, which ultimately reduce the project's environmental impacts.

History

Green development emerged as a result of the environmental movement in the 1970s. In the real estate industry, use of the term commenced in 1987 with a report from the World Commission on Environment and Development, entitled "Our Common Future". The report includes 16 principles of environmental management, designed to foster green development. It also discusses the traditional model of macroeconomic growth and its disregard for environmental consequences. Following this initial movement, the real estate industry experienced a back-and-forth relationship with "green" methodologies; environmental issues often came second behind purely economic factors. Nevertheless, environmental concern persisted, and legislation affecting the real estate sector continued to emerge, giving rise to green development. However, a common concern about green development is that it may increase project costs and completion times, and hence there has been an ongoing argument over whether green strategies can be sustainable as well as economically stimulating. National environmental attention has since worked its way down to real estate developers and become an increasing priority. Developers today must work within the parameters of legislation that now considers the environmental implications of development.

Legislation

In response to increasing public concern regarding environmental issues, governments have enacted legislation that regulates various aspects of the real estate industry, as well as other sectors of the economy. In the United States such legislation includes the National Environmental Policy Act (NEPA), the Clean Air Act, the Clean Water Act, the Coastal Zone Management Act, and the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), also known as "Superfund". NEPA, enacted in 1970, changed how federal agencies make decisions because it requires them to prepare an environmental analysis before starting a project. The Clean Air Act (1970) requires the United States Environmental Protection Agency (EPA) to set national standards for clean air and pollution control regulations. The Clean Water Act (1972) was designed to minimize pollution in natural bodies of water, and also to ensure water quality that protects drinking water sources and supports recreational activities such as fishing or swimming. The Coastal Zone Management Act (1972) manages the nation's coastal resources, such as the Great Lakes and estuaries. CERCLA is commonly referred to as "Superfund" because it comprises two trust funds that provide help to improve areas that have been polluted by hazardous waste. The Superfund Amendments and Reauthorization Act of 1986 allows the government to place a lien on a property that is being cleaned up. Additionally, the California Environmental Quality Act (CEQA) is California's most comprehensive piece of legislation regarding the environment. This act applies to all decisions made by cities and counties in California, and mandates an Environmental Impact Report (EIR) for both public and private projects. Subsequently, any new real estate development is subject to a detailed environmental analysis before the project starts.

California's Senate Bill (SB) 375 (2008) is another piece of legislation that promotes green development. It aims to achieve California's climate goals via more efficient land use and development patterns. More specifically, SB 375 seeks to reduce greenhouse gas (GHG) emissions through close coordination between land-use and transportation. One way this is achieved is through demand-side measures. This strategy would decrease driving demand, and therefore reduce vehicle miles traveled (VMT), and ultimately reduce GHG emissions. For example, a demand-side policy component may include placing public transit stops near development, in order to maximize walkability.

Additionally, California state law requires minimum energy conservation levels for all new and/or existing development projects. The seller of a home is required to include information regarding energy conservation retrofitting and thermal insulation in the sales contract.

In practice

Broad development patterns

Senate Bill 375 demonstrates an urban planning strategy called growth management: a close coordination between land-use controls and capital investment, heavily motivated by environmental issues, and defined as "the regulation of the amount, timing, location, and character of development." Despite the name, growth management does not necessarily imply limiting growth; "growth control" carries the connotation of restraining growth, and "no growth" would indicate stopping growth altogether. In practice, growth management may draw on all three of these approaches.

Urban Growth Boundaries (UGBs) are popular growth management strategies. They are designed to encourage growth within a given boundary and discourage it outside the boundary. The goal of the UGB is to promote dense development in order to decrease urban sprawl. This growth management technique ultimately seeks to revitalize central cities and create vibrant, walkable spaces for community development.

These clustered development patterns are solution multipliers. Reducing demand for infrastructure can save money and resources. These multipliers can increase walkability, which fosters social interaction and community togetherness.

Intelligent building

In the US, commercial and residential buildings are the largest consumers of electricity, and HVAC systems account for a large portion of this usage. In fact, the US Department of Energy projects that 70% of the electricity used in the US is consumed by buildings. Intelligent building methods such as occupancy detection systems, wireless sensor networks, and HVAC control systems aim at more efficient energy use. A team of researchers at the University of San Diego projects that their smart building automation systems will save 10–15% in building energy.
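
As a hypothetical sketch of the basic idea behind such savings (sensor names and setpoints are invented for illustration, not taken from any cited system), an occupancy-driven setback is the simplest form of this control:

    # Hypothetical sketch: occupancy-driven HVAC setback. Rooms that the
    # sensor network reports as empty get a relaxed cooling setpoint.
    OCCUPIED_SETPOINT_C = 22.0    # comfort temperature while people are present
    UNOCCUPIED_SETPOINT_C = 27.0  # relaxed setpoint saves cooling energy

    def hvac_setpoint(occupancy_detected: bool) -> float:
        """Return the cooling setpoint the controller should track."""
        return OCCUPIED_SETPOINT_C if occupancy_detected else UNOCCUPIED_SETPOINT_C

    # Example: a wireless sensor network reports per-room occupancy.
    readings = {"room_101": True, "room_102": False}
    setpoints = {room: hvac_setpoint(present) for room, present in readings.items()}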

Case studies

The Holly Street Village Apartments

The city of Pasadena, California has adopted a general plan based on seven guiding principles: community needs and quality of life; preservation of Pasadena's historic character; economic vitality; a healthy family community; reduced dependence on automobiles; promotion of Pasadena as a cultural, scientific, corporate, entertainment and educational center for the region; and community participation.

This project included environmental and social considerations in the process of construction. The Holly Street Village Apartments in Pasadena address several of the principles outlined in Pasadena's general plan. The development incorporates mixed use, with a ground-floor retail center including a deli, a convenience store and an art gallery, and it is located near a light rail station. The goal of these strategies is to reduce the demand for automobiles and to make it easier for people to use public transportation.

Inn of the Anasazi

Zimmer Associates International, a real estate development firm, completed the Inn of the Anasazi in Santa Fe, New Mexico in 1991. Robert Zimmer (co-founder) and his partners, Steve Conger and Michael Fuller, set a goal to construct a building that would "showcase energy- and resource-saving technologies, strengthen local community, offer first class elegance, and financially reward its participants." The interior design of the hotel pays respect to the ancient Anasazi Indians, including locally crafted furniture, hand-made rugs, and Native American, Hispanic and cowboy wall art. The adobe exterior of the hotel recalls the historic pueblo style. Also, Zimmer and his partners repurposed a steel-framed building that had previously been used in the 1960s as a juvenile detention center, instead of starting the project from scratch. Other "green" characteristics of this Santa Fe hotel are skylights, energy-efficient lighting, and water-saving fixtures. The Inn of the Anasazi also stimulates the regional economy by purchasing locally grown organic food from Hispanic farmers. Lastly, the Inn encourages staff to participate in local nonprofit organizations and events that sponsor diverse local cultures. The Inn of the Anasazi integrates "social and environmental goals with financial considerations…"

Taipei 101

Taipei 101, stylized as TAIPEI 101, is a 1,667 feet (508 m) tall skyscraper in Taipei, Taiwan, which received LEED (Leadership in Energy and Environmental Design) certification from the U.S. Green Building Council with the highest score in history at the time. In this project, "TAIPEI" is an acronym for "technology", "art", "innovation", "people", "environment" and "identity". Dr. Hubert Keiber, CEO of Siemens Building Automation in Taipei, stresses energy efficiency on the grounds that energy is the single largest expense for commercial buildings. Since 2011, Taipei 101 has been regarded as the world's most environmentally responsible skyscraper, having reduced water use, energy use, and carbon emissions each by 10%.

Boulder, Colorado

Growth management/limitation (discussed previously) manifests itself in Boulder, Colorado. The city of Boulder very tightly restricts housing development by limiting housing permits to 400 per year, which is 1 percent of the city's total housing stock. Additionally, the city has purchased land outside of the city limits, designated for permanent, green open space. This 400-unit cap seriously hinders population growth in the city.

This shortage of housing has several repercussions. First, it increases housing prices. Also, because Boulder restricts housing development more than it does commercial development, the number of available workers in Boulder grows faster than the housing stock. This results in many workers who commute from beyond the city limits.

Controversy

A common critique of green development is that it negatively affects the way real estate developers do business, as it can increase cost and create delay. For example, becoming LEED-certified can contribute to additional costs. This includes additional building design and construction fees, interior design and construction fees, building operations and maintenance fees, neighborhood development fees, home and campus fees, and volume program fees.

Additionally, green development has been critiqued on a residential level. High-performance homes have proven to save energy in the long run, but they rapidly increase up-front capital costs, via tankless water heaters, radiant barriers and reflective insulation systems, and high efficiency air conditioning systems. Also, developers are often unable to develop on certain portions of land due to conservation easements. These easements are purchased by governments or non-governmental organizations, in order to "preserve land in its natural, scenic, agricultural, historical, forested, or open-space condition."

Caveat emptor

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Caveat_emptor

Caveat emptor (/ˈɛmptɔːr/; from caveat, "may he/she beware", a subjunctive form of cavēre, "to beware", + ēmptor, "buyer") is Latin for "Let the buyer beware". It has become a proverb in English. Generally, caveat emptor is the contract law principle that controls the sale of real property after the date of closing, but it may also apply to sales of other goods. The phrase caveat emptor and its use as a disclaimer of warranties arise from the fact that buyers typically have less information than the seller about the good or service they are purchasing. This situation is known as "information asymmetry": defects in the good or service may be hidden from the buyer and known only to the seller.

It is a short form of Caveat emptor, quia ignorare non debuit quod jus alienum emit ("Let a purchaser beware, for he ought not to be ignorant of the nature of the property which he is buying from another party."). That is, the buyer should assure himself that the product is good and that the seller has the right to sell it, as opposed to its being stolen property.

A common way that information asymmetry between seller and buyer has been addressed is through a legally binding warranty, such as a guarantee of satisfaction.

Explanation

Under the principle of caveat emptor, the buyer could not recover damages from the seller for defects in the property that rendered it unfit for ordinary purposes. The only exception was if the seller actively concealed latent defects or otherwise made material misrepresentations amounting to fraud.

Before statutory protections existed, the buyer had no express warranty ensuring the quality of goods. In the UK, the law requires that goods be "fit for the particular purpose" and of "merchantable quality", per Section 14 of the Sale of Goods Act, but this implied warranty can be difficult to enforce and may not apply to all products. Hence, buyers are still advised to be cautious.

By country

United States

Real estate

The modern trend in the U.S. is that the implied warranty of fitness for a particular purpose applies in the real-estate context only to the sale of new residential housing by a builder-seller, and that the caveat emptor rule applies to all other real-estate sales (e.g., homeowner to buyer). Other jurisdictions have similar provisions.

Chattel property

Under Article 2 of the Uniform Commercial Code, the sale of new goods is governed by the "perfect-tender" rule unless the parties to the sale expressly agree in advance to terms equivalent to caveat emptor (such as describing the goods as sold "as is" and/or "with all faults") or to other limitations, such as the limitations on remedies discussed below.

The perfect-tender rule states that if a buyer who inspects new goods with reasonable promptness discovers them to be "nonconforming" (failing to meet the description provided or any other standards reasonably expectable by a buyer in his or her situation) and does not use the goods or take other actions constituting acceptance of them, the buyer may promptly return or refuse to accept ("reject") them and demand that the defect be remedied ("cured").

When goods fitting the same description and expectations are available for sale (e.g., when the vendor has other instances of the same mass-produced merchandise in stock), either the vendor or the buyer may insist on an "even exchange" for other, "conforming" instances of the product. When conforming goods are not in stock but are available for the dealer to purchase (usually on the open or "spot" market), the buyer may require the seller to obtain the goods elsewhere, even at a higher price, with the seller incurring a loss equivalent to the price difference.

If the vendor still does not or cannot provide the goods and the dispute proceeds to litigation (as opposed to renegotiation or settlement), then, as in all cases of vendor breach of contract, the buyer may recover only the damages that he or she would have suffered after taking all feasible steps to minimize ("mitigate") them.
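
The remedy ladder described above can be easier to follow when laid out schematically. The Python sketch below is a reading aid only, not a statement of the UCC or legal advice; the function and parameter names are illustrative inventions.

# Schematic summary of the perfect-tender remedy ladder described
# above. A reading aid only, not a statement of the UCC.

def perfect_tender_remedy(nonconforming: bool,
                          buyer_accepted: bool,
                          conforming_in_stock: bool,
                          obtainable_elsewhere: bool) -> str:
    """Map the conditions in the paragraphs above to the remedy named."""
    if not nonconforming or buyer_accepted:
        # Goods conform, or the buyer's conduct amounts to acceptance:
        # rejection under the perfect-tender rule is unavailable.
        return "no rejection available"
    if conforming_in_stock:
        # The vendor holds other instances of the same goods.
        return "even exchange for conforming goods"
    if obtainable_elsewhere:
        # The seller must source conforming goods, bearing any price difference.
        return "seller obtains conforming goods elsewhere at its own loss"
    # The dispute proceeds to litigation; the buyer must mitigate damages.
    return "buyer recovers damages, reduced by any failure to mitigate"

# Example: promptly rejected nonconforming goods with replacements in stock.
print(perfect_tender_remedy(True, False, True, False))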

As a default rule, the perfect-tender rule may be "contracted around" in ways that specify or limit a buyer's remedies (and that accordingly reduce the market price that rational buyers are willing to pay for the goods). In many cases, the vendor will not provide a refund but will provide store credit. In the cases of software, movies, and other copyrighted material, many vendors will offer only a direct exchange for another copy of the same title, with the effect that the initial transfer or license of intellectual-property rights is preserved. Most stores require proof of purchase and impose time limits on exchanges or refunds. Some larger chain stores, such as F.Y.E., Staples, Target, or Walmart, will, however, do exchanges or refunds at any time, with or without proof of purchase, although they usually require a form of picture identification and place per-transaction and/or per-person quantity or dollar limitations on such returns.

United Kingdom

In the UK, consumer law has moved away from the caveat emptor model, with laws passed that have enhanced consumer rights and allow greater leeway to return goods that do not meet legal standards of acceptance. Consumer purchases are regulated by the Consumer Rights Act 2015, whilst business-to-business purchases are regulated by the Sale of Goods Act 1979.

In the UK, consumers have the right to a full refund for faulty goods. However, traditionally, many retailers allow customers to return goods within a specified period (typically two weeks to two months) for a full refund or an exchange, even if there is no fault with the product. Exceptions may apply for goods sold as damaged or to clear.

Goods bought through "distance selling," for example online or by phone, also have a statutory "cooling-off" period of fourteen calendar days during which the purchase contract can be cancelled and treated as though it had never been made.

Although no longer applied in consumer law, the principle of caveat emptor is generally held to apply to transactions between businesses unless it can be shown that the seller had a clear information advantage over the buyer that could not have been removed by carrying out reasonable due diligence.

Variations

Caveat venditor

Caveat venditor is Latin for "let the seller beware."

In the landmark case of MacPherson v. Buick Motor Co. (1916), New York Court of Appeals Judge Benjamin N. Cardozo established that privity of contract is no longer required for a product-liability lawsuit against the seller. This case is widely regarded as the origin of caveat venditor as it pertains to modern tort law in the US.

Caveat lector

Caveat lector is a Latin phrase meaning "let the reader beware". It means that when reading something, the reader should take careful note of the contents, and undertake due diligence on whether the contents are accurate, relevant, reliable and so forth.

Caveat auditor

Another variant is caveat auditor, or "let the listener beware", which urges caution regarding all messages, particularly spoken messages such as a radio advertisement.

Inequality (mathematics)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Inequality...