Saturday, December 16, 2023

Greatest common divisor


From Wikipedia, the free encyclopedia

In mathematics, the greatest common divisor (GCD) of two or more integers, which are not all zero, is the largest positive integer that divides each of the integers. For two integers x, y, the greatest common divisor of x and y is denoted gcd(x, y). For example, the GCD of 8 and 12 is 4, that is, gcd(8, 12) = 4.

In the name "greatest common divisor", the adjective "greatest" may be replaced by "highest", and the word "divisor" may be replaced by "factor", so that other names include highest common factor (hcf), etc. Historically, other names for the same concept have included greatest common measure.

This notion can be extended to polynomials (see Polynomial greatest common divisor) and other commutative rings (see § In commutative rings below).

Overview

Definition

The greatest common divisor (GCD) of integers a and b, at least one of which is nonzero, is the greatest positive integer d such that d is a divisor of both a and b; that is, there are integers e and f such that a = de and b = df, and d is the largest such integer. The GCD of a and b is generally denoted gcd(a, b).

When one of a and b is zero, the GCD is the absolute value of the nonzero integer: gcd(a, 0) = gcd(0, a) = |a|. This case is important as the terminating step of the Euclidean algorithm.

The above definition is unsuitable for defining gcd(0, 0), since there is no greatest integer n such that 0 × n = 0. However, zero is its own greatest divisor if greatest is understood in the context of the divisibility relation, so gcd(0, 0) is commonly defined as 0. This preserves the usual identities for GCD, and in particular Bézout's identity, namely that gcd(a, b) generates the same ideal as {a, b}. This convention is followed by many computer algebra systems.[12] Nonetheless, some authors leave gcd(0, 0) undefined.

The GCD of a and b is their greatest positive common divisor in the preorder relation of divisibility. This means that the common divisors of a and b are exactly the divisors of their GCD. This is commonly proved by using either Euclid's lemma, the fundamental theorem of arithmetic, or the Euclidean algorithm. This is the meaning of "greatest" that is used for the generalizations of the concept of GCD.

Example

The number 54 can be expressed as a product of two integers in several different ways:

    54 × 1 = 27 × 2 = 18 × 3 = 9 × 6.

Thus the complete list of divisors of 54 is 1, 2, 3, 6, 9, 18, 27, 54. Similarly, the divisors of 24 are 1, 2, 3, 4, 6, 8, 12, 24. The numbers that these two lists have in common are the common divisors of 54 and 24, that is,

    1, 2, 3, 6.

Of these, the greatest is 6, so it is the greatest common divisor:

    gcd(54, 24) = 6.

Computing all divisors of the two numbers in this way is usually not efficient, especially for large numbers that have many divisors. Much more efficient methods are described in § Calculation.
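A minimal sketch of this brute-force approach in Python, for illustration only (the function names are mine, not from the article):

    def divisors(n):
        """Return the set of all positive divisors of n (trial division)."""
        return {d for d in range(1, n + 1) if n % d == 0}

    def gcd_naive(a, b):
        """GCD as the largest element of the intersection of divisor sets."""
        return max(divisors(a) & divisors(b))

    print(sorted(divisors(54) & divisors(24)))  # [1, 2, 3, 6]
    print(gcd_naive(54, 24))                    # 6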

Coprime numbers

Two numbers are called relatively prime, or coprime, if their greatest common divisor equals 1. For example, 9 and 28 are coprime.

A geometric view

"Tall, slender rectangle divided into a grid of squares. The rectangle is two squares wide and five squares tall."
A 24-by-60 rectangle is covered with ten 12-by-12 square tiles, where 12 is the GCD of 24 and 60. More generally, an a-by-b rectangle can be covered with square tiles of side length c only if c is a common divisor of a and b.

For example, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can thus be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).

Applications

Reducing fractions

The greatest common divisor is useful for reducing fractions to the lowest terms. For example, gcd(42, 56) = 14, therefore,

    42/56 = (3 × 14)/(4 × 14) = 3/4.
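As a minimal sketch, the same reduction in Python using the standard library's math.gcd (fractions.Fraction also reduces automatically):

    from math import gcd

    def reduce_fraction(num, den):
        """Divide numerator and denominator by their GCD."""
        g = gcd(num, den)
        return num // g, den // g

    print(reduce_fraction(42, 56))  # (3, 4)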

Least common multiple

The least common multiple of two integers that are not both zero can be computed from their greatest common divisor, by using the relation

    lcm(a, b) = |a · b| / gcd(a, b).
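A direct translation of this relation into Python (a sketch; recent Python versions also provide math.lcm):

    from math import gcd

    def lcm(a, b):
        """Least common multiple via lcm(a, b) = |a*b| // gcd(a, b)."""
        if a == 0 and b == 0:
            return 0
        return abs(a * b) // gcd(a, b)

    print(lcm(42, 56))  # 168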

Calculation

Using prime factorizations

Greatest common divisors can be computed by determining the prime factorizations of the two numbers and comparing factors. For example, to compute gcd(48, 180), we find the prime factorizations 48 = 2^4 · 3^1 and 180 = 2^2 · 3^2 · 5^1; the GCD is then 2^min(4,2) · 3^min(1,2) · 5^min(0,1) = 2^2 · 3^1 · 5^0 = 12. The corresponding LCM is then 2^max(4,2) · 3^max(1,2) · 5^max(0,1) = 2^4 · 3^2 · 5^1 = 720.

In practice, this method is only feasible for small numbers, as computing prime factorizations takes too long.
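For illustration, a minimal Python sketch of this method (trial-division factorization, so it is only practical for small inputs, as noted above; the names are mine):

    from collections import Counter

    def prime_factors(n):
        """Trial division: return a Counter mapping each prime to its exponent."""
        factors = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    def gcd_by_factorization(a, b):
        """Multiply each shared prime raised to the minimum exponent."""
        fa, fb = prime_factors(a), prime_factors(b)
        g = 1
        for p in fa:
            g *= p ** min(fa[p], fb[p])  # absent primes have exponent 0
        return g

    print(gcd_by_factorization(48, 180))  # 12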

Euclid's algorithm

The method introduced by Euclid for computing greatest common divisors is based on the fact that, given two positive integers a and b such that a > b, the common divisors of a and b are the same as the common divisors of a − b and b.

So, Euclid's method for computing the greatest common divisor of two positive integers consists of replacing the larger number by the difference of the numbers, and repeating this until the two numbers are equal: that is their greatest common divisor.

For example, to compute gcd(48,18), one proceeds as follows:

    gcd(48, 18) → gcd(48 − 18, 18) = gcd(30, 18) → gcd(30 − 18, 18) = gcd(12, 18)
    → gcd(12, 18 − 12) = gcd(12, 6) → gcd(12 − 6, 6) = gcd(6, 6).

So gcd(48, 18) = 6.
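A sketch of the subtraction form in Python (assuming positive integer inputs):

    def gcd_subtraction(a, b):
        """Euclid's method: replace the larger number by the difference
        until the two numbers are equal; that value is the GCD."""
        while a != b:
            if a > b:
                a -= b
            else:
                b -= a
        return a

    print(gcd_subtraction(48, 18))  # 6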

This method can be very slow if one number is much larger than the other. So, the variant that follows is generally preferred.

Euclidean algorithm

Animation showing an application of the Euclidean algorithm to find the greatest common divisor of 62 and 36, which is 2.

A more efficient method is the Euclidean algorithm, a variant in which the difference of the two numbers a and b is replaced by the remainder of the Euclidean division (also called division with remainder) of a by b.

Denoting this remainder as a mod b, the algorithm replaces (a, b) by (b, a mod b) repeatedly until the pair is (d, 0), where d is the greatest common divisor.

For example, to compute gcd(48,18), the computation is as follows:

    gcd(48, 18) → gcd(18, 48 mod 18) = gcd(18, 12) → gcd(12, 18 mod 12) = gcd(12, 6)
    → gcd(6, 12 mod 6) = gcd(6, 0).

This again gives gcd(48, 18) = 6.
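The remainder-based variant is a few lines in Python (a sketch; the standard library's math.gcd does the same job):

    def gcd_euclid(a, b):
        """Euclidean algorithm: replace (a, b) with (b, a mod b) until b = 0."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd_euclid(48, 18))  # 6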

Lehmer's GCD algorithm

Lehmer's algorithm is based on the observation that the initial quotients produced by Euclid's algorithm can be determined based on only the first few digits; this is useful for numbers that are larger than a computer word. In essence, one extracts initial digits, typically forming one or two computer words, and runs Euclid's algorithm on these smaller numbers, as long as it is guaranteed that the quotients are the same as those that would be obtained with the original numbers. The quotients are collected into a small 2-by-2 transformation matrix (a matrix of single-word integers) to reduce the original numbers. This process is repeated until the numbers are small enough that the binary algorithm (see below) is more efficient.

This algorithm improves speed, because it reduces the number of operations on very large numbers, and can use hardware arithmetic for most operations. In fact, most of the quotients are very small, so a fair number of steps of the Euclidean algorithm can be collected in a 2-by-2 matrix of single-word integers. When Lehmer's algorithm encounters a quotient that is too large, it must fall back to one iteration of the Euclidean algorithm, with a Euclidean division of large numbers.

Binary GCD algorithm

The binary GCD algorithm uses only subtraction and division by 2. The method is as follows: Let a and b be the two non-negative integers. Let the integer d be 0. There are five possibilities:

  • a = b.

As gcd(a, a) = a, the desired GCD is a × 2^d (as a and b are changed in the other cases, and d records the number of times that a and b have been both divided by 2 in the next step, the GCD of the initial pair is the product of a and 2^d).

  • Both a and b are even.

Then 2 is a common divisor. Divide both a and b by 2, increment d by 1 to record the number of times 2 is a common divisor and continue.

  • a is even and b is odd.

Then 2 is not a common divisor. Divide a by 2 and continue.

  • a is odd and b is even.

Then 2 is not a common divisor. Divide b by 2 and continue.

  • Both a and b are odd.

As gcd(a, b) = gcd(b, a), if a < b then exchange a and b. The number c = a − b is positive and smaller than a. Any number that divides a and b must also divide c, so every common divisor of a and b is also a common divisor of b and c. Similarly, a = b + c, and every common divisor of b and c is also a common divisor of a and b. So the two pairs (a, b) and (b, c) have the same common divisors, and thus gcd(a, b) = gcd(b, c). Moreover, as a and b are both odd, c is even, so the process can be continued with the pair (a, b) replaced by the smaller numbers (c/2, b) without changing the GCD.

Each of the above steps reduces at least one of a and b while leaving them non-negative and so can only be repeated a finite number of times. Thus eventually the process results in a = b, the stopping case. Then the GCD is a × 2^d.

Example: (a, b, d) = (48, 18, 0) → (24, 9, 1) → (12, 9, 1) → (6, 9, 1) → (3, 9, 1) → (3, 3, 1); the original GCD is thus the product 6 of a = b = 3 and 2^d = 2^1 = 2.
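A Python sketch of the five-case procedure above (assuming non-negative inputs; the zero cases are handled first so the loop always terminates):

    def gcd_binary(a, b):
        """Binary GCD: only parity tests, halving, and subtraction."""
        if a == 0:
            return b
        if b == 0:
            return a
        d = 0  # number of times 2 divided both a and b
        while a != b:
            if a % 2 == 0 and b % 2 == 0:   # both even: 2 is a common divisor
                a //= 2
                b //= 2
                d += 1
            elif a % 2 == 0:                # only a even: drop the factor 2
                a //= 2
            elif b % 2 == 0:                # only b even: drop the factor 2
                b //= 2
            else:                           # both odd: replace by ((a - b)/2, b)
                if a < b:
                    a, b = b, a
                a = (a - b) // 2
            # invariant: gcd(original a, original b) == gcd(a, b) * 2**d
        return a * 2 ** d

    print(gcd_binary(48, 18))  # 6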

The binary GCD algorithm is particularly easy to implement on binary computers. Its computational complexity is

    O((log a)(log b)).

The computational complexity is usually given in terms of the length n of the input. Here, this length is n = log a + log b, and the complexity is thus

    O(n^2).

Other methods

[Figure: a plot related to Thomae's function; hatching at the bottom indicates points omitted due to their extremely high density.]

If a and b are both nonzero, the greatest common divisor of a and b can be computed by using the least common multiple (LCM) of a and b:

    gcd(a, b) = |a · b| / lcm(a, b),

but more commonly the LCM is computed from the GCD.

Using Thomae's function f,

    gcd(a, b) = a f(b/a),

which generalizes to a and b rational numbers or commensurable real numbers.

Keith Slavin has shown that for odd a ≥ 1:

    gcd(a, b) = log₂ ∏_{k=0}^{a−1} (1 + e^{2πikb/a}),

which is a function that can be evaluated for complex b. Wolfgang Schramm has shown that

    gcd(a, b) = ∑_{k=1}^{a} e^{2πikb/a} ∑_{d|a} c_d(k)/d

is an entire function in the variable b for all positive integers a, where c_d(k) is Ramanujan's sum.
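As a quick numerical sanity check of Slavin's identity (a sketch using floating-point complex arithmetic, so expect small rounding error):

    import cmath
    import math

    def gcd_slavin(a, b):
        """Evaluate log2 of the product (1 + e^(2*pi*i*k*b/a)), k = 0..a-1,
        which equals gcd(a, b) for odd a >= 1."""
        prod = complex(1, 0)
        for k in range(a):
            prod *= 1 + cmath.exp(2j * math.pi * k * b / a)
        return math.log2(prod.real)

    print(round(gcd_slavin(9, 6)))    # 3
    print(round(gcd_slavin(35, 21)))  # 7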

Complexity

The computational complexity of the computation of greatest common divisors has been widely studied. If one uses the Euclidean algorithm and the elementary algorithms for multiplication and division, the computation of the greatest common divisor of two integers of at most n bits is O(n^2). This means that the computation of greatest common divisor has, up to a constant factor, the same complexity as the multiplication.

However, if a fast multiplication algorithm is used, one may modify the Euclidean algorithm for improving the complexity, but the computation of a greatest common divisor becomes slower than the multiplication. More precisely, if the multiplication of two integers of n bits takes a time of T(n), then the fastest known algorithm for greatest common divisor has a complexity O(T(n) log n). This implies that the fastest known algorithm has a complexity of O(n (log n)^2).

Previous complexities are valid for the usual models of computation, specifically multitape Turing machines and random-access machines.

The computation of greatest common divisors thus belongs to the class of problems solvable in quasilinear time. A fortiori, the corresponding decision problem belongs to the class P of problems solvable in polynomial time. The GCD problem is not known to be in NC, and so there is no known way to parallelize it efficiently; nor is it known to be P-complete, which would imply that it is unlikely to be possible to efficiently parallelize GCD computation. Shallcross et al. showed that a related problem (EUGCD, determining the remainder sequence arising during the Euclidean algorithm) is NC-equivalent to the problem of integer linear programming with two variables; if either problem is in NC or is P-complete, the other is as well.[19] Since NC contains NL, it is also unknown whether a space-efficient algorithm for computing the GCD exists, even for nondeterministic Turing machines.

Although the problem is not known to be in NC, parallel algorithms asymptotically faster than the Euclidean algorithm exist; the fastest known deterministic algorithm is by Chor and Goldreich, which (in the CRCW-PRAM model) can solve the problem in O(n/log n) time with n^(1+ε) processors. Randomized algorithms can solve the problem in O((log n)^2) time on exp(O(√(n log n))) processors (this is superpolynomial).

Properties

  • For positive integers a, gcd(a, a) = a.
  • Every common divisor of a and b is a divisor of gcd(a, b).
  • gcd(a, b), where a and b are not both zero, may be defined alternatively and equivalently as the smallest positive integer d which can be written in the form d = ap + bq, where p and q are integers. This expression is called Bézout's identity. Numbers p and q like this can be computed with the extended Euclidean algorithm (see the sketch after this list).
  • gcd(a, 0) = |a|, for a ≠ 0, since any number is a divisor of 0, and the greatest divisor of a is |a|.[2][5] This is usually used as the base case in the Euclidean algorithm.
  • If a divides the product bc, and gcd(a, b) = d, then a/d divides c.
  • If m is a positive integer, then gcd(ma, mb) = m⋅gcd(a, b).
  • If m is any integer, then gcd(a + mb, b) = gcd(a, b). Equivalently, gcd(a mod b,b) = gcd(a,b).
  • If m is a positive common divisor of a and b, then gcd(a/m, b/m) = gcd(a, b)/m.
  • The GCD is a commutative function: gcd(a, b) = gcd(b, a).
  • The GCD is an associative function: gcd(a, gcd(b, c)) = gcd(gcd(a, b), c). Thus gcd(a, b, c, ...) can be used to denote the GCD of multiple arguments.
  • The GCD is a multiplicative function in the following sense: if a1 and a2 are relatively prime, then gcd(a1a2, b) = gcd(a1, b)⋅gcd(a2, b).
  • gcd(a, b) is closely related to the least common multiple lcm(a, b): we have
    gcd(a, b)⋅lcm(a, b) = |ab|.
This formula is often used to compute least common multiples: one first computes the GCD with Euclid's algorithm and then divides the product of the given numbers by their GCD.
  • The following versions of distributivity hold true:
    gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c))
    lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)).
  • If we have the unique prime factorizations of a = p1^e1 p2^e2 ⋯ pm^em and b = p1^f1 p2^f2 ⋯ pm^fm where ei ≥ 0 and fi ≥ 0, then the GCD of a and b is
    gcd(a, b) = p1^min(e1,f1) p2^min(e2,f2) ⋯ pm^min(em,fm).
  • It is sometimes useful to define gcd(0, 0) = 0 and lcm(0, 0) = 0 because then the natural numbers become a complete distributive lattice with GCD as meet and LCM as join operation. This extension of the definition is also compatible with the generalization for commutative rings given below.
  • In a Cartesian coordinate system, gcd(a, b) can be interpreted as the number of segments between points with integral coordinates on the straight line segment joining the points (0, 0) and (a, b).
  • For non-negative integers a and b, where a and b are not both zero, provable by considering the Euclidean algorithm in base n:
    gcd(n^a − 1, n^b − 1) = n^gcd(a,b) − 1.
  • An identity involving Euler's totient function:
    gcd(a, b) = ∑_{k | a and k | b} φ(k).
  • ν_p(gcd(a, b)) = min(ν_p(a), ν_p(b)), where ν_p is the p-adic valuation.
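A sketch of the extended Euclidean algorithm referenced in the Bézout item above; it returns the coefficients p and q together with the GCD:

    def extended_gcd(a, b):
        """Return (g, p, q) with g = gcd(a, b) and g = a*p + b*q."""
        old_r, r = a, b
        old_p, p = 1, 0
        old_q, q = 0, 1
        while r != 0:
            quotient = old_r // r
            old_r, r = r, old_r - quotient * r
            old_p, p = p, old_p - quotient * p
            old_q, q = q, old_q - quotient * q
        return old_r, old_p, old_q

    g, p, q = extended_gcd(48, 18)
    print(g, p, q, 48 * p + 18 * q)  # 6 -1 3 6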

Probabilities and expected value

In 1972, James E. Nymann showed that k integers, chosen independently and uniformly from {1, ..., n}, are coprime with probability 1/ζ(k) as n goes to infinity, where ζ refers to the Riemann zeta function. (See coprime for a derivation.) This result was extended in 1987 to show that the probability that k random integers have greatest common divisor d is d^−k/ζ(k).

Using this information, the expected value of the greatest common divisor function can be seen (informally) to not exist when k = 2. In this case the probability that the GCD equals d is d^−2/ζ(2), and since ζ(2) = π^2/6 we have

    E(2) = ∑_{d=1}^∞ d · d^−2/ζ(2) = (6/π^2) ∑_{d=1}^∞ 1/d.

This last summation is the harmonic series, which diverges. However, when k ≥ 3, the expected value is well-defined, and by the above argument, it is

    E(k) = ∑_{d=1}^∞ d^(1−k)/ζ(k) = ζ(k − 1)/ζ(k).

For k = 3, this is approximately equal to 1.3684. For k = 4, it is approximately 1.1106.
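A small Monte Carlo sketch consistent with these values (the sampling setup is my own illustration; averages fluctuate with the sample size):

    import math
    import random
    from functools import reduce

    def avg_gcd(k, n, trials=200_000):
        """Average GCD of k integers drawn uniformly from 1..n."""
        total = 0
        for _ in range(trials):
            total += reduce(math.gcd, (random.randint(1, n) for _ in range(k)))
        return total / trials

    print(avg_gcd(3, 10**6))  # close to zeta(2)/zeta(3) = 1.3684...
    print(avg_gcd(4, 10**6))  # close to zeta(3)/zeta(4) = 1.1106...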

In commutative rings

The notion of greatest common divisor can more generally be defined for elements of an arbitrary commutative ring, although in general there need not exist one for every pair of elements.

If R is a commutative ring, and a and b are in R, then an element d of R is called a common divisor of a and b if it divides both a and b (that is, if there are elements x and y in R such that d·x = a and d·y = b). If d is a common divisor of a and b, and every common divisor of a and b divides d, then d is called a greatest common divisor of a and b.

With this definition, two elements a and b may very well have several greatest common divisors, or none at all. If R is an integral domain, then any two GCDs of a and b must be associate elements, since by definition either one must divide the other; indeed, if a GCD exists, any one of its associates is a GCD as well. Existence of a GCD is not assured in arbitrary integral domains. However, if R is a unique factorization domain, then any two elements have a GCD, and more generally this is true in GCD domains. If R is a Euclidean domain in which Euclidean division is given algorithmically (as is the case for instance when R = F[X] where F is a field, or when R is the ring of Gaussian integers), then greatest common divisors can be computed using a form of the Euclidean algorithm based on the division procedure.

The following is an example of an integral domain with two elements that do not have a GCD:

    R = Z[√−3],  a = 4 = 2 · 2 = (1 + √−3)(1 − √−3),  b = (1 + √−3) · 2.

The elements 2 and 1 + √−3 are two maximal common divisors (that is, any common divisor which is a multiple of 2 is associated to 2, and the same holds for 1 + √−3), but they are not associated, so there is no greatest common divisor of a and b.

Corresponding to the Bézout property we may, in any commutative ring, consider the collection of elements of the form pa + qb, where p and q range over the ring. This is the ideal generated by a and b, and is denoted simply (a, b). In a ring all of whose ideals are principal (a principal ideal domain or PID), this ideal will be identical with the set of multiples of some ring element d; then this d is a greatest common divisor of a and b. But the ideal (a, b) can be useful even when there is no greatest common divisor of a and b. (Indeed, Ernst Kummer used this ideal as a replacement for a GCD in his treatment of Fermat's Last Theorem, although he envisioned it as the set of multiples of some hypothetical, or ideal, ring element d, whence the ring-theoretic term.)

Interactive proof system

From Wikipedia, the free encyclopedia
General representation of an interactive proof protocol.

In computational complexity theory, an interactive proof system is an abstract machine that models computation as the exchange of messages between two parties: a prover and a verifier. The parties interact by exchanging messages in order to ascertain whether a given string belongs to a language or not. The prover possesses unlimited computational resources but cannot be trusted, while the verifier has bounded computation power but is assumed to be always honest. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct.

All interactive proof systems have two requirements:

  • Completeness: if the statement is true, the honest prover (that is, one following the protocol properly) can convince the honest verifier that it is indeed true.
  • Soundness: if the statement is false, no prover, even if it doesn't follow the protocol, can convince the honest verifier that it is true, except with some small probability.

The specific nature of the system, and so the complexity class of languages it can recognize, depends on what sort of bounds are put on the verifier, as well as what abilities it is given—for example, most interactive proof systems depend critically on the verifier's ability to make random choices. It also depends on the nature of the messages exchanged—how many and what they can contain. Interactive proof systems have been found to have some important implications for traditional complexity classes defined using only one machine. The main complexity classes describing interactive proof systems are AM and IP.

Background

Every interactive proof system defines a formal language of strings L. Soundness of the proof system refers to the property that no prover can make the verifier accept for the wrong statement except with some small probability. The upper bound of this probability is referred to as the soundness error of a proof system. More formally, for every prover P̃ and every y ∉ L:

    Pr[(P̃, V) accepts y] ≤ ε

for some ε < 1. As long as the soundness error ε is bounded by a polynomial fraction of the potential running time of the verifier (i.e., ε ≤ 1/poly(|y|)), it is always possible to amplify soundness until the soundness error becomes a negligible function relative to the running time of the verifier. This is achieved by repeating the proof and accepting only if all proofs verify. After q repetitions, a soundness error ε will be reduced to ε^q.
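A tiny sketch of this amplification arithmetic (illustrative numbers only):

    # Repeating a proof q times and accepting only if every repetition
    # accepts multiplies the soundness error: epsilon -> epsilon**q.
    epsilon = 1 / 3  # single-round soundness error, e.g. the MA bound
    for q in (1, 10, 40):
        print(q, epsilon ** q)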

Classes of interactive proofs

NP

The complexity class NP may be viewed as a very simple proof system. In this system, the verifier is a deterministic, polynomial-time machine (a P machine). The protocol is:

  • The prover looks at the input and computes the solution using its unlimited power and returns a polynomial-size proof certificate.
  • The verifier verifies that the certificate is valid in deterministic polynomial time. If it is valid, it accepts; otherwise, it rejects.

In the case where a valid proof certificate exists, the prover is always able to make the verifier accept by giving it that certificate. In the case where there is no valid proof certificate, however, the input is not in the language, and no prover, however malicious it is, can convince the verifier otherwise, because any proof certificate will be rejected.
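As an illustration (a toy instance, not from the article), here is a deterministic polynomial-time verifier for graph 3-colorability, an NP language; the certificate is a claimed coloring, and the verifier simply checks every edge:

    def verify_3_coloring(edges, coloring):
        """Accept iff the certificate is a valid 3-coloring of the graph."""
        if any(c not in (0, 1, 2) for c in coloring.values()):
            return False
        return all(coloring[u] != coloring[v] for u, v in edges)

    triangle = [(0, 1), (1, 2), (2, 0)]
    print(verify_3_coloring(triangle, {0: 0, 1: 1, 2: 2}))  # True  (accept)
    print(verify_3_coloring(triangle, {0: 0, 1: 1, 2: 1}))  # False (reject)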

Arthur–Merlin and Merlin–Arthur protocols

Although NP may be viewed as using interaction, it wasn't until 1985 that the concept of computation through interaction was conceived (in the context of complexity theory) by two independent groups of researchers. One approach, by László Babai, who published "Trading group theory for randomness", defined the Arthur–Merlin (AM) class hierarchy. In this presentation, Arthur (the verifier) is a probabilistic, polynomial-time machine, while Merlin (the prover) has unbounded resources.

The class MA in particular is a simple generalization of the NP interaction above in which the verifier is probabilistic instead of deterministic. Also, instead of requiring that the verifier always accept valid certificates and reject invalid certificates, it is more lenient:

  • Completeness: if the string is in the language, the prover must be able to give a certificate such that the verifier will accept with probability at least 2/3 (depending on the verifier's random choices).
  • Soundness: if the string is not in the language, no prover, however malicious, will be able to convince the verifier to accept the string with probability exceeding 1/3.

This machine is potentially more powerful than an ordinary NP interaction protocol, and the certificates are no less practical to verify, since BPP algorithms are considered to capture practical computation (see BPP).

Public coin protocol versus private coin protocol

In a public coin protocol, the random choices made by the verifier are made public. They remain private in a private coin protocol.

In the same conference where Babai defined his proof system for MA, Shafi Goldwasser, Silvio Micali and Charles Rackoff published a paper defining the interactive proof system IP[f(n)]. This has the same machines as the MA protocol, except that f(n) rounds are allowed for an input of size n. In each round, the verifier performs computation and passes a message to the prover, and the prover performs computation and passes information back to the verifier. At the end the verifier must make its decision. For example, in an IP[3] protocol, the sequence would be VPVPVPV, where V is a verifier turn and P is a prover turn.

In Arthur–Merlin protocols, Babai defined a similar class AM[f(n)] which allowed f(n) rounds, but he put one extra condition on the machine: the verifier must show the prover all the random bits it uses in its computation. The result is that the verifier cannot "hide" anything from the prover, because the prover is powerful enough to simulate everything the verifier does if it knows what random bits it used. This is called a public coin protocol, because the random bits ("coin flips") are visible to both machines. The IP approach is called a private coin protocol by contrast.

The essential problem with public coins is that if the prover wishes to maliciously convince the verifier to accept a string which is not in the language, it seems like the verifier might be able to thwart its plans if it can hide its internal state from it. This was a primary motivation in defining the IP proof systems.

In 1986, Goldwasser and Sipser showed, perhaps surprisingly, that the verifier's ability to hide coin flips from the prover does it little good after all, in that an Arthur–Merlin public-coin protocol with only two more rounds can recognize all the same languages. The result is that public-coin and private-coin protocols are roughly equivalent. In fact, as Babai showed in 1988, AM[k] = AM for all constant k, so IP[k] has no advantage over AM.

To demonstrate the power of these classes, consider the graph isomorphism problem, the problem of determining whether it is possible to permute the vertices of one graph so that it is identical to another graph. This problem is in NP, since the proof certificate is the permutation which makes the graphs equal. It turns out that the complement of the graph isomorphism problem, a co-NP problem not known to be in NP, has an AM algorithm, and the best way to see this is via a private-coin algorithm, as sketched below.
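A toy simulation of that private-coin protocol for graph non-isomorphism (my own illustrative code: the verifier secretly picks one of the two graphs, sends a random relabeling of it, and accepts the round if the prover identifies which graph was chosen):

    import random
    from itertools import permutations

    def permute(adj, perm):
        """Relabel an adjacency-set graph {u: set_of_neighbors} by perm[u]."""
        return {perm[u]: {perm[v] for v in nbrs} for u, nbrs in adj.items()}

    def gni_round(g0, g1, prover):
        """One round: the verifier's coin i and permutation stay private."""
        i = random.randrange(2)          # private coin
        perm = list(range(len(g0)))
        random.shuffle(perm)
        challenge = permute((g0, g1)[i], perm)
        return prover(g0, g1, challenge) == i

    def honest_prover(g0, g1, challenge):
        """Unbounded prover: brute-force which graph the challenge matches.
        If the graphs are non-isomorphic, this always answers correctly."""
        for p in permutations(range(len(g0))):
            if permute(g0, list(p)) == challenge:
                return 0
        return 1

    path = {0: {1}, 1: {0, 2}, 2: {1}}       # non-isomorphic pair:
    tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # a path and a triangle
    print(all(gni_round(path, tri, honest_prover) for _ in range(20)))  # True

If the two graphs were instead isomorphic, even an all-powerful prover could name i with probability at most 1/2 per round, which is exactly the soundness gap the protocol exploits.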

IP

Private coins may not be helpful, but more rounds of interaction are helpful. If we allow the probabilistic verifier machine and the all-powerful prover to interact for a polynomial number of rounds, we get the class of problems called IP. In 1992, Adi Shamir revealed in one of the central results of complexity theory that IP equals PSPACE, the class of problems solvable by an ordinary deterministic Turing machine in polynomial space.

QIP

If we allow the elements of the system to use quantum computation, the system is called a quantum interactive proof system, and the corresponding complexity class is called QIP. A series of results culminated in a 2010 breakthrough that QIP = PSPACE.

Zero knowledge

Not only can interactive proof systems solve problems not believed to be in NP, but under assumptions about the existence of one-way functions, a prover can convince the verifier of the solution without ever giving the verifier information about the solution. This is important when the verifier cannot be trusted with the full solution. At first it seems impossible that the verifier could be convinced that there is a solution when the verifier has not seen a certificate, but such proofs, known as zero-knowledge proofs, are in fact believed to exist for all problems in NP and are valuable in cryptography. Zero-knowledge proofs were first mentioned in the original 1985 paper on IP by Goldwasser, Micali and Rackoff for specific number-theoretic languages. The extent of their power was, however, shown by Oded Goldreich, Silvio Micali and Avi Wigderson for all of NP, and this was first extended by Russell Impagliazzo and Moti Yung to all of IP.

MIP

One goal of IP's designers was to create the most powerful possible interactive proof system, and at first it seems like it cannot be made more powerful without making the verifier more powerful and so impractical. Goldwasser et al. overcame this in their 1988 "Multi prover interactive proofs: How to remove intractability assumptions", which defines a variant of IP called MIP in which there are two independent provers. The two provers cannot communicate once the verifier has begun sending messages to them. Just as it's easier to tell if a criminal is lying if he and his partner are interrogated in separate rooms, it's considerably easier to detect a malicious prover trying to trick the verifier into accepting a string not in the language if there is another prover it can double-check with.

In fact, this is so helpful that Babai, Fortnow, and Lund were able to show that MIP = NEXPTIME, the class of all problems solvable by a nondeterministic machine in exponential time, a very large class. NEXPTIME contains PSPACE, and is believed to strictly contain PSPACE. Adding a constant number of additional provers beyond two does not enable recognition of any more languages. This result paved the way for the celebrated PCP theorem, which can be considered to be a "scaled-down" version of this theorem.

MIP also has the helpful property that zero-knowledge proofs for every language in NP can be described without the assumption of one-way functions that IP must make. This has bearing on the design of provably unbreakable cryptographic algorithms. Moreover, a MIP protocol can recognize all languages in IP in only a constant number of rounds, and if a third prover is added, it can recognize all languages in NEXPTIME in a constant number of rounds, showing again its power over IP.

It is known that for any constant k, a MIP system with k provers and polynomially many rounds can be turned into an equivalent system with only 2 provers, and a constant number of rounds.

PCP

While the designers of IP considered generalizations of Babai's interactive proof systems, others considered restrictions. A very useful interactive proof system is PCP(f(n), g(n)), which is a restriction of MA where Arthur can only use f(n) random bits and can only examine g(n) bits of the proof certificate sent by Merlin (essentially using random access).

There are a number of easy-to-prove results about various PCP classes. PCP(0, poly), the class of polynomial-time machines with no randomness but access to a certificate, is just NP. PCP(poly, 0), the class of polynomial-time machines with access to polynomially many random bits, is co-RP. Arora and Safra's first major result was that PCP(log, log) = NP; put another way, if the verifier in the NP protocol is constrained to choose only O(log n) bits of the proof certificate to look at, this won't make any difference as long as it has O(log n) random bits to use.

Furthermore, the PCP theorem asserts that the number of proof accesses can be brought all the way down to a constant. That is, NP = PCP(log n, O(1)). They used this valuable characterization of NP to prove that approximation algorithms do not exist for the optimization versions of certain NP-complete problems unless P = NP. Such problems are now studied in the field known as hardness of approximation.
