One of the seven Millennium Prize Problems stated by the Clay Mathematics Institute is the P vs. NP problem, which asks: is P = NP? What are P and NP exactly? In this post, I’ll go over some basic computational complexity classes: P, NP, co-NP, PSPACE, and known relationships between them, as well as intuition behind each class, without getting too technical. After reading, you should understand what the P vs. NP problem is, and you should have an idea of how the basic complexity classes are interrelated.

To start, we need to agree on a model of Turing machines. Usually, theorists work with a single-tape model; in any case, any reasonable variant of the Turing machine can simulate any other with at most polynomial time overhead. We also distinguish between **deterministic** Turing machines and **non-deterministic** Turing machines. The difference is that from each configuration, a non-deterministic Turing machine can go on to more than one possible configuration. The precise formulation of this is that the transition function of a non-deterministic Turing machine maps configurations to sets of configurations rather than single configurations. Furthermore, without loss of generality, we can assume that every configuration leads to exactly two possible configurations. This doesn’t really correspond to any real-world physical model; it’s just a theoretical construction. Moreover, a non-deterministic machine accepts if and only if *at least one* of its branches accepts. It turns out that non-determinism doesn’t let Turing machines recognize any more languages. This is because any non-deterministic Turing machine can be simulated by a deterministic Turing machine (with potentially exponential time overhead) that tries all branches of the non-deterministic machine “in parallel”, using breadth-first search, for example.
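To make that breadth-first simulation concrete, here’s a toy sketch (my own illustration, not the author’s construction): `step` and `accepting` are assumed stand-ins for a machine’s transition function and accept test, and configurations are just hashable values.

```python
from collections import deque

# Deterministically simulate a non-deterministic machine by breadth-first
# search over configurations. `step` maps a configuration to the SET of
# successor configurations (the non-deterministic transition function);
# `accepting` tests whether a configuration is accepting. The search may
# visit exponentially many configurations -- the exponential overhead
# mentioned above.
def nd_accepts(start, step, accepting, max_steps=10000):
    queue = deque([start])
    seen = {start}
    for _ in range(max_steps):
        if not queue:
            return False          # every branch rejected
        config = queue.popleft()
        if accepting(config):
            return True           # at least one branch accepts
        for nxt in step(config):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

For example, a machine that non-deterministically guesses three bits “accepts” exactly when some branch produces the target string, and the BFS finds that branch.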

Now I introduce some definitions and notation. An f(n)-time (or f(n)-space) Turing machine is one that uses at most f(n) steps (or tape cells) on any input of length n. Define TIME(f(n)) to be the set of languages decidable by an O(f(n))-time deterministic Turing machine, and NTIME(f(n)) to be the set of languages decidable by an O(f(n))-time non-deterministic Turing machine. Define SPACE(f(n)) and NSPACE(f(n)) analogously. Now I can define P, NP, co-NP, PSPACE, and more:
- P = ⋃ₖ TIME(nᵏ)
- NP = ⋃ₖ NTIME(nᵏ)
- PSPACE = ⋃ₖ SPACE(nᵏ)
- NPSPACE = ⋃ₖ NSPACE(nᵏ)
- EXPTIME = ⋃ₖ TIME(2^(nᵏ))

Furthermore, for any class C, we define co-C to be the set of languages whose complements are in C, which gives us definitions for co-P, co-NP, co-PSPACE, and co-NPSPACE. So far, I’ve defined a lot of complexity classes, but it turns out a lot of these are the same.

First of all, let’s get some intuition for these. P is the set of problems that can be solved by polynomial-time deterministic algorithms. NP is the set of problems that can be solved by polynomial-time non-deterministic algorithms. What does this actually mean? A non-deterministic algorithm can “guess” the right step to take next. From every configuration, there are two configurations to go to, and as long as one path of configurations leads to an accept state, the algorithm accepts. What this means is that if we get lucky, that is, if the algorithm guesses the “correct” configuration to go to at every step, then it can reach the answer in polynomial time. An equivalent definition of NP is the set of problems for which a proposed solution can be verified in polynomial time. Examples will come soon. PSPACE is the set of problems that can be solved using polynomial space. EXPTIME is the set of problems that can be solved in exponential time.
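The “verifiable in polynomial time” view is easy to illustrate. Here’s a sketch (example mine, not from the post) using SUBSET-SUM: finding a subset of the numbers that hits the target may take exponential search, but *checking* a proposed subset, the certificate, takes one linear pass.

```python
# Verify a SUBSET-SUM certificate in polynomial time. The certificate is
# a list of indices into `nums`; the check is just distinctness, bounds,
# and one summation -- all polynomial in the input size.
def verify_subset_sum(nums, target, certificate):
    if len(set(certificate)) != len(certificate):
        return False                      # indices must be distinct
    if any(i < 0 or i >= len(nums) for i in certificate):
        return False                      # indices must be in range
    return sum(nums[i] for i in certificate) == target
```

A non-deterministic machine would “guess” the certificate and then run exactly this kind of cheap check.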

Why do we lump all the polynomials together? Why not distinguish between linear time problems, quadratic time, etc.? Well, like I mentioned at the beginning, we assumed one particular model of the Turing machine. However, if we change the Turing machine, our run-time and space may change by a polynomial amount. Therefore what’s linear on one machine could be quadratic on another. We want robust classes — classes that are the same no matter what model of Turing machine we choose, which is why we lump all the polynomials together.

Some relationships follow pretty straightforwardly from the definitions. For example, P is contained in NP, since any deterministic TM can be simulated by a non-deterministic one with basically no overhead (you can just view a DTM as an NDTM with one branch at each configuration). Also, PSPACE is contained in EXPTIME. The idea is that if the space used by a TM is bounded, then the number of possible configurations it has is at most exponential in the space bound. The exponential part comes from counting the number of possible strings you could have on the tape (the number of letters in the alphabet raised to the power of the number of cells). Now, if a DTM ever encounters the same configuration twice, it must be stuck in an infinite loop. Since a DTM that decides a PSPACE problem halts on every input, it never repeats a configuration, and hence runs for at most exponentially many steps, since that’s an upper bound on the number of possible configurations.
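The count is worth spelling out. A configuration is determined by the machine’s state, the head position, and the tape contents, so with q states, a tape alphabet of size g, and space bound s there are at most q · s · gˢ configurations, which is 2^O(s). A quick sketch with made-up numbers (mine, purely illustrative):

```python
# Upper bound on configurations of a space-bounded machine:
# (choice of state) * (choice of head position) * (choice of tape contents)
def config_bound(q, g, s):
    return q * s * g**s

# Even a tiny machine (10 states, binary tape alphabet) has an
# astronomical bound once the space bound s grows to 100 cells.
big = config_bound(10, 2, 100)
```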

Moreover, for deterministic classes C like P, PSPACE, and EXPTIME, in fact C = co-C. This is because you can just construct a DTM which runs the original DTM and then spits out the opposite answer. If L is a language and M decides L, then construct M′ to spit out the opposite answer of M. Now, M′ decides the complement of L, since M′ accepts x if and only if M rejects x, if and only if x is not in L.

Note, however, that this trick doesn’t work for non-deterministic machines, and so we can’t deduce that NP is the same as co-NP. In fact, their relationship is still not known! The reason this trick doesn’t work is that a non-deterministic machine might have *some* accepting branches and *some* rejecting branches for certain inputs. In this case, constructing a machine that reverses the answer will have *some* rejecting branches, corresponding to the accepting branches of the original machine, and *some* accepting branches corresponding to the rejecting branches of the original machine. In other words, on a given input, if the original machine accepts, the reversed-answer machine might ALSO accept, which is why the trick doesn’t work.

Another neat fact, known as **Savitch’s Theorem**, is that non-deterministic Turing machines can be simulated by deterministic Turing machines with at most quadratic space overhead. This implies that PSPACE = NPSPACE! Finally, the **time hierarchy theorem** tells us that allowing a DTM more time (by more than a log factor) strictly enlarges the set of problems it can decide. This immediately tells us that P is strictly contained in EXPTIME.

To deduce that NP is contained in PSPACE, we use the concept of completeness. For a given complexity class C, a problem is C-hard if, intuitively, it’s at least as hard as every problem in C. Technically, a language L is C-hard if every language in C is **polynomial-time reducible** to L. A is polynomial-time reducible to B, written A ≤p B, if there exists a polynomial-time computable function f such that x ∈ A if and only if f(x) ∈ B, for all x. Intuitively, this means that B is at least as hard as A, since if we have a polynomial-time algorithm for B, we can use it to solve A by first transforming the input using f (in polynomial time), then feeding the result into our algorithm for B and returning the answer. A problem is C-**complete** if it is C-hard *and* it is itself in C. This intuitively means it is one of the hardest problems in C, and any polynomial-time algorithm for a C-complete language gives us a polynomial-time algorithm for *every* language in C.
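A tiny example of such a reduction f (a standard textbook one, not from the post): a graph G has an independent set of size k if and only if G has a vertex cover of size |V| − k, because S is independent exactly when V minus S covers every edge. So the map (G, k) → (G, |V| − k) reduces INDEPENDENT-SET to VERTEX-COVER.

```python
# Polynomial-time reduction from INDEPENDENT-SET to VERTEX-COVER.
# `graph` is a dict mapping each vertex to its set of neighbours.
# The mapping preserves yes/no answers: S is an independent set of size k
# iff the remaining vertices form a vertex cover of size len(graph) - k.
# Computing the new instance is obviously polynomial time.
def reduce_indset_to_vertexcover(graph, k):
    return graph, len(graph) - k
```

Note how cheap the transformation is compared to actually *solving* either problem; that asymmetry is the whole point of reductions.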

The “canonical” NP-complete problem is SAT, the language of satisfiable boolean formulas. A boolean formula is just an expression involving variables and the boolean operators OR, AND, and NOT, and it is satisfiable if there is an assignment of truth values to the variables which makes the entire expression evaluate to TRUE. This is in NP, since given an assignment of truth values, we can verify that it satisfies the boolean formula in polynomial time by simply plugging in the values. This problem is also NP-complete, and the proof is way too technical to write out here, so I’ll just give the main idea. To reduce any NP problem L to SAT, you can construct a boolean formula which “simulates” the machine for L, in a manner reminiscent of how modern electronic circuits simulate computations (and are really just evaluating boolean expressions). The boolean formula constructed will be satisfiable if and only if the simulated machine accepts the input.
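The “plugging in the values” step looks like this (encoding mine, restricted to CNF formulas for simplicity): a formula is a list of clauses, each clause a list of integer literals where i means variable i and −i means NOT variable i; the certificate is an assignment from variables to booleans.

```python
# Verify a SAT certificate in polynomial time: one scan over the formula.
# A CNF formula is satisfied when every clause contains at least one
# literal made true by the assignment.
def verify_sat(clauses, assignment):
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
```

For instance, the formula (x1 OR x2) AND (NOT x1 OR x3) is satisfied by x1 = TRUE, x3 = TRUE, and the verifier confirms that instantly.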

It turns out SAT is in PSPACE. You can solve it using polynomial space (linear, in fact) by successively plugging in possible truth assignments. If there are n variables, you just try all 2^n possible assignments, but at any given time you only need to keep track of n values. But since SAT is NP-complete, and polynomial-time reductions can be carried out in polynomial space, the entire set NP is contained in PSPACE. In retrospect, this is not surprising at all. Intuitively, space should be more powerful than time, since we can re-use space but we can’t re-use time. So, in summary, we have
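That brute-force idea can be sketched directly (encoding mine: a CNF formula as a list of clauses of integer literals, i for variable i and −i for its negation). Exponential time, but only linear extra space, since each assignment overwrites the last.

```python
from itertools import product

# Decide SAT using linear space: enumerate the 2**n truth assignments
# one at a time, keeping only the current assignment (n bits) in memory.
def sat_in_linear_space(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True   # found a satisfying assignment
    return False          # all 2**n assignments fail
```

`itertools.product` yields assignments lazily, so nothing proportional to 2^n is ever stored.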

P ⊆ NP ⊆ PSPACE ⊆ EXPTIME,

and moreover we know that **at least one** of the containments is strict, but we don’t know which! Most theoreticians believe that every one of them is strict.

The interesting thing about NP and co-NP is that intuitively they should be different, but we really don’t know for sure. The complement of SAT, the language of unsatisfiable boolean formulas, is co-NP-complete. We don’t know if it’s in NP. While it’s easy to verify that a formula is satisfiable by exhibiting a satisfying assignment, there seems to be no easy way of verifying that a formula is *not* satisfiable. A problem that is in both NP and co-NP is PRIMES, the language of all prime numbers. It’s in co-NP because it’s easy to verify that a number is *not* prime by simply exhibiting a proper divisor. (Showing that it’s in NP takes more work, since a certificate of primality is much less obvious.)
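The co-NP certificate for PRIMES really is that simple (sketch mine): to certify that n is *not* prime, exhibit a proper divisor, and the check is a single division.

```python
# Verify a certificate of compositeness: `divisor` must be a proper
# divisor of n, i.e. strictly between 1 and n and dividing n evenly.
def verify_composite(n, divisor):
    return 1 < divisor < n and n % divisor == 0
```

So 7 certifies that 91 is composite (91 = 7 · 13), while no number can play that role for 13.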

I hear most theoreticians believe that P is not equal to NP. But what if P = NP? What would happen? Well, for one, it would mean it takes barely any more time to find a solution than it does to check one. We would be able to automate many processes so far believed to be difficult. Pretty much every cryptography system would fall apart since it would not take much longer to crack a key than it would to create it. A lot of good would come along with some bad. But this speculation is most likely pointless. Although the relationship hasn’t been proven, *intuitively* it seems clear that the two sets should not be equal, the same way the Jordan curve theorem was intuitively clear but was not proven until the early 1900s.

This post is already running much longer than I planned, so I’m going to save discussion of the interesting-ness of PSPACE for a follow-up post. I find PSPACE particularly interesting because it naturally captures the difficulty of many two-player games.

As always, leave a comment if you have questions or if you catch a mistake. Or if you disagree with me about P vs. NP.

-Alan

hi, i have another question. let’s say we have two languages, L_1 and L_2, which are in NP ∩ co-NP (the intersection of NP and co-NP – like the PRIMES language you have already shown). Ok, so you have two languages, each consisting of some words x. Now, let’s say we have the language L_3 = {x | x is a word in L_1 OR L_2}. How can you show that L_3 is also in NP ∩ co-NP?

By: rob on March 5, 2012 at 11:22 AM

Great post! I’ve spent a while looking for a clear intro to all this!

By: Akroy on May 13, 2012 at 8:27 PM

[…] Concrete Nonsense […]

By: Confused with the definition of NP - MathHub on May 13, 2016 at 3:10 PM