CSCI 341 Theory of Computation

Fall 2025, with Schmid

Computable and Polynomial Time Reductions

Recall that our main technique for proving that a language is undecidable is to reduce a problem we already know is undecidable to it. In one example, we reduced the halting problem (\(L_{Halt}\)) to the halting-on-empty problem (\(L_\varepsilon\)). The technique was to introduce a "meddler program", a program that rewrote some of the encoding of a Turing machine to transform it from an instance of one problem into an instance of another. This "meddler program" was itself a Turing program: it was implemented by a state in a Turing machine. These reduction tools, which transform one problem into another, are our main concern today. While the following definition can also be formulated for general computational problems (functions \(f \colon S_1 \to S_2\)), we stick to decision problems for now.

(Computable Reducer) Let \(L_1, L_2 \subseteq A^*\) be languages. A reducer of \(L_1\) to \(L_2\) is a total function (i.e., defined on all strings!) \(r \colon A^* \to A^*\) such that \[ w \in L_1 \text{ if and only if } r(w) \in L_2 \] We write \(r \colon L_1 \preceq L_2\) if \(r\) is a reducer of \(L_1\) to \(L_2\). A reducer is computable if there is a Turing machine \(\mathcal T\) with a state \(x\) such that \(\mathcal T_x = r\).

As was just mentioned, computable reducers came up earlier when we were proving undecidability results. Each of the meddler programs used there is an example of a computable reducer, so we encourage the reader to go back and take another look at those proofs. But for now, let us see a quick example.

(Reducers Reduce) Let \(L_1,L_2 \subseteq A^*\) be languages. If there is a reducer \(r \colon L_1 \preceq L_2\) and \(L_2\) is decidable, then \(L_1\) is decidable.
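The proof of this lemma is just function composition: to decide whether \(w \in L_1\), compute \(r(w)\) and ask the decider for \(L_2\). Here is a minimal Python sketch of that idea (our own translation, not the course's BuckLang; the concrete reducer sorts the input's characters, which happens to reduce \(L_=\) to \(L_{nn}\)):

```python
def make_decider_for_L1(reducer, decide_L2):
    """Given a reducer r : L1 <= L2 and a decider for L2, build a
    decider for L1.  Since w is in L1 iff r(w) is in L2, deciding
    membership of r(w) in L2 answers membership of w in L1."""
    return lambda w: decide_L2(reducer(w))

# Hypothetical instance: L= (equal counts of 0s and 1s) reduces to
# Lnn = {0^n 1^n}.  Sorting the characters moves all 0s before all 1s
# while preserving their counts.
reducer = lambda w: "".join(sorted(w))

def decide_Lnn(w):
    # w is in Lnn iff it is k zeros followed by k ones for some k
    k, r = divmod(len(w), 2)
    return r == 0 and w == "0" * k + "1" * k

decide_Leq = make_decider_for_L1(reducer, decide_Lnn)
```

For example, `decide_Leq("0110")` is `True` while `decide_Leq("001")` is `False`.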

It's a good exercise to see that this is also true about recognizability.

(Recognizability and Reducers) Let \(L_1,L_2 \subseteq A^*\) be languages. Show that if there is a reducer \(r \colon L_1 \preceq L_2\) and \(L_2\) is recognizable, then \(L_1\) is recognizable.

Reducers are very practical, especially when we are faced with difficult problems (OK, but you could have guessed that!).

(Counting Pairs Suffices) Consider the two (hopefully familiar) examples: \[\begin{gathered} L_= = \{w \in \{0,1\}^* \mid \text{\(w\) has the same number of \(0\)s as \(1\)s}\}\\ L_{nn} = \{0^n 1^n \mid n \in \mathbb N\} \end{gathered}\] As you can probably imagine, the second language is much easier to design a decision procedure for than the first. In fact, here is such a decision procedure: eqnm.buck. It would be nice to be able to reuse that decision procedure to design a decision procedure for the first language. This is where a reducer could step in: we are going to design a Turing machine \(\mathcal S\) with a state \(\mathtt{01move}\) such that \[ \mathcal S_\mathtt{01move}(w) = 0^m1^n \] where \(w\) has \(m\) \(0\)s and \(n\) \(1\)s. Now, if \(\mathcal T\) is a Turing machine with a state \(\mathtt{eqnm}\) such that \[ \mathcal T_{\mathtt{eqnm}}(0^m1^n) = \begin{cases} 1 &\text{if \(n = m\)} \\ 0 &\text{if \(n \neq m\)} \end{cases} \] (i.e., \(\mathtt{eqnm}\) is a decision procedure for \(L_{nn}\)), then composing the two programs gives a decision procedure for \(L_=\). \[ \mathcal T_{\mathtt{eqnm}}(\mathcal S_{\mathtt{01move}}(w)) = \mathcal T_{\mathtt{eqnm}}(0^m1^n) = \begin{cases} 1 &\text{if \(n = m\)} \\ 0 &\text{if \(n \neq m\)} \end{cases} \] This gives an explicit program \(\mathtt{01move}\) that implements the reduction of the decision problem \(L_=\) to the decision problem \(L_{nn}\). In formal notation, \(\mathcal S_\mathtt{01move} \colon L_= \preceq L_{nn}\).

For the heck of it, let us actually design this reducer. The basic idea is going to be to "move" all of the 0s to the end of the tape, and then "move" all of the 1s to the end of the tape. In a bit more detail, \(\mathtt{01move}\) operates like this:
  1. Write a # to the end of the tape to mark that it is the end of the original input.
  2. Find a 0 in the original input. Replace it with an @ symbol and then write a 0 to the end of the tape.
  3. Search for another 0 in the original input. If there are no more 0s in the original input, rewind and go to the next step. Otherwise, go back to 2.
  4. Find a 1 in the original input. Replace it with an @ symbol and then write a 1 to the end of the tape.
  5. Search for another 1 in the original input. If there are no more 1s in the original input, rewind and go to the next step. Otherwise, go back to 4.
  6. Erase all of the @s and the # from the tape.
An explicit implementation in BuckLang is here: zero_one_mover.buck.
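For readers who prefer a higher-level sketch, the same six-step algorithm can be simulated in Python, with a list standing in for the tape (this is our own translation, not the BuckLang file above):

```python
def zero_one_mover(w):
    """Sketch of the 01move tape algorithm: append '#' to mark the end
    of the original input, copy each 0 and then each 1 to the end of
    the tape (marking copied cells with '@'), and finally erase the
    markers and the '#'."""
    tape = list(w) + ["#"]          # step 1: mark the end of the input
    end = len(tape)
    for target in ("0", "1"):       # steps 2-3 handle 0s, steps 4-5 handle 1s
        for i in range(end - 1):    # scan only the original input region
            if tape[i] == target:
                tape[i] = "@"       # mark the cell as already copied
                tape.append(target) # write the symbol to the end of the tape
    # step 6: erase all of the @s and the #
    return "".join(c for c in tape if c not in ("@", "#"))
```

For example, `zero_one_mover("0110")` returns `"0011"`: all the 0s, then all the 1s, with counts preserved.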

Something else significant about this example is that the reducer runs in polynomial time.

Polynomial Time Reducers

Reducers tell us which problems can be reduced to which other problems. But, just like how we can measure how difficult a problem is to solve, we can use the same tools to express how difficult it is to reduce one problem to another.

(Polynomial Time Reductions) Let \(L_1, L_2 \subseteq A^*\) be languages, and let \(\mathcal T\) at state \(x\) implement a reducer \(\mathcal T_x \colon L_1 \preceq L_2\). We say that \(\mathcal T_x\) is a polynomial time reducer if \(\mathcal T\) at \(x\) runs in polynomial time, and in that case that \(L_1\) is polynomial time reducible to \(L_2\).

The main observation about polynomial time reduction is the following statement.

Let \(L_1,L_2 \subseteq A^*\) be languages, and let \(\mathcal S\) at state \(\mathtt{red}\) implement a polynomial time reducer \(\mathcal S_{\mathtt{red}} \colon L_1 \preceq L_2\). If \(L_2 \in \mathsf P\), then \(L_1 \in \mathsf P\).
Let \(\mathcal T = (Q, A, \delta)\) at the state \(x \in Q\) be a decision procedure for \(L_2\) that runs in polynomial time. Then, as we observed before, the composition of \(\mathcal T_x\) with \(\mathcal S_\mathtt{red}\) is a decision procedure for \(L_1\). Now, suppose \(\mathcal T\) at \(x\) runs in \(\mathcal O(n^k)\)-time and \(\mathcal S\) at \(\mathtt{red}\) runs in \(\mathcal O(n^l)\)-time. Since a Turing machine writes at most one symbol per step, the output \(\mathcal S_{\mathtt{red}}(w)\) has length \(\mathcal O(n^l)\) when \(\mathsf{len}(w) = n\). Running \(\mathcal T\) at \(x\) on that output therefore takes \(\mathcal O((n^l)^k) = \mathcal O(n^{kl})\)-time, so the composition (they run one after the other) runs in \(\mathcal O(n^l + n^{kl}) = \mathcal O(n^{kl})\)-time. This is still a polynomial bound, so \(L_1\) is decidable in polynomial time, i.e., \(L_1 \in \mathsf{P}\).
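One subtlety worth internalizing is that composing polynomial-time programs can multiply the exponents, because a reducer can lengthen its input. A tiny Python sanity check (our own illustration, not from the notes) with a hypothetical reducer whose output has length \(n^2\):

```python
def r(w):
    """Hypothetical reducer that expands an input of length n to
    length n^2 (it repeats the string len(w) times)."""
    return w * len(w)

# On input of length n, r's output has length n^2, so a decider that
# scans its input quadratically would do (n^2)^2 = n^4 work -- and
# composing r with itself already yields length n^4.
n = 10
assert len(r("a" * n)) == n ** 2
assert len(r(r("a" * n))) == n ** 4
```

This is why the bound in the proof above is \(\mathcal O(n^{kl})\) rather than \(\mathcal O(n^{\max\{k,l\}})\); polynomials are closed under this composition, so the conclusion \(L_1 \in \mathsf P\) is unharmed.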
(Counting Pairs Suffices Again) Let us return to the example from before. In that example, we exhibited a reducer \(\mathtt{01move} \colon L_= \preceq L_{nn}\), and gave an explicit description of the steps involved in the implementation. Let us take a moment to analyze the algorithm implemented by \(\mathtt{01move}\).

Let \(n = \mathsf{len}(w)\) for some given input word \(w \in \{0,1\}^*\). All 6 of the steps in the implementation of \(\mathtt{01move}\) run in \(\mathcal O(n)\)-time, individually. Steps 2-3 and 4-5 are loops that repeat at most \(n\) times each, so the whole algorithm runs in \(\mathcal O(n + n^2 + n^2 + n) = \mathcal O(n^2)\)-time. Hence, \(\mathtt{01move}\) runs in polynomial time. It is also true that \(\mathtt{eqnm}\) runs in polynomial time (you are about to show this), so we conclude from the lemma above that \(L_= \in \mathsf{P}\).
(Left-Right Rate) What is the asymptotic growth rate of the worst-case runtime of \(\mathtt{eqnm}\)? What does that make the asymptotic runtime of our decision procedure above, \(\mathtt{eqnm{.}01move}\)?
(Triple?) Consider the language \[ L_+ = \{a^n b^m c^{n + m} \mid n,m \in \mathbb N\} \subseteq \{a,b,c\}^* \] Design a computable reducer \(\mathcal S_{\mathtt{red}} \colon L_+ \preceq L_{nn}\) that runs in polynomial time. Explain why this implies that \(L_+ \in \mathsf{P}\).
(Composing Reductions Again) Let \(L_1,L_2, L_3 \subseteq A^*\) be languages. Let \(r_1 \colon L_1 \preceq L_2\) and \(r_2 \colon L_2 \preceq L_3\). Prove that if \(r_1\) and \(r_2\) are computable in polynomial time, then \(r_2 \circ r_1 \colon L_1 \preceq L_3\) is a polynomial time reduction. Conclude that if \(L_3 \in \mathsf{P}\), then \(L_1 \in \mathsf{P}\).