It’ll probably work out: improved list-decoding through random operations

AR’s research supported in part by NSF CAREER grant CCF-0844796 and NSF grant CCF-1161196. MW’s research supported in part by a Rackham predoctoral fellowship.
Department of Computer Science and Engineering, University at Buffalo, SUNY
Department of Mathematics, University of Michigan
Abstract
In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability, by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding:

We show the existence of binary codes that are combinatorially list-decodable from a $1/2 - \varepsilon$ fraction of errors with optimal rate that can be encoded in linear time.

We show that any code with constant relative distance, when randomly folded, is combinatorially list-decodable from a $1 - \varepsilon$ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes.

We show that any code which is list-decodable with suboptimal list sizes has many subcodes which have near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace evasive sets have been used to reduce list sizes of codes that achieve list-decoding capacity.
The first two results follow from the techniques of Wootters (STOC 2013) and Rudra and Wootters (STOC 2014); one of the main technical contributions of this paper is to demonstrate the generality of the techniques in those earlier works. The last result follows from a simple direct argument.
1 Introduction
The goal of error-correcting codes is to enable communication between a sender and a receiver over a noisy channel. For this work, we will think of a code of block length $n$ and size $M$ over an alphabet $\Sigma$ as an $n \times M$ matrix $C$ over $\Sigma$, where each column in the matrix is called a codeword. The sender and receiver can use $C$ for communication as follows. Given one of $M$ messages—which we think of as indexing the columns of $C$—the sender transmits the corresponding codeword over a noisy channel. The receiver gets a corrupted version of the transmitted codeword and aims to recover the originally transmitted codeword (and hence the original message). Two primary quantities of interest are the fraction $\rho$ of errors that the receiver can correct (the error rate), and the redundancy of the communication, as measured by the rate $R = \log_{|\Sigma|}(M)/n$ of the code. The central goal is to design codes so that both $\rho$ and $R$ are large.
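As a toy illustration of this matrix view (our own example, not a construction from this paper), the following Python snippet builds the code matrix of a 3-fold binary repetition code and computes its rate and minimum distance:

```python
import math

# Hypothetical toy example: the 3-fold binary repetition code on 2-bit
# messages has M = 4 codewords of block length n = 6, stacked as the
# columns of an n x M matrix.

def repetition_encode(msg, reps=3):
    """Repeat each message symbol `reps` times."""
    return [b for b in msg for _ in range(reps)]

messages = [(0, 0), (0, 1), (1, 0), (1, 1)]
codewords = [repetition_encode(m) for m in messages]  # columns of C

n = len(codewords[0])        # block length
M = len(codewords)           # number of codewords
rate = math.log2(M) / n      # rate R = log_q(M) / n, here q = 2

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

# The minimum distance d governs unique decoding: up to (d-1)//2 errors.
d = min(hamming(codewords[i], codewords[j])
        for i in range(M) for j in range(i + 1, M))
```

Here the rate is $2/6 = 1/3$ and the minimum distance is $3$, so unique decoding corrects a single error.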
A common approach to this goal is to first design a code matrix $C$ that is “somewhat good,” and to modify it to obtain a better code $C'$. Many of these modifications correspond to row or column operations on the matrix $C$: for example, dropping rows or columns, taking linear combinations of rows or columns, and combining rows or columns into “mega” rows or columns. In this work, we study the effects of such row and column operations on the list-decodability of the code $C$.
List decoding.
In the list decoding problem [Eli57, Woz58], the receiver is allowed to output a small list of codewords that includes the transmitted codeword, instead of having to pin down the transmitted codeword exactly. The remarkable fact about list decoding is that the receiver may correct twice as many adversarial errors as is possible in the unique decoding problem. Exploiting this fact has led to many applications of list decoding in complexity theory and in particular, pseudorandomness. (See the survey by Sudan [Sud00] and Guruswami’s thesis [Gur04] for more on these applications.)
Perhaps the ultimate goal of list decoding research is to solve the following problem.
Problem 1.
For $\rho \in (0, 1 - 1/q)$ and $\varepsilon > 0$, construct $q$-ary codes with rate $1 - H_q(\rho) - \varepsilon$ that can correct a $\rho$ fraction of errors with linear-time encoding and linear-time decoding. (One needs to be careful about the machine model when one wants to claim linear runtime. In this paper we consider the RAM model. For the purposes of this paper, it is fine to consider linear time to mean a linear number of operations and the alphabet size to be small, say polynomial in $n$.) Above, $H_q$ denotes the $q$-ary entropy, and $1 - H_q(\rho)$ is known to be the optimal rate.
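The $q$-ary entropy and the capacity expression $1 - H_q(\rho)$ are easy to compute numerically; the following Python sketch (our own helper functions) may help in getting a feel for the trade-off:

```python
import math

def H_q(x, q):
    """q-ary entropy: H_q(x) = x log_q(q-1) - x log_q(x) - (1-x) log_q(1-x)."""
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return math.log(q - 1, q)
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def capacity_rate(rho, q):
    """The optimal (list-decoding capacity) rate 1 - H_q(rho)."""
    return 1 - H_q(rho, q)
```

For example, $H_2(1/2) = 1$, so binary codes correcting a $1/2$ fraction of errors must have vanishing rate; at $\rho = 0.4$ the capacity rate is already only about $0.03$.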
Even though much progress has been made in algorithmic list decoding, we are far from answering the problem above in its full generality. If we are happy with polynomial-time encoding and decoding (and large enough alphabet size), then the problem was solved by Guruswami and Rudra [GR08], and improved by several follow-up results [GW13, Kop12, GX12, GX13, DL12, GK13]. However, even with all of this impressive work on algorithmic list decoding, the landscape of list-decoding remains largely unexplored. First, while the above results offer concrete approaches to Problem 1, we do not have a good characterization of which codes are even combinatorially list-decodable at near-optimal rate. Second, while we have polynomial-time encoding and decoding, linear time remains an open problem. In this work, we make some progress in both of these directions.
New codes from old: random operations.
In this paper, we develop a framework to study the effect of random operations on the listdecodability of a code. Specific instantiations of these operations are a common approach to Problem 1. For example,

In the Folded Reed-Solomon codes mentioned above, one starts with a Reed-Solomon code and modifies it by applying a folding operation to each codeword. In the matrix terminology, we bunch up rows to construct “mega” rows.

In another example mentioned above [GX13], one starts with a Reed-Solomon code and picks certain positions in the codeword, and also throws away many codewords—that is, one applies a puncturing operation to the codewords, and then considers a subcode. In matrix terminology, we drop rows and columns.
However, in all of these cases, the operations used are very structured; in the final two, the rate of the code also takes a hit. (It must be noted that in the work of [Tre03, IJKW10] the main objective was to obtain sublinear-time list decoding, and the suboptimal rate is not crucial for their intended applications.) It is natural to ask how generally these operations can be applied. In particular, if we consider random versions of the operations above, can we achieve the optimal rate/error rate/list size trade-offs? If so, this provides more insight about why the structured versions work.
Recently the authors showed in [RW14] that the answer is “yes” for puncturing of the rows of the code matrix: if one starts with any code with large enough distance and randomly punctures the code, then with high probability the resulting code is nearly optimally combinatorially list-decodable. In this work, we extend those results to other operations.
1.1 Our contributions and applications
The contributions of this paper are twofold. First, the goal of this work is to improve our understanding of (combinatorial) list-decoding. What is it about these structured operations that makes them succeed? How far can they be generalized? Of course, this first point may seem a bit philosophical without some actual deliverables. To that end, we show how to use our framework to address some open problems in list decoding. We outline some applications of our results below.
In order to state our main results, we pause briefly to set the quantitative stage. There are two main parameter regimes for list-decoding, and we will focus on both in this paper. In the first regime, corresponding to the traditional communication scenario, the error rate is some constant $\rho \in (0, 1 - 1/q)$. In the second regime, motivated by applications in complexity theory, the error rate is very large. For $q$-ary codes, these applications require correcting a $1 - 1/q - \varepsilon$ fraction of errors, for small $\varepsilon > 0$. In both settings, the best possible rate is given by
$$R^* = 1 - H_q(\rho),$$
where $H_q$ denotes the $q$-ary entropy. In the second, large-$\rho$, regime, we may expand $H_q$ to obtain an expression of the form $R^* = \Theta_q(\varepsilon^2)$.
For complexity applications it is often enough to design a code with rate polynomial in $\varepsilon$ with the same error-correction capability.
1.1.1 Linear-time encoding with near-optimal rate.
We first consider the special case of Problem 1 that concentrates on the encoding complexity for binary codes in the high error regime:
Question 1.
Do there exist binary codes with rate $\tilde{\Omega}(\varepsilon^2)$ that can be encoded in linear time and are (combinatorially) list-decodable from a $1/2 - \varepsilon$ fraction of errors?
Despite much progress on related questions, obtaining linear-time encoding with (near-)optimal rate is still open. More precisely, for $q$-ary codes (for $q$ sufficiently large, depending on $\varepsilon$), Guruswami and Indyk showed that linear-time encoding and decoding with near-optimal rate is possible for unique decoding [GI05]. For list decoding, they prove a similar result, but the rate is exponentially small in $1/\varepsilon$ [GI03]. This result can be used with code concatenation to give a similar result for binary codes (see Appendix B for more details), but it also suffers from an exponentially small rate. If we allow for superlinear-time encoding in Question 1, then it is known that the answer is yes. Indeed, random linear codes will do the trick [ZP82, CGV13, Woo13] and have quadratic encoding time; in fact, near-linear-time encoding with optimal rate also follows from known results. (For example, Guruswami and Rudra [GR10] showed that folded Reed-Solomon codes—which can be encoded in near-linear time—concatenated with random inner codes of at most logarithmic block length achieve the optimal trade-off between rate and fraction of correctable errors.)
Our results.
We answer Question 1 in the affirmative. To do this, we consider the row operation on codes given by taking random XORs of the rows of $C$. We show that this operation yields codes with rate $\tilde{\Omega}(\varepsilon^2)$ that are combinatorially list-decodable from a $1/2 - \varepsilon$ fraction of errors, provided the original code has constant distance and rate. Instantiating this by taking $C$ to be Spielman’s code [Spi96], we obtain a linear-time encodable binary code which is nearly optimally list-decodable.
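As a concrete illustration, the random $t$-wise XOR row operation can be sketched in a few lines of Python (a toy sketch on a code given as a list of binary codeword columns; the function name and parameters are ours, not the paper's):

```python
import random

def random_xor_rows(C, num_rows, t, seed=0):
    """Form C' by taking `num_rows` random t-wise XORs of the rows of C.

    C is a list of codewords (columns); each new row of C' is the XOR of
    t uniformly random positions (with replacement) of the codewords.
    Illustrative sketch of the row operation, not the paper's analysis.
    """
    rng = random.Random(seed)
    n = len(C[0])
    new_rows = []
    for _ in range(num_rows):
        positions = [rng.randrange(n) for _ in range(t)]
        # XOR of the chosen t symbols of each codeword
        new_rows.append([sum(c[i] for i in positions) % 2 for c in C])
    # return the new code as a list of columns
    return [[row[j] for row in new_rows] for j in range(len(C))]
```

Note that the operation is linear, so the all-zeros codeword always maps to all zeros, and for odd $t$ the all-ones codeword maps to all ones.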
1.1.2 The folding operation, and random $t$-wise direct products.
The result of Guruswami and Rudra [GR08] showed that when the folding operation is applied to Reed-Solomon codes, the resulting codes (called folded Reed-Solomon codes) can be list-decoded in polynomial time with optimal rate. The folding operation is defined as follows. We start with a $q$-ary code of length $n$ and a partition of $[n]$ into sets of size $t$, and we end up with a $q^t$-ary code of length $n/t$. Given a codeword $c \in C$, we form a new codeword by “bunching” together the symbols in each partition set and treating them as a single symbol. A formal definition is given in Section 2. For large enough $t$, this results in codes that can be list-decoded from a $1 - \varepsilon$ fraction of errors with optimal rate [GR08, GX12, GX14] when one starts with Reed-Solomon or, more generally, certain algebraic-geometric codes. In these cases, the partition for folding is very simple: consecutive symbols are grouped to form the partition sets.
Folding is a special case of $t$-wise aggregation of symbols. Given a code of length $n$, we may form a new code of length $N$ by choosing subsets $S_1, \ldots, S_N \subseteq [n]$ of size $t$ and aggregating symbols according to these sets. This operation has also been used to good effect in the list-decoding literature: in [GI01, GI03, GI05], the sets are defined using expanders, and the original code is chosen to be list-recoverable. This results in efficiently list-decodable codes, although not of optimal rate. We can also view this $t$-wise aggregation as a puncturing of a $t$-wise direct product (where $N = \binom{n}{t}$ and all sets of size $t$ are included).
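The aggregation and folding maps themselves are easy to write down; the following Python sketch (our own toy helpers) forms the new large-alphabet symbols as tuples, with folding as the special case of a partition into consecutive blocks:

```python
def aggregate(codeword, sets):
    """t-wise aggregation: bunch the symbols indexed by each set into one
    large symbol (a tuple).  The new codeword has one symbol per set."""
    return [tuple(codeword[i] for i in S) for S in sets]

def fold(codeword, t):
    """t-wise folding: aggregation with the standard partition of [n]
    into consecutive blocks of size t (assumes t divides n)."""
    n = len(codeword)
    assert n % t == 0
    return aggregate(codeword, [range(i, i + t) for i in range(0, n, t)])
```

Folding a length-$n$ codeword over $\Sigma$ yields a length-$n/t$ codeword over the larger alphabet $\Sigma^t$, as in the definition above.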
There is a natural intuition for the effectiveness of the folding operation in [GR08, GR09], and for the $t$-wise aggregation of symbols in [GI01, GI03, GI05]. In short, making the symbols larger increases the size of the “smallest corruptible unit,” which in turn decreases the number of error patterns we have to worry about. (See Section 5.2 for more on this intuition.) In some sense, this intuition is the reason that random codes over large alphabets can tolerate more errors than random codes over small alphabets: indeed, an inspection of the proof that random codes attain optimal list-decoding parameters shows that this is the crucial difference. Since a random code over a large alphabet is in fact a folding of a random code over a small alphabet, the story we told above is at work here.
Despite this nice-sounding intuition—which doesn’t use anything specific about the code—the known results mentioned above do not use it, and rely crucially on specific properties of the original codes, and on algorithmic arguments. It is natural to wonder whether the intuition above can be made rigorous, and whether it holds for any original code $C$. In particular,
Question 2.
Can the above intuition be made rigorous? Precisely, are there constants $c, C_0 > 0$ so that for any $\varepsilon > 0$, any code with distance at least $c$ and rate at most $\varepsilon / C_0$ admits a $t$-wise folding (or other $t$-wise aggregation of symbols with $N \geq n/t$) for $t$ depending only on $\varepsilon$, such that the resulting code is combinatorially list-decodable from a $1 - \varepsilon$ fraction of errors?
The first part mimics the parameters of folded Reed-Solomon codes; the second part is for the parameter regime of [GI01, GI03, GI05]. Notice that both requirements (constant distance and rate $O(\varepsilon)$) are necessary. Indeed, if the original code does not have distance bounded below by a constant, it is easy to come up with codes where the answer to the above question is “no.” The requirement on the rate of the original code is needed because folding preserves the rate, and the list-decoding capacity theorem implies that any code that can be list-decoded from a $1 - \varepsilon$ fraction of errors must have rate $O(\varepsilon)$.
Our results.
We answer Question 2 in the affirmative by considering the operation of random $t$-wise aggregation. We show that if $N = n/t$ (the parameter regime for $t$-wise folding), the resulting code is list-decodable from a $1 - \varepsilon$ fraction of errors, as long as the original code has constant distance and rate $O(\varepsilon)$. Our theorems can also handle the case when $N$ is larger, and obtain near-optimal rate at the same time.
1.1.3 Taking subcodes.
The result of Guruswami and Rudra [GR08], even though it achieves the optimal tradeoff between rate and fraction of correctable errors, is quite far from achieving the best known combinatorial bounds on worst-case list sizes. Starting with the work of Guruswami [Gur11], there has been a flurry of work on using subspace evasive subsets to drive down the list size needed to achieve optimal list-decodability [GW13, DL12, GX12, GX13, GK13]. The basic idea in these works is the following: one first shows that some code $C$ has an optimal rate vs. fraction of correctable errors tradeoff, but with a large list size. In particular, this list lies in an affine subspace of low dimension. A subspace evasive subset is a subset that has small intersection with any low-dimensional subspace. Thus, if we use such a subset to pick a subcode of $C$, then the resulting subcode will retain the good list-decoding properties, but now with a smaller worst-case list size. Perhaps the most dramatic application of this idea is due to Guruswami and Xing [GX13], who show that certain Reed-Solomon codes have (nontrivial) exponential list size, and choosing an appropriate subcode with a subspace evasive subset reduces the list size to a constant.
However, the intuition that using a subcode can reduce the worst-case list size is not specifically tied to the algebraic properties of the code (i.e., to Reed-Solomon codes and subspace evasive sets). As above, it is natural to ask whether this intuition holds more broadly.
Question 3.
Given a code, does there always exist a subcode that has the same list decoding properties as the original code but with a smaller list size? In particular, is this true for random subcodes?
Our results.
We answer Question 3 by showing that for any code, a random subcode with rate smaller only by a small additive factor can correct the same fraction of errors as the original code, but with a much smaller list size, as long as the original list size is not too large. Guruswami and Xing [GX13] showed that Reed-Solomon codes defined over (large enough) extension fields, with evaluation points coming from a (small enough) subfield, have a nontrivial exponential bound on the list size. Thus, our result implies that random subcodes of such Reed-Solomon codes are optimally list-decodable. (Guruswami and Xing also prove a similar result, since a random subset can be shown to be subspace evasive, so ours gives an arguably simpler alternate proof.) We also complement this result by showing that the tradeoff between the loss in rate and the final list size is the best one can hope for in general. We also use the positive result to show another result: given a code $C$ that is optimally list-decodable up to some error rate, its random subcodes (with the appropriate rate) are with high probability also optimally list-decodable for any smaller error rate.
1.1.4 Techniques
Broadly speaking, the operations we consider fall into two categories: row operations and column operations on the matrix $C$. We use different approaches for the two types of operations.
For row operations (and Questions 1 and 2), we use the machinery of [Woo13, RW14] in a more general context. In those works, the main motivations were specific families of codes (random linear codes and Reed-Solomon codes). In this work, we use the technical framework implicit in those earlier papers to answer new questions. Indeed, one of the contributions of the current work is to point out that these previous arguments in fact apply very generally. For column operations, our results follow from a few simple direct arguments (although the construction for the lower bound requires a bit of care).
Remark 4.
We will specifically handle all row operations on the code matrix mentioned at the beginning of the introduction. For column operations, we handle only column puncturing (taking random subcodes). For many operations, this is not actually an omission: some of the column-analogues of the row operations we consider are redundant. For example, taking random linear combinations of columns of a linear code has the same distribution as a random column puncturing. We do not handle the bunching up of columns into mega-columns, which would correspond to designing interleaved codes—see Section 2 for a formal definition—and we leave the solution of this problem as an open question.
1.2 Organization
In Section 2, we set up our formal framework and present an overview of our techniques in Section 3. In Section 4, we state and prove our results about the listdecodability of codes under a few useful random operations; these serve to give examples for our framework. They also lay the groundwork for Section 5, where we return to the three applications we listed above, and resolve Questions 1, 2, and 3. Finally, we conclude with some open questions.
2 Setup
In this section, we set notation and definitions, and formalize our notion of row and column operations on codes. Throughout, we will be interested in codes of length $n$ and size $M$ over an alphabet $\Sigma$. Traditionally, a code is a set of $M$ codewords in $\Sigma^n$. As mentioned above, we will instead treat a code $C$ as a matrix in $\Sigma^{n \times M}$, with the codewords as columns. We will abuse notation slightly by using $C$ to denote both the matrix and the set; which object we mean will be clear from context. For a prime power $q$, we will use $\mathbb{F}_q$ to denote the finite field with $q$ elements.
For $x, y \in \Sigma^n$, we will use $d(x, y)$ to denote the Hamming distance between $x$ and $y$, and we will use $\mathrm{agr}(x, y) = n - d(x, y)$ to denote the agreement between $x$ and $y$. We study the list-decodability of $C$: we say that $C$ is $(\rho, L)$-list-decodable if for all $z \in \Sigma^n$, $|\{ c \in C : d(c, z) \le \rho n \}| \le L$. In this work, we will also be interested in the slightly stronger notion of average-radius list-decodability.
Definition 1.
A code $C$ is $(\rho, L)$-average-radius list-decodable if for all $z \in \Sigma^n$ and all sets $\Lambda \subseteq C$ with $|\Lambda| = L + 1$,
$$\sum_{c \in \Lambda} \mathrm{agr}(c, z) \le (1 - \rho)\, n\, (L + 1).$$
Average-radius list-decodability implies list-decodability [GN13, RW14]. Indeed, the mandate of average-radius list-decodability is that any $L + 1$ codewords in $C$ do not agree too much, on average, with their center $z$. On the other hand, standard list-decodability requires that for any $L + 1$ codewords in $C$, at least one does not agree too much with $z$. As the average is always at most the maximum, standard list-decodability follows from average-radius list-decodability.
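To make the two notions concrete, here is a brute-force check of both for tiny binary codes (our own illustrative helpers, using an integer radius in place of $\rho n$ to avoid floating-point comparisons); on any example, the average-radius property implies the plain one:

```python
from itertools import combinations, product

def dist(x, y):
    """Hamming distance."""
    return sum(a != b for a, b in zip(x, y))

def is_list_decodable(C, radius, L):
    """Brute-force (rho, L)-list-decodability over binary centers:
    every center z has at most L codewords within Hamming distance
    `radius` (an integer playing the role of rho * n)."""
    n = len(C[0])
    return all(
        sum(dist(c, z) <= radius for c in C) <= L
        for z in product([0, 1], repeat=n)
    )

def is_avg_radius_list_decodable(C, radius, L):
    """Average-radius variant (cf. Definition 1): for every center z and
    every set of L+1 codewords, the *average* distance to z must exceed
    `radius` (equivalently, the average agreement is below n - radius)."""
    n = len(C[0])
    return all(
        sum(dist(c, z) for c in S) > radius * (L + 1)
        for z in product([0, 1], repeat=n)
        for S in combinations(C, L + 1)
    )
```

For instance, $\{000, 111\}$ satisfies both properties with radius $1$ and $L = 1$, while $\{000, 001\}$ satisfies neither.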
We will create new codes $C' \in (\Sigma')^{N \times M'}$ from original codes $C \in \Sigma^{n \times M}$; notice that we allow the alphabet to change, as well as the size and block length of the code. We will consider code operations which act on the rows and columns of the matrix $C$.
We say that a basic row operation takes a code $C$ and produces a row of a new matrix $C'$: that is, it is a function
$$f : \Sigma^{n \times M} \to (\Sigma')^{M}.$$
Two examples of basic row operations that we will consider in this paper are taking linear combinations of rows or aggregating rows. That is:

When $\Sigma = \mathbb{F}_q$, and for a vector $v \in \mathbb{F}_q^n$, the row operation corresponding to linear combinations of rows is $f_v$, given by
$$f_v(C)_j = \sum_{i=1}^{n} v_i\, C_{i,j} \qquad \text{for } j \in [M].$$

Let $S = \{i_1, \ldots, i_t\} \subseteq [n]$ be a set of size $t$, and let $\Sigma' = \Sigma^t$. Then the row operation corresponding to aggregating rows is $f_S$, given by
$$f_S(C)_j = (C_{i_1, j}, \ldots, C_{i_t, j}) \in \Sigma^t.$$
(Above, we have written $C_{i,j}$ for the $i$-th symbol of the $j$-th codeword, to ease the number of subscripts.)
We will similarly consider basic column operations
$$g : \Sigma^{n \times M} \to \Sigma^{n},$$
which take a code $C$ and produce a new column of a matrix $C'$. Analogous to the row operations, we have the following two examples.

When $\Sigma = \mathbb{F}_q$, and for a vector $u \in \mathbb{F}_q^{M}$, we can consider
$$g_u(C) = \sum_{j=1}^{M} u_j\, c_j,$$
where $c_j$ denotes the $j$-th column of $C$.

Let $T = \{j_1, \ldots, j_s\} \subseteq [M]$ be a set of size $s$. Then
$$g_T(C)_i = (C_{i, j_1}, \ldots, C_{i, j_s}) \in \Sigma^{s}.$$
The code operations that we will consider in this paper are distributions over tuples of basic row operations, or over tuples of basic column operations:
Definition 2.
A random row operation $\mathcal{R}$ is a distribution over tuples $(f_1, \ldots, f_N)$ of basic row operations. We treat a draw $(f_1, \ldots, f_N)$ from $\mathcal{R}$ as a code operation mapping $C$ to $C'$ by defining the $i$-th row of $C'$ to be $f_i(C)$. Similarly, a random column operation is a distribution over tuples of basic column operations.
We say a random row (column) operation has independent symbols (independent codewords, resp.) if the coordinates $f_1, \ldots, f_N$ are independent. We say a random row operation has symbols drawn independently without replacement if $f_1, \ldots, f_N$ are drawn uniformly at random without replacement from some set of basic row operations.
Finally, for a random row operation $\mathcal{R}$ and a sample $C'$ from $\mathcal{R}(C)$, note that the columns of $C'$ are in one-to-one correspondence with the columns of $C$. Thus, we will overload notation and write $c'$ for the column of $C'$ corresponding to the codeword $c \in C$.
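The framework of Definition 2 can be sketched in code: a draw from a random row operation is just a tuple of basic row operations, each of which produces one row of $C'$. The Python sketch below (our own hypothetical helper names, with random sampling as the concrete instance) treats a code as a list of codeword columns:

```python
import random

def apply_row_operations(C, basic_ops):
    """Apply a tuple of basic row operations to a code C, given as a
    list of codewords (columns).  Each basic operation maps C to one row
    of the new matrix C', mirroring Definition 2."""
    new_rows = [op(C) for op in basic_ops]
    # return C' as a list of columns, one per original codeword
    return [[row[j] for row in new_rows] for j in range(len(C))]

def random_sampling_ops(n, num_rows, seed=0):
    """A draw from the Random Sampling operation: each basic op picks a
    uniformly random position i (with replacement) and returns the i-th
    symbol of every codeword, i.e. the i-th row of C."""
    rng = random.Random(seed)
    indices = [rng.randrange(n) for _ in range(num_rows)]
    return [(lambda C, i=i: [c[i] for c in C]) for i in indices]
```

Since the basic operations are drawn independently, this particular operation has independent symbols in the sense of Definition 2.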
Below, we list several specific random row operations that fit into our framework.

Random Sampling: Let $\Sigma$ be any alphabet, and let $\mathcal{R} = \mathcal{D}^{N}$, where $\mathcal{D}$ is the uniform distribution on the basic row operations $f_{e_i}$ for $i \in [n]$, where $e_i$ is the $i$-th standard basis vector. Thus, each row of $C'$ is a row of $C$, chosen independently and uniformly with replacement.

Random Puncturing: Same as above, except the rows are chosen without replacement.

Random $t$-wise XOR: Let $\Sigma = \Sigma' = \mathbb{F}_2$, and let $\mathcal{R} = \mathcal{D}^{N}$, where $\mathcal{D}$ is the uniform distribution over the basic row operations $f_v$ for vectors $v \in \mathbb{F}_2^n$ of Hamming weight $t$.
That is, to create a new row of $C'$, we choose $t$ positions from $[n]$ and XOR the corresponding symbols together.

Random $t$-wise aggregation: Let $\Sigma' = \Sigma^t$, for any alphabet $\Sigma$, and let $\mathcal{R} = \mathcal{D}^{N}$, where $\mathcal{D}$ is the uniform distribution over the basic row operations $f_S$ for sets $S \subseteq [n]$ of size $t$.

Random $t$-wise folding: Let $\Sigma' = \Sigma^t$, for any alphabet $\Sigma$. For each partition $P$ of $[n]$ into sets $S_1, \ldots, S_{n/t}$ of size $t$, consider the row operation $(f_{S_1}, \ldots, f_{S_{n/t}})$.
Let $\mathcal{R}$ be the uniform distribution over these tuples, taken over all such partitions $P$.
The following column operations also fit into this framework; in this paper, we consider only the first. We mention the second operation (random interleaving) in order to parallel the situation with rows. We leave it as an open problem to study the effect of interleaving.

Random subcode: Let $\Sigma$ be any alphabet, and let $\mathcal{C} = \mathcal{D}^{M'}$, where $\mathcal{D}$ is the uniform distribution on the basic column operations $g_{e_j}$ for $j \in [M]$.
That is, $C'$ is formed from $C$ by choosing $M'$ codewords independently, uniformly, with replacement from $C$.
Notice that if $C$ is a linear code over $\mathbb{F}_q$, then this operation has the same distribution if we replace the basis vectors $e_j$ with all of $\mathbb{F}_q^{M}$, or with all vectors of a fixed weight, etc. Thus, we do not separately consider random XORs (or inner products) of columns, as we do with rows.

Random $s$-wise interleaving: In this case $\Sigma' = \Sigma^{s}$, and $\mathcal{C} = \mathcal{D}^{M'}$, where $\mathcal{D}$ is the uniform distribution over the basic column operations $g_T$ for sets $T \subseteq [M]$ of size $s$.
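Of the column operations above, the random subcode is the simplest to sketch (Python, with hypothetical helper names of our own): it just resamples columns, so repeated draws may collide.

```python
import random

def random_subcode(C, M_prime, seed=0):
    """Random subcode: choose M' codewords of C independently and
    uniformly with replacement (the random column operation above)."""
    rng = random.Random(seed)
    return [list(rng.choice(C)) for _ in range(M_prime)]

def num_distinct(code):
    """Number of distinct columns (with-replacement draws may repeat)."""
    return len({tuple(c) for c in code})
```

Because the draws are with replacement, the number of distinct codewords in the subcode can be smaller than $M'$; this is why results such as Proposition 1 track the number of distinct columns separately.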
3 Overview of Our Techniques
Random Row Operations.
In addition to answering Questions 1 and 2, one of the contributions of this work is to exhibit the generality of the techniques developed in [RW14]. As such, our proofs follow their framework. In that work, there were two steps: the first step was to bound the list-decodability in expectation (this will be defined more precisely below), and the second step was to bound the deviation from the expectation. In this work, we use the deviation bounds as a black box, and it remains for us to bound the expectation. We would also like to mention that we could have answered Questions 1 and 2 by applying the random puncturing results from [Woo13, RW14] as a black box to the XOR and direct product of the original code. We chose instead to unpack the proof to illustrate the generality of the proof technique developed in [Woo13, RW14] (the unpacked arguments also seem necessary to prove the generalization to the operation of taking random linear combinations of the rows of the code matrix).
The results on random row operations in this paper build on the approaches of [Woo13, RW14]. While those works were aimed at specific questions (the list-decodability of random linear codes and of Reed-Solomon codes with random evaluation points), the approach applies more generally. In this paper, we interpret the lessons of [Woo13, RW14] as follows:
If you take a code over $\Sigma$ that is list-decodable (enough) up to radius $\rho$, and do some random (enough) stuff to the symbols, you will obtain a new code (possibly over a different alphabet $\Sigma'$) which is list-decodable up to $\rho$. If the random stuff that you have done happens to, say, increase the rate, then you have made progress.
First, our notion of a random row operation being “random enough” is the same as having independent symbols (or symbols drawn independently without replacement). Next, we quantify what it means to be “list-decodable enough” in the setup described above. We introduce a parameter $\mathcal{E}$, defined as follows:
$$\mathcal{E} := \mathbb{E}_{C' \sim \mathcal{R}(C)}\ \max_{z \in (\Sigma')^{N}}\ \max_{\substack{\Lambda \subseteq C' \\ |\Lambda| = L + 1}}\ \sum_{c' \in \Lambda} \mathrm{agr}(c', z). \qquad (1)$$
The quantity $\mathcal{E}$ captures how list-decodable $C'$ is in expectation. Indeed, $\mathcal{E}$ is the quantity controlled by average-radius list-decodability (Definition 1). To make a statement about the actual average-radius list-decodability of $C'$ (as opposed to in expectation), we will need to understand when the expectation and the maximum can be exchanged:
Theorem 2.
Let $C$ and $\mathcal{R}$ be as above, and suppose that $\mathcal{R}$ has independent symbols. Fix $L \ge 1$. Then
where
for an absolute constant $c > 0$. For an appropriate setting of the parameters, we have
Theorem 2 makes the intuition above more precise: any “random enough” operation (that is, an operation with independent symbols) applied to a code with good “average-radius list-decodability” (that is, good $\mathcal{E}$) will result in a code which is also list-decodable. In Appendix C, we show that Theorem 2 in fact implies the same result when “random enough” is instead taken to mean that $\mathcal{R}$ has symbols drawn independently without replacement:
Corollary 1.
Theorem 2 holds when “independent symbols” is replaced by “symbols drawn independently without replacement”.
Random Column Operations.
Our result on random subcodes follows from a simple probabilistic argument. To show that the parameters in this positive result cannot be improved, we construct a specific code $C$. The code consists of various “clusters,” where each cluster is the set of all vectors that are close to some codeword of an auxiliary code. The auxiliary code has the property that it is list-decodable from a large fraction of errors, and that for smaller error rates its list size is suitably smaller; the existence of such a code with exponentially many codewords follows from a standard random coding argument. This allows the original code $C$ to even have good average-radius list-decodability. The fact that the cluster vectors are very close to some codeword of the auxiliary code (as well as the fact that the auxiliary code has large enough distance) then shows that the union bound used to prove the positive result is tight.
4 General Results
In this section, we state our results about the effects of some particular random operations—XOR, aggregation, and subcodes—on listdecodability. In Section 5, we will revisit these operations and resolve Questions 1, 2 and 3.
4.1 Random $t$-wise XOR
In this section, we consider the row operation of random $t$-wise XOR. We prove the following theorem.
Theorem 3.
Let $C$ be a code with distance $\delta$. Let $\mathcal{R}$ be the random $t$-wise XOR operation defined in Section 2, and consider the code $C' \sim \mathcal{R}(C)$. Suppose that $t$ is a sufficiently large constant (depending on $\delta$). Then for sufficiently small $\varepsilon$ and large enough $n$, with high probability, $C'$ is average-radius list-decodable from a $1/2 - \varepsilon$ fraction of errors and has rate $\tilde{\Omega}(\varepsilon^2)$.
With the goal of applying Theorem 2, we begin by computing the quantity $\mathcal{E}$.
Lemma 1.
Let $C$ be a code with distance $\delta$, and suppose that $t$ is sufficiently large. Then
The proof of Lemma 1 follows from an application of an average-radius Johnson bound (see Appendix A for more on these bounds). The proof is given in Appendix D.1. Given Lemma 1, Theorem 2 implies that with constant probability,
In particular, if the parameters are chosen appropriately, then in the favorable case $C'$ is average-radius list-decodable with constant list size.
It remains to verify the rate of $C'$. Notice that if the map $c \mapsto c'$ is injective, then we are done, because then the requirement on the rate reads
Thus, to complete the proof we will argue that the map $c \mapsto c'$ is injective with high probability, and so in the favorable case $|C'| = |C|$. Fix two distinct codewords $c_1, c_2 \in C$. Then, by the same computations as in the proof of Lemma 1,
Using the fact that we will choose $t$ sufficiently large, the right-hand side is
for sufficiently small $\varepsilon$. Thus, by a union bound over the choices of pairs of distinct codewords $c_1, c_2 \in C$, we see that the map is injective with high probability, as desired. This completes the proof of Theorem 3.
4.2 Random $t$-wise aggregation
Theorem 4 below analyzes $t$-wise aggregation in two parameter regimes. In the first parameter regime, we address Question 2, and we consider $t$-wise aggregation with $N = n/t$. In this case, the final code $C'$ will have the same rate as the original code $C$, and so in order for $C'$ to be list-decodable up to radius $1 - \varepsilon$, the rate of $C$ must be $O(\varepsilon)$. Item 1 shows that if this necessary condition is met (with some logarithmic slack), then $C'$ is indeed list-decodable up to $1 - \varepsilon$. In the second parameter regime, we consider what can happen when the rate of $C$ is significantly larger. In this case, we cannot take $N$ as small as $n/t$ and still hope for list-decodability up to $1 - \varepsilon$. The second part of Theorem 4 shows that we may take $N$ nearly as small as the list-decoding capacity theorem allows.
Theorem 4.
There are constants $c_1, c_2 > 0$ so that the following holds. Suppose $\varepsilon > 0$ is sufficiently small. Let $C$ be a code with distance $\delta$.

Suppose $N = n/t$, and suppose that $C$ has rate at most
Let $\Sigma' = \Sigma^t$, and let $\mathcal{R}$ be the $t$-wise aggregation operation of Section 2. Draw $(f_1, \ldots, f_N) \sim \mathcal{R}$, and let $C' = \mathcal{R}(C)$. Then with high probability, $C'$ is average-radius list-decodable from a $1 - \varepsilon$ fraction of errors, and further the rate of $C'$ satisfies $R(C') = R(C)$.

Suppose that $N \ge n/t$, and suppose that $C$ has rate $R$ so that
Choose $t$ so that
Let $\mathcal{R}$ be the $t$-wise aggregation operation of Section 2. Draw $(f_1, \ldots, f_N) \sim \mathcal{R}$, and let $C' = \mathcal{R}(C)$. Then with high probability, $C'$ is average-radius list-decodable, and the rate of $C'$ is at least
The rest of this section is devoted to the proof of Theorem 4. As before, it suffices to control $\mathcal{E}$.
Lemma 2.
With the setup above, we have
Again, the proof of Lemma 2 follows from an average-radius Johnson bound. The proof is given in Appendix D.1. Then by Theorem 2, recalling that
and that $N = n/t$ in this regime, we have with high probability that
In the favorable case,
(2) 
As before, $C'$ is average-radius list-decodable, for some constant list size, as long as the right-hand side is small enough. This holds as long as
(3) 
Equation (3) holds for any choice of $t$. First, we prove Item 1, and we focus on the case that $N = n/t$; this mimics the parameter regime of the definition of folding (which addresses Question 2). Given $t$, we can translate (3) into a condition on the rate of $C$. We have
and so, translating (3) into a requirement on the rate of $C$, we see that as long as
then with high probability is list-decodable. Choose so that this holds. It remains to verify that the rate of is the same as the rate of . The (straightforward) proof is deferred to Appendix D.2.
Claim 5.
With as above and with , with probability at least .
By a union bound, with high probability both the favorable event (2) occurs and Claim 5 holds. In this case, is list-decodable, and the rate of is
Next, we consider Item 2, where we may choose , thus increasing the rate. It remains true that as long as (3) holds, then is list-decodable. Again translating the condition (3) into a condition on , we see that as long as
(4) 
then is list-decodable. Now we must verify that the left-hand side of (4) is indeed the rate of , that is, that . As before, the proof is straightforward and is deferred to Appendix D.3.
Claim 6.
With as above and with arbitrary, with probability at least .
4.3 Random subcodes
In this section we address the case of random subcodes. Unlike the previous sections, the machinery of [RW14, Woo13] does not apply, and so we prove the results in this section directly. We have the following proposition.
Proposition 1.
Let be any list decodable q-ary code. Let be a random subcode of with (as in the definition in Section 2), where
With probability , the random subcode is list decodable. Further, the number of distinct columns is at least .
The proof of Proposition 1 follows straightforwardly from some Chernoff bounds. We defer the proof to Appendix E.2.
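The subsampling step behind Proposition 1 can be sketched as a small simulation. This is an illustrative sketch only: the subcode distribution (independent retention with probability `p`), the toy random code, and all parameters below are hypothetical stand-ins, not the exact construction of Section 2.

```python
import random

def random_subcode(code, p, seed=0):
    """Keep each codeword independently with probability p.

    Hypothetical sketch: the construction in Section 2 may sample the
    subcode differently (e.g., a uniformly random subset of fixed size).
    """
    rng = random.Random(seed)
    return [c for c in code if rng.random() < p]

# Chernoff-style intuition: if some Hamming ball contains L codewords of
# the original code, then after subsampling each codeword with
# probability p, about p*L of them survive, and the probability of a
# large excess decays exponentially in L.
random.seed(1)
code = [tuple(random.randrange(2) for _ in range(10)) for _ in range(2000)]
sub = random_subcode(code, p=0.1, seed=2)
print(len(sub))  # about 0.1 * 2000 = 200 codewords survive
```

The same concentration argument, applied to the codewords inside each fixed Hamming ball, is what drives the list-size bound in the proposition.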
Remark 6.
In Proposition 1, the choice of for the final list size was arbitrary in the sense that the can be made arbitrarily close to (assuming is small enough).
Proposition 1 only works for the usual notion of list decodability. It is natural to wonder whether a similar result holds for average-radius list decodability. We show that such a result indeed holds (though with slightly weaker parameters) in Appendix E.
It is also natural to wonder if one can pick a larger value of —closer to than to —in the statement of Proposition 1. In particular, if is polynomial in , could we pick ? In Appendix E, we show that this is not in general possible. More precisely, we show the following theorem.
Theorem 7.
For every , and for every , and for every sufficiently large, there exists a code with block length that is average-radius list-decodable such that the following holds. Let be obtained by picking a random subcode of of size where . Then with high probability, if is list-decodable for any , then .
5 Applications
5.1 Linear-time near-optimal list-decodable codes
First, we answer Question 1, and give linear-time encodable binary codes with the optimal tradeoff between rate and list-decoding radius. Our codes work as follows. We begin with a linear-time encodable code with constant rate and constant distance; we use Spielman's variant of expander codes [Spi96, Theorem 19]. These codes have rate , and distance (a small positive constant). Notice that a random puncturing of (as in [Woo13, RW14]) will not work, as does not have good enough distance; however, a random XOR, as in Section 4.1, will do the trick.
Corollary 2.
There is a randomized construction of binary codes so that the following hold with probability , for any sufficiently small and any sufficiently large .

is encodable in time .

is average-radius list-decodable with and , where is an absolute constant.

has rate .
Indeed, let be as above. Let , and choose , as in Theorem 3. Let . Items 2 and 3 follow immediately from Theorem 3, so it remains to verify Item 1, namely that is linear-time encodable. Indeed, we have
where is a matrix whose rows are binary vectors with at most nonzeros each. In particular, the time to multiply by is , as claimed.
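To illustrate why the encoding time stays linear, here is a minimal sketch of the final multiplication by a sparse XOR matrix. The row distribution, the function names, and the toy parameters (`n`, `m`, `d`) are illustrative stand-ins, not the exact distribution of Theorem 3; the point is only that each output symbol is a parity of at most `d` positions.

```python
import random

def sparse_xor_rows(n, m, d, seed=0):
    """Draw m rows, each supported on at most d of the n coordinates.

    Hypothetical stand-in for the sparse matrix: here each row is a
    uniformly random d-subset of positions.
    """
    rng = random.Random(seed)
    return [rng.sample(range(n), d) for _ in range(m)]

def xor_encode(base_codeword, rows):
    """Apply the sparse matrix over GF(2): each output symbol is the
    parity of at most d positions of the base codeword, so this step
    costs O(m * d) time on top of the (linear-time) base encoder."""
    return [sum(base_codeword[i] for i in row) % 2 for row in rows]

base = [1, 0, 1, 1, 0, 0, 1, 0]   # pretend output of a linear-time base encoder
rows = sparse_xor_rows(n=8, m=5, d=3, seed=42)
encoded = xor_encode(base, rows)
print(encoded)
```

Since `d` is a constant, the whole pipeline (base encoder followed by the sparse XOR step) runs in time linear in the block length.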
5.2 Random Folding
Next, we further discuss Question 2, which asked for a rigorous version of the intuition behind results for folded Reed-Solomon codes and expander-based symbol aggregation. The intuition is that increasing the alphabet size effectively reduces the number of error patterns a decoder has to handle, thus making it easier to list-decode. To make this intuition more concrete, consider the following example when . Consider an error pattern that corrupts a fraction of the odd positions (the rest do not have errors). This error pattern must be handled by any decoder which can list decode from a fraction of errors. On the other hand, consider a folding (with partition as above) of the code; now the alphabet size has increased, so we hope to correct a fraction of errors. However, the earlier error pattern affects a fraction of the new, folded symbols. Thus, in the folded scenario, an optimal decoder need not handle this error pattern, since (for small enough ).
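The intuition above can be checked numerically for blocks of size 2; the length and the particular error set below are illustrative choices, not the paper's exact parameters.

```python
# Toy check of the folding intuition for blocks of size 2: an error
# pattern confined to the odd positions doubles in relative weight once
# consecutive pairs of symbols are folded into a single large symbol,
# so a folded decoder with the same relative radius need not handle it.
n = 100
errors = {i for i in range(1, n, 2) if i < 40}    # 20 corrupted odd positions
plain_fraction = len(errors) / n                  # fraction of corrupted symbols
folded_corrupted = {i // 2 for i in errors}       # a pair is bad if either half is
folded_fraction = len(folded_corrupted) / (n // 2)
print(plain_fraction, folded_fraction)            # 0.2 0.4: the fraction doubles
```

Because every corrupted position lands in a distinct pair, the relative weight of the error pattern doubles after folding, pushing it outside the radius an optimal folded decoder must handle.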
In Theorem 4, Item 1, we have shown that if is any code with distance bounded away from and with rate sufficiently small (slightly sublinear in ), then has abundant random wise aggregations of symbols which are list-decodable up to a fraction of errors, when and is large enough (depending only on and ). This is the same parameter regime as folded Reed-Solomon codes (up to logarithmic factors in the rate), and thus the theorem answers Question 2 insofar as it lends a rigorous way to interpret wise aggregation in this parameter regime.
Remark 7.
While the intuition above applies equally well to folding and to more general wise symbol aggregation, we note that a random folding and a random symbol aggregation are not the same thing. In the latter, the symbols of the new code may overlap, while in the former they may not. However, allowing overlap makes our computations simpler; since the goal was to better understand the intuition above, we have done our analysis for the simpler case of wise symbol aggregation. It is an interesting open question to find a (clean) argument for the folding operation, perhaps along the lines of the argument of Corollary 1 for puncturing vs. sampling.
5.3 Applications of random subcodes
Finally, we observe that Proposition 1 immediately answers Question 3 in the affirmative. Indeed, suppose that is list-decodable with rate . Then Proposition 1 implies that with high probability, for any sufficiently small , a random subcode of rate
is list-decodable. In particular, if we start out with a binary code with constant rate and large but subexponential list size, the resulting subcode will also have constant rate and constant list size.
For example, this has immediate applications for Reed-Solomon codes. Guruswami and Xing [GX13] showed that for every real , and prime power , there is an integer such that Reed-Solomon codes defined over with the evaluation points being of rate can be list decoded from the optimal fraction of errors with list size . Thus, Proposition 1 implies that random subcodes of these codes are optimally list decodable (in all the parameters). We remark that this result also follows from the work of Guruswami and Xing [GX13]; our argument above is arguably simpler, but does not come with an algorithmic guarantee as the results of [GX13] do.
Given Proposition 1, it is natural to ask about the list-decodability of the subcode when the error radius may be different from . It turns out that this also follows from Proposition 1: below, we use Proposition 1 to argue that if a code is optimally list decodable for some fixed fraction of errors, then its random subcodes are, with high probability, optimally list decodable from a fraction of errors for any . Towards that end, we make the following simple observation:
Lemma 3.
Let be a list decodable q-ary code. Then for every , is also list decodable, where
Proof.
Consider a received word such that . Now we claim that there exists a such that
(5)  
(6) 
In the above, the second inequality follows from the following facts: the volume of a q-ary Hamming ball of radius is bounded from above by and from below by (and ). (6), along with the fact that is list decodable, proves the claimed bound on .
To complete the proof we argue (5); we show the existence of by the probabilistic method (this part of the proof is similar to the argument used to prove the Elias-Bassalygo bound [GRS14]): pick uniformly at random. Fix a . Then
Next we argue that
(7) 
Note that the above implies that
which would prove (5). To see why (7) is true, consider any positions where and agree. Note that if we change all of those values (to any of the possibilities) to obtain , then we have and , which proves (7). ∎
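The Hamming-ball volume bound invoked in the proof can be checked directly for small parameters. The functions below implement the standard exact ball volume and the q-ary entropy function; the specific parameters are illustrative choices.

```python
from math import comb, log

def hamming_ball_volume(q, n, r):
    """Exact number of words in [q]^n within Hamming distance r of a
    fixed word."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def q_ary_entropy(q, p):
    """The q-ary entropy function H_q(p) for 0 < p < 1."""
    return p * log(q - 1, q) - p * log(p, q) - (1 - p) * log(1 - p, q)

# Check the upper bound Vol_q(n, pn) <= q^(H_q(p) * n), valid for
# p <= 1 - 1/q, which the proof uses.  (q = 3, n = 20, r = 6 are
# illustrative parameters; here p = 0.3 <= 2/3.)
q, n, r = 3, 20, 6
vol = hamming_ball_volume(q, n, r)
bound = q ** (q_ary_entropy(q, r / n) * n)
print(vol, vol <= bound)
```

The matching lower bound loses only a polynomial factor in the block length, which is where the slack in Lemma 3 comes from.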
Corollary 3.
Let . Let be a list decodable q-ary code with optimal rate . Then for any , with probability at least , a random subcode of of rate is list decodable.
Remark 8.
The bound in Lemma 3 is tight up to the factor. In particular, one cannot have a bound of for any since that would contradict the list decoding capacity bounds.
6 Open Questions
In this work we have made some (modest) progress on understanding how random row and column operations change the list decodability of codes. We believe that our work highlights many interesting open questions. We list some of our favorites below:

We did not present any results for random wise interleaving. Gopalan, Guruswami, and Raghavendra [GGR11] have shown that for any code , the list decodability of its wise interleaved code (that is, the code obtained by deterministically applying all possible basic column operations that bunch together subsets of columns of size ) does not change by much. In particular, they show that if is list decodable then is list decodable. However, for random wise interleaving the list-decoding radius might actually improve. (If this were the case, it could formalize the reason why the Parvaresh-Vardy codes [PV05], which are subcodes of interleavings of Reed-Solomon codes, have good list-decodability properties.) We leave open the question of resolving this possibility.

As mentioned above, our work, and the results of Guruswami and Xing [GX13], show that random subcodes of Reed-Solomon codes over (for large enough ) with evaluation points from the subfield are optimally list decodable. We believe that we should be able to derive such a result even if we start from arbitrary Reed-Solomon codes, or at the very least from randomly punctured Reed-Solomon codes. Note that even though the results of [RW14] give near-optimal list-decodability results for Reed-Solomon codes, their results are logarithmic factors off from the optimal rate bounds. Proposition 1 implies that it suffices to prove a nontrivial exponential bound on the list size for list decoding rate Reed-Solomon codes from a fraction of errors; a special case of this is proved in [GX13], but the general question remains open.

All of our results so far use either just random row operations or just random column operations. An open question is to find applications where random row and column operations could be used together to obtain better results than either on its own. The previous point would be such an example, if resolved.
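For concreteness, the wise interleaving operation discussed in the first question above (stacking several codewords coordinate-wise, so that each position becomes a tuple over a larger alphabet) can be sketched as follows; the toy repetition code and the choice of two codewords per tuple are illustrative.

```python
# A minimal sketch of pairwise interleaving: stack codewords of a base
# code coordinate-wise, so position i of the interleaved word is the
# tuple of the i-th symbols of the stacked codewords.
def interleave(codewords):
    return list(zip(*codewords))

base_code = [(0, 0, 0), (1, 1, 1)]   # binary repetition code of length 3
interleaved_code = [interleave((c1, c2))
                    for c1 in base_code for c2 in base_code]
print(interleaved_code[1])           # [(0, 1), (0, 1), (0, 1)]
```

The interleaved code has the same length but an alphabet that is a power of the original one, which is the setting in which [GGR11] compare the list-decoding radii of the two codes.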
Acknowledgments
We thank Swastik Kopparty and Shubhangi Saraf for initial discussions on Questions 1 and 2 (and indeed for suggesting the random XOR as an operation to consider), and Dagstuhl for providing the venue for these initial discussions. We thank Venkat Guruswami for pointing out the argument in Appendix B. Finally, we thank Parikshit Gopalan for pointing out the connection of our results to existing results on XOR and direct product codes. MW also thanks the theory group at IBM Almaden for their hospitality during part of this work.
References
 [Bli86] Volodia M. Blinovsky. Bounds for codes in the case of list decoding of finite volume. Problems of Information Transmission, 22(1):7–19, 1986.
 [Bli05] V. M. Blinovsky. Code bounds for multiple packings over a nonbinary finite alphabet. Probl. Inf. Transm., 41(1):23–32, 2005.
 [Bli08] V. M. Blinovsky. On the convexity of one coding-theory function. Probl. Inf. Transm., 44(1):34–39, 2008.
 [CGV13] Mahdi Cheraghchi, Venkatesan Guruswami, and Ameya Velingker. Restricted isometry of Fourier matrices and list decodability of random linear codes. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 432–442, 2013.
 [DL12] Zeev Dvir and Shachar Lovett. Subspace evasive sets. In Proceedings of the 44th Symposium on Theory of Computing Conference (STOC), pages 351–358, 2012.
 [Eli57] Peter Elias. List decoding for noisy channels. Technical Report 335, Research Laboratory of Electronics, MIT, 1957.
 [GGR11] Parikshit Gopalan, Venkatesan Guruswami, and Prasad Raghavendra. List decoding tensor products and interleaved codes. SIAM J. Comput., 40(5):1432–1462, 2011.
 [GI01] Venkatesan Guruswami and Piotr Indyk. Expander-based constructions of efficiently decodable codes. In Proceedings of the 42nd Annual IEEE Symposium on the Foundations of Computer Science (FOCS), pages 658–667. IEEE, 2001.
 [GI03] Venkatesan Guruswami and Piotr Indyk. Linear time encodable and list decodable codes. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing (STOC), pages 126–135, 2003.
 [GI05] Venkatesan Guruswami and Piotr Indyk. Linear-time encodable/decodable codes with near-optimal rate. IEEE Transactions on Information Theory, 51(10):3393–3400, 2005.
 [GK13] Venkatesan Guruswami and Swastik Kopparty. Explicit subspace designs. In FOCS, 2013. To appear.
 [GN13] Venkatesan Guruswami and Srivatsan Narayanan. Combinatorial limitations of averageradius list decoding. RANDOM, 2013.
 [GR08] Venkatesan Guruswami and Atri Rudra. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on Information Theory, 54(1):135–150, 2008.
 [GR09] Venkatesan Guruswami and Atri Rudra. Error correction up to the information-theoretic limit. Commun. ACM, 52(3):87–95, 2009.
 [GR10] Venkatesan Guruswami and Atri Rudra. The existence of concatenated codes list-decodable up to the Hamming bound. IEEE Transactions on Information Theory, 56(10):5195–5206, 2010.
 [GRS14] Venkatesan Guruswami, Atri Rudra, and Madhu Sudan. Essential coding theory, 2014. Draft available at http://www.cse.buffalo.edu/~atri/courses/codingtheory/book/index.html.
 [Gur04] Venkatesan Guruswami. List Decoding of Error-Correcting Codes (Winning Thesis of the 2002 ACM Doctoral Dissertation Competition), volume 3282 of Lecture Notes in Computer Science. Springer, 2004.
 [Gur11] Venkatesan Guruswami. Linear-algebraic list decoding of folded Reed-Solomon codes. In IEEE Conference on Computational Complexity, pages 77–85, 2011.
 [GV10] Venkatesan Guruswami and Salil Vadhan. A lower bound on list size for list decoding. IEEE Transactions on Information Theory, 56(11):5681–5688, 2010.
 [GW13] Venkatesan Guruswami and Carol Wang. Linear-algebraic list decoding for variants of Reed-Solomon codes. IEEE Transactions on Information Theory, 59(6):3257–3268, 2013.
 [GX12] Venkatesan Guruswami and Chaoping Xing. Folded codes from function field towers and improved optimal rate list decoding. In Proceedings of the 44th Symposium on Theory of Computing Conference (STOC), pages 339–350, 2012.
 [GX13] Venkatesan Guruswami and Chaoping Xing. List decoding Reed-Solomon, algebraic-geometric, and Gabidulin subcodes up to the Singleton bound. In Proceedings of the 45th ACM Symposium on the Theory of Computing (STOC), pages 843–852, 2013.
 [GX14] Venkatesan Guruswami and Chaoping Xing. Optimal rate list decoding of folded algebraic-geometric codes over constant-sized alphabets. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1858–1866, 2014.
 [IJKW10] Russell Impagliazzo, Ragesh Jaiswal, Valentine Kabanets, and Avi Wigderson. Uniform direct product theorems: Simplified, optimized, and derandomized. SIAM J. Comput., 39(4):1637–1665, 2010.
 [Kop12] Swastik Kopparty. List-decoding multiplicity codes. Electronic Colloquium on Computational Complexity (ECCC), 19:44, 2012.
 [PV05] Farzad Parvaresh and Alexander Vardy. Correcting errors beyond the Guruswami-Sudan radius in polynomial time. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 285–294, 2005.
 [Rud07] Atri Rudra. List decoding and property testing of error-correcting codes. PhD thesis, University of Washington, 2007.
 [Rud11] Atri Rudra. Limits to list decoding of random codes. IEEE Transactions on Information Theory, 57(3):1398–1408, 2011.
 [RW14] Atri Rudra and Mary Wootters. Every list-decodable code for high noise has abundant near-optimal rate puncturings. In Proceedings of the 46th annual ACM Symposium on the Theory of Computing (STOC), 2014. To appear.
 [Spi96] Daniel A. Spielman. Linear-time encodable and decodable error-correcting codes. IEEE Transactions on Information Theory, 42(6):1723–1731, 1996.
 [Sud00] Madhu Sudan. List decoding: algorithms and applications. SIGACT News, 31(1):16–27, 2000.
 [Tre03] Luca Trevisan. List-decoding using the XOR lemma. In Proceedings of the 44th Symposium on Foundations of Computer Science (FOCS), pages 126–135, 2003.
 [Woo13]