

© 2009 by Taylor and Francis Group, LLC
320 Algorithmic Cryptanalysis

Together with B*, and assuming that vectors are represented by rows, the Gram-Schmidt algorithm computes a lower triangular matrix M, with 1s on its diagonal, such that B = M · B*. Equivalently, this means that for any index j, we have:

    b_j = b*_j + Σ_{i<j} m_{j,i} b*_i.    (10.11)

Gram-Schmidt orthogonalization is given in pseudo-code as Algorithm 10.3.
The algorithm is illustrated in dimension 3 in Figure 10.5.
The output of the Gram-Schmidt algorithm also allows us to write the Gram matrix and the determinant of a lattice in a convenient form; indeed:

    B · B^t = M · (B* · B*^t) · M^t.    (10.12)

Moreover, B* · B*^t is a diagonal matrix with diagonal entries equal to ‖b*_i‖². Since the determinant of the square matrix M is 1, this shows that the determinant of a lattice can be computed as the product of the norms of the vectors in any Gram-Schmidt orthogonal basis.

Algorithm 10.3 Gram-Schmidt algorithm
Require: Initial basis of the vector space B = (b_1, b_2, ..., b_n)
Create basis B* and transformation matrix M
for i from 1 to n do
  Let b*_i ← b_i
  for j from 1 to i − 1 do
    Let m_{i,j} ← (b_i | b*_j) / ‖b*_j‖²
    Let b*_i ← b*_i − m_{i,j} b*_j
  end for
end for
Output B* and M
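Algorithm 10.3 can be sketched in a few lines of exact rational arithmetic; the sample basis in the usage example is an arbitrary choice, and the final line checks the remark above about the determinant:

```python
from fractions import Fraction

def gram_schmidt(B):
    # Algorithm 10.3: returns (Bstar, M) with B = M * Bstar, where M is
    # lower triangular with a unit diagonal and the rows of Bstar are
    # pairwise orthogonal (but not normalized).
    n = len(B)
    Bstar, M = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        M[i][i] = Fraction(1)
        for j in range(i):
            # m_{i,j} = (b_i | b*_j) / ||b*_j||^2 (the running v gives
            # the same value, since earlier subtractions are orthogonal
            # to b*_j)
            M[i][j] = sum(a * b for a, b in zip(v, Bstar[j])) \
                      / sum(b * b for b in Bstar[j])
            v = [a - M[i][j] * b for a, b in zip(v, Bstar[j])]
        Bstar.append(v)
    return Bstar, M

B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
Bstar, M = gram_schmidt(B)
# det(L)^2 equals the product of the squared norms ||b*_i||^2
det_sq = 1
for v in Bstar:
    det_sq *= sum(x * x for x in v)
print(det_sq)   # 9, i.e., det(B)^2 = (-3)^2
```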

10.3.2 Lenstra-Lenstra-Lovász algorithm
The algorithm of Lenstra, Lenstra and Lovász, also called the LLL or L3 algorithm, is obtained by combining Gauss reduction in dimension two with Gram-Schmidt orthogonalization. The basic principle consists of applying Gauss reduction to a sequence of projected sublattices of dimension 2. More precisely, it considers the lattice generated by the orthogonal projections of two consecutive vectors b_i and b_{i+1} on the vector subspace generated by the






Figure 10.5: Computing b*_3 from b_3

previous vectors (b_1, ..., b_{i−1}) and applies one iteration of t-Gauss reduction¹ to this projected lattice. In order to also modify the higher-dimensional lattice, all swap and translation operations are pulled back from the projected space and applied to the vectors b_i and b_{i+1} themselves. In addition, these vectors may also be changed by adding to them some vector from the sublattice with basis (b_1, ..., b_{i−1}). The goal of this operation is to make sure that b_i is not too far from the corresponding projected vector. Once again, we illustrate this in dimension 3 in Figure 10.6.
Algorithm 10.4 gives a description of the L3 algorithm, using Algorithm 10.5 as a subroutine for length reduction. This subroutine is called RED within Algorithm 10.4. In this description, all coordinates in B are integers and all numbers in B* and M are represented as rationals.
To better understand this algorithm, let us detail the conditions of the translation and exchange steps within the L3 algorithm. The goal of translations is to modify the off-diagonal coefficients in the triangular matrix M and reduce them in absolute value below 1/2. In other words, these translations are used to make sure that each b_k is not too far from the corresponding b*_k and "almost orthogonal" to its predecessors. This is achieved by calling subalgorithm RED to translate a vector b_i along b_j (with j < i). This subalgorithm also modifies M to reflect this translation. The updates performed on M are

¹The use of t-Gauss ensures polynomial time behavior. Note that the parameter t and the parameter δ of LLL-reduction are related by δ = 1/t.


Algorithm 10.4 LLL algorithm using rationals
Require: Initial basis of an integer lattice B = (b_1, b_2, ..., b_n)
Require: Parameter 1/4 < δ ≤ 1
Compute B* and M using Gram-Schmidt Algorithm 10.3
for i from 1 to n do
  Let L_i ← ‖b*_i‖²
end for
Erase B*
Let k ← 2
while k ≤ n do
  Apply length reduction RED(k, k − 1)
  if (L_k + m²_{k,k−1} L_{k−1}) ≥ δ L_{k−1} then
    for l from k − 2 downto 1 do
      Apply length reduction RED(k, l)
    end for
    Increment k
  else
    Let mm ← m_{k,k−1}
    Let LL ← L_k + mm² L_{k−1}
    Let m_{k,k−1} ← mm L_{k−1}/LL
    Let L_k ← L_{k−1} L_k /LL
    Let L_{k−1} ← LL
    Exchange b_k and b_{k−1}
    for l from 1 to k − 2 do
      Exchange m_{k,l} and m_{k−1,l}
    end for
    for i from k + 1 to n do
      Let µ ← m_{i,k}
      Let m_{i,k} ← m_{i,k−1} − µ mm
      Let m_{i,k−1} ← µ + m_{k,k−1} m_{i,k}
    end for
    if k > 2 then
      Decrement k
    end if
  end if
end while
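As a rough illustration, Algorithms 10.4 and 10.5 can be sketched with exact rational arithmetic. This version recomputes the Gram-Schmidt data after each exchange instead of applying the update formulas in place, which is slower but keeps the sketch short; the test basis is an arbitrary choice:

```python
from fractions import Fraction

def gso(B):
    # Gram-Schmidt over the rationals: squared norms ||b*_i||^2 and M
    n = len(B)
    Bstar, M = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        M[i][i] = Fraction(1)
        for j in range(i):
            M[i][j] = sum(a * b for a, b in zip(v, Bstar[j])) \
                      / sum(b * b for b in Bstar[j])
            v = [a - M[i][j] * b for a, b in zip(v, Bstar[j])]
        Bstar.append(v)
    return [sum(x * x for x in v) for v in Bstar], M

def red(B, M, i, j):
    # Algorithm 10.5: length-reduce b_i along b_j, updating row i of M
    r = round(M[i][j])          # nearest integer to m_{i,j}
    if r:
        B[i] = [a - r * b for a, b in zip(B[i], B[j])]
        for w in range(j + 1):
            M[i][w] -= r * M[j][w]

def lll(B, delta=Fraction(3, 4)):
    # Sketch of Algorithm 10.4 with 0-based indices.
    B = [[Fraction(x) for x in row] for row in B]
    n = len(B)
    L, M = gso(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):
            red(B, M, k, j)
        # Lovász condition: L_k + m_{k,k-1}^2 L_{k-1} >= delta * L_{k-1}
        if L[k] + M[k][k - 1] ** 2 * L[k - 1] >= delta * L[k - 1]:
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            L, M = gso(B)           # recompute instead of updating
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]

print(lll([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))
```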






Figure 10.6: An elementary reduction step of L3 in dimension 3

Algorithm 10.5 Length reduction subalgorithm RED(i, j)
Require: Current lattice basis B and matrix M
Require: Integers i and j, where b_i is to be reduced by b_j
if |m_{i,j}| > 1/2 then
  Let r ← ⌊m_{i,j}⌉, the integer closest to m_{i,j}
  Let b_i ← b_i − r b_j
  Let m_{i,j} ← m_{i,j} − r
  for w from 1 to j − 1 do
    Let m_{i,w} ← m_{i,w} − r m_{j,w}
  end for
end if


easy to follow and detailed in subalgorithm RED. Note that the orthogonalized vectors b* remain unchanged during a call to RED.
The exchange steps are used to simultaneously reduce the norm of b*_{k−1} and increase the norm of b*_k. Note that B* itself is not used during the algorithm and that the knowledge of the squared norms ‖b*_i‖², denoted by L_i in the algorithm, and of the lower triangular matrix M suffices. During exchange steps, the values of the norms and the matrix M also need to be updated. Since these updates are more complicated than during translation steps, we now detail the mathematics behind them.
Given a pair of vectors b_i and b_{i+1}, let us denote by u and v their respective projections orthogonally to the subspace spanned by (b_1, ..., b_{i−1}). In the special case i = 1, we have u = b_1 and v = b_2. We have the following identities before the exchange:

    b*_i = u,    b*_{i+1} = v − ((v|u)/‖u‖²) u,    m_{i+1,i} = (v|u)/‖u‖²,

    ∀ k > i + 1 :  m_{k,i} = (b_k|b*_i)/‖b*_i‖²,    m_{k,i+1} = (b_k|b*_{i+1})/‖b*_{i+1}‖²,

and after the exchange:

    b̂*_i = v,    b̂*_{i+1} = u − ((v|u)/‖v‖²) v,    m̂_{i+1,i} = (v|u)/‖v‖²,

    ∀ k > i + 1 :  m̂_{k,i} = (b_k|b̂*_i)/‖b̂*_i‖²,    m̂_{k,i+1} = (b_k|b̂*_{i+1})/‖b̂*_{i+1}‖².

Moreover, since the determinant of a lattice is an invariant that does not depend on the chosen basis, we necessarily have:

    ‖b̂*_i‖² · ‖b̂*_{i+1}‖² = ‖b*_i‖² · ‖b*_{i+1}‖².    (10.15)

As a consequence, we derive the following update formulas:

    ‖b̂*_i‖² = ‖b*_{i+1}‖² + m²_{i+1,i} ‖b*_i‖²,    ‖b̂*_{i+1}‖² = ‖b*_i‖² ‖b*_{i+1}‖² / ‖b̂*_i‖²,

    ∀ k < i :  m̂_{i+1,k} = m_{i,k},    m̂_{i,k} = m_{i+1,k},    (10.16)

    m̂_{i+1,i} = (‖b*_i‖² / ‖b̂*_i‖²) m_{i+1,i},

    ∀ k > i + 1 :  m̂_{k,i+1} = m_{k,i} − m_{i+1,i} m_{k,i+1},
                   m̂_{k,i} = m_{k,i+1} + m̂_{i+1,i} m̂_{k,i+1}.
Looking at the conditions enforced during L3 reduction, it is clear that this algorithm yields an LLL-reduced basis of the input lattice. In addition, L3 is a polynomial time algorithm and LLL-reduced bases are reasonably good.
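The determinant invariance and the update formulas above can be checked numerically on a small example. This sketch simply recomputes both orthogonalizations with exact rational arithmetic; the basis and the exchanged position are arbitrary choices:

```python
from fractions import Fraction

def gso(B):
    # Gram-Schmidt over the rationals: squared norms ||b*_i||^2 and M
    n = len(B)
    Bstar, M = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        M[i][i] = Fraction(1)
        for j in range(i):
            M[i][j] = sum(a * b for a, b in zip(v, Bstar[j])) \
                      / sum(b * b for b in Bstar[j])
            v = [a - M[i][j] * b for a, b in zip(v, Bstar[j])]
        Bstar.append(v)
    return [sum(x * x for x in v) for v in Bstar], M

B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
L, M = gso(B)
# exchange b_2 and b_3 (rows 1 and 2 in 0-based indexing)
Bx = [B[0], B[2], B[1]]
Lx, Mx = gso(Bx)
# Equation (10.15): the product of the two affected squared norms
# is invariant under the exchange
assert Lx[1] * Lx[2] == L[1] * L[2]
# new ||b*_i||^2 = ||b*_{i+1}||^2 + m_{i+1,i}^2 * ||b*_i||^2
assert Lx[1] == L[2] + M[2][1] ** 2 * L[1]
# new m_{i+1,i} = old m_{i+1,i} * ||b*_i||^2 / new ||b*_i||^2
assert Mx[2][1] == M[2][1] * L[1] / Lx[1]
print("update formulas verified")
```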


With a minor modification, the L3 lattice reduction algorithm can also start from a family of vectors that generates a lattice and construct a reduced basis for this lattice. This modified algorithm simply removes any zero vectors that appear among the vectors b_i during the computations.

Complexity of the L3 algorithm
In order to bound the complexity of the L3 algorithm, we are going to rely on the notion of determinant described in Section 10.1. Given a basis (b_1, ..., b_n) of a lattice L, we can construct a family of sublattices L_1, ..., L_n, where each lattice L_i is generated by an initial segment of the given basis of L: (b_1, ..., b_i). We let d_i denote the determinant det(L_i) of the i-th sublattice.
At each step of the L3 algorithm, the values of d_i for the current basis may change. More precisely, we can study the possible behavior of these values during translations and during exchanges. During translations, we see that a single vector is changed by addition of multiples of previous vectors. Clearly, this operation does not modify any of the sublattices and only changes the representations of these lattices. During exchanges, the situation is slightly more complicated. Assume that b_k and b_{k−1} are exchanged; then we see that none of the sublattices L_i for i < k − 1 are modified. Similarly, none of the sublattices L_i for i ≥ k are modified, since up to order the basis of each is preserved. However, the sublattice L_{k−1} is modified. Thus, the determinant d_{k−1} can change. Since this determinant is the product of the norms of the vectors b*_i up to k − 1 and since ‖b*_{k−1}‖ decreases by a factor at least δ^{−1}, so does d_{k−1}. More precisely, if d̂_{k−1} denotes the new value after the exchange, we have:

    d̂_{k−1} ≤ δ · d_{k−1}.    (10.17)
Thus, during each exchange of the L3 algorithm, one value of d_i goes down, while the others are unchanged. During all other operations in L3, all values d_i are left unchanged. As a consequence, the product of all the d_i,

    D = ∏_{i=1}^n d_i,    (10.18)

is decreasing throughout the L3 algorithm. Moreover, whenever it changes, it is divided by at least δ^{−1}. In addition, D² is a non-zero integer, thus D must remain at least 1. Letting D_Init denote the initial value of D, we see that the L3 algorithm performs at most log(D_Init)/log(t) exchanges.
This allows us to bound the number of arithmetic operations performed during L3; however, it does not suffice to compute the running time of L3. We also need to analyze the size of the numbers that occur throughout the algorithm. In fact, the real difficulty is to check that the denominators of fractions remain small enough. With the original L3 algorithm, they are bounded by the largest value among the squared determinants d_i². As a consequence, it is possible to show that, using ordinary multiplication, the complexity of L3 reduction for

an integer lattice of rank r in dimension n, with all entries in the basis vectors bounded by B, is O(n r⁵ log³(B)).
This complexity can be improved by using floating point numbers instead of rationals within the L3 algorithm, keeping only the integer coordinates of basis vectors in exact form. A side effect of this change is that we can no longer round numbers to their nearest integer without mistakes, and we need to relax the length reduction condition given by Equation (10.9). It suffices to replace the constant 1/2 in this algorithm by a slightly larger constant, such as 0.501. From a theoretical point of view, the fastest algorithm is described in [NS05] and has complexity O(n r⁴ log²(B)).
In practice, floating point numbers with fixed precision are often used; however, in bad cases the round-off errors prevent the algorithm from terminating. Many heuristics have been proposed to avoid this bad behavior, for example see [SE94], but, despite their good performance in many cases, they sometimes fail.

Properties of LLL-reduced bases
Given an LLL-reduced basis for a lattice L, we first remark that any non-zero vector of L has its norm at least as large as the norm of the shortest vector b*_i in the orthogonalized basis. Due to the Lovász conditions, we find:

    ‖b*_{i+1}‖² ≥ (δ − 1/4) ‖b*_i‖².    (10.19)

If λ₁ denotes the first minimum of L, it is greater than the norm of the shortest b*_i. It follows that:

    λ₁ ≥ (δ − 1/4)^{(n−1)/2} ‖b₁‖.    (10.20)

As a consequence, the first vector of an LLL-reduced basis is not too large when compared to the first lattice minimum. This relation is often used in the special case δ = 3/4, yielding:

    ‖b₁‖ ≤ 2^{(n−1)/2} λ₁.    (10.21)

Another very useful inequality relates the size of b₁ with the determinant of the lattice. Since the determinant is equal to the product of the norms of all the b*_i, multiplying together the equations:

    ‖b*_i‖ ≥ (δ − 1/4)^{(i−1)/2} ‖b₁‖,    (10.22)

we find:

    det(L) ≥ (δ − 1/4)^{n(n−1)/4} ‖b₁‖ⁿ.    (10.23)

Taking the n-th root and letting δ = 3/4, we obtain:

    ‖b₁‖ ≤ 2^{(n−1)/4} det(L)^{1/n}.    (10.24)
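These inequalities are easy to check numerically on a small reduced basis. In the sketch below the basis is an arbitrary choice whose rows happen to be orthogonal, so it is clearly LLL-reduced for δ = 3/4; the first minimum is found by a small brute-force search, and the bounds are raised to integer powers to avoid square roots:

```python
# (2,1), (-1,2) is LLL-reduced for delta = 3/4: the rows are orthogonal,
# so m_{2,1} = 0 and both reduction conditions clearly hold.
B = [[2, 1], [-1, 2]]
n = len(B)
L1 = B[0][0] ** 2 + B[0][1] ** 2                        # ||b1||^2 = 5
det_sq = (B[0][0] * B[1][1] - B[0][1] * B[1][0]) ** 2   # det(L)^2 = 25

# first minimum found by brute force over small coefficients
lam1_sq = min(
    (x * B[0][0] + y * B[1][0]) ** 2 + (x * B[0][1] + y * B[1][1]) ** 2
    for x in range(-3, 4) for y in range(-3, 4) if (x, y) != (0, 0)
)

# Equation (10.21), squared: ||b1||^2 <= 2^(n-1) * lambda_1^2
assert L1 <= 2 ** (n - 1) * lam1_sq
# Equation (10.24), raised to the 2n-th power to stay integral:
# ||b1||^(2n) <= 2^(n(n-1)/2) * det(L)^2
assert L1 ** n <= 2 ** (n * (n - 1) // 2) * det_sq
print(L1, lam1_sq, det_sq)   # 5 5 25
```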


10.4 Shortest vectors and improved lattice reduction
Once we have computed an LLL-reduced basis for a lattice L, we have already obtained some reasonably short vectors. However, we might wish to obtain stronger results. One typical problem is to determine the first minimum λ₁(L) of the lattice and find a non-zero vector achieving this minimal length. Note that we cannot use a straightforward brute force approach to find such a shortest vector. Indeed, the set of lattice vectors is infinite and we cannot directly enumerate it. Instead, we first need to analyze the properties of the lattice and determine a finite set of candidates that contains the shortest vector. This is the basic idea of enumeration algorithms [Kan83].

10.4.1 Enumeration algorithms for the shortest vector
Assume that we are given a basis B for a lattice L of rank r. Then, for any r-tuple α of integers, we can compute a vector V(α) in L by the following formula:

    V(α) = Σ_{i=1}^r α_i b_i,    (10.25)

where the vectors b_i are in the basis B. Can we determine a finite set S_B such that the shortest non-zero vector in L is equal to V(α) for some α in S_B? Kannan showed in [Kan83] that such a finite set exists and can be used to construct an algorithm for finding the shortest vector.
As before in this chapter, we denote by B* the Gram-Schmidt orthogonalized basis corresponding to B and by b*_i the vectors from B*. Let α be a non-zero r-tuple as above such that α_k is the last non-zero component in α, i.e., for any integer i > k, we have α_i = 0. Then, we have the following:

    V(α) = Σ_{i=1}^k α_i b_i = Σ_{i=1}^{k−1} β_i b*_i + α_k b*_k,    (10.26)

for some (k − 1)-tuple β. Since B* is an orthogonal basis, this implies that the norm of V(α) is at least |α_k| times the norm of b*_k. As a consequence, we know that if α is such that V(α) is the shortest non-zero vector in L, then for any vector v in L we have:

    |α_k| ≤ ‖v‖ / ‖b*_k‖.    (10.27)

As a consequence, we have shown that there are only finitely many possibilities for the last non-zero integer in α. Note that the quality of the bound given by Equation (10.27) varies depending on the vector v we choose and on the norm of b*_k. Using an LLL-reduced basis in this bound is a good idea because it gives two improvements. First, the first vector in the reduced basis is quite small and is a good candidate for v. Second, thanks to Equation (10.19), the norm of b*_k in a reduced basis is not too large. More precisely, with an L3 reduced basis, with parameter t, we can derive the following bound for α_k:

    |α_k| ≤ (δ − 1/4)^{−(k−1)/2}.    (10.28)

It remains to see that the other components in α can similarly be bounded when V(α) is the shortest non-zero vector in L. More precisely, substituting Equation (10.11) into Equation (10.26) and letting β_k = α_k, we may remark that for an L3 reduced basis, we have:

    β_j = α_j + Σ_{i=j+1}^k α_i m_{i,j}.    (10.29)

This implies that:

    |α_j + Σ_{i=j+1}^k α_i m_{i,j}| ≤ √(‖v‖² − Σ_{i=j+1}^k β_i² ‖b*_i‖²) / ‖b*_j‖ ≤ ‖v‖ / ‖b*_j‖.    (10.30)

Thus, for an L3 reduced basis, there are fewer than 2 (δ − 1/4)^{−(j−1)/2} + 1 values to consider for α_j.
All in all, starting from an L3 reduced basis, it is possible to obtain the shortest vector of a lattice of rank r by considering at most 2^{O(r²)} vectors. In addition to Kannan in [Kan83], this has also been studied by Fincke and Pohst in [FP85] and, later, by Schnorr and Euchner in [SE94]. A completely different algorithm for finding short vectors was proposed by Ajtai, Kumar and Sivakumar in [AKS01]. It is asymptotically much more efficient, but does not seem to be practically competitive for the current dimensions that can be addressed with lattice reduction.
Note that a technical difficulty when programming this algorithm is that we need to write down r levels of nested loops. Were r fixed, this would cause no problem, but with varying values of r, we have to either write a recursive routine or remove the recursion by explicitly keeping a stack of loop counters inside an array. A basic enumeration algorithm using this second approach is listed as Algorithm 10.6. One important practical feature of this algorithm is that whenever a new shortest vector is found, its length is used to bound the coefficients used during the enumeration. Moreover, the coefficients are bounded using the first inequality of Equation (10.30), which is tighter than the second one.
In practice, Schnorr and Euchner propose in [SE94] to speed up the enumeration algorithm by changing the order in which the various possibilities for α are considered. Instead of enumerating the values at each level from the


Algorithm 10.6 A basic short vector enumeration algorithm
Require: Input L3 reduced basis B of rank r, orthogonalized B* and M
Require: Input round-off error tolerance ε
Let array α_best[1...r] ← (1, 0, 0, ..., 0)
Let L_best ← ‖b₁‖²
Declare array α[1...r + 1]
Let α[r + 1] ← 0  {Defined to avoid memory overflows}
Let α[r] ← −⌊√(L_best/‖b*_r‖²)⌋
Declare array L[1...r + 1]
Let L[r + 1] ← 0
Let t ← r  {Initialize loop level index to outer loop}
while t ≤ r do
  if t = 0 then
    Let c ← Σ_{i=1}^r α[i] b_i
    Let L ← ‖c‖²  {Recompute exact squared norm}
    if L < L_best then
      Let L_best ← L
      Let α_best ← α
    end if
    Increment α[1]
    Let t ← 1
  end if
  Let β ← α[t] + Σ_{i=t+1}^r α[i] m_{i,t}
  Let L[t] ← L[t + 1] + β² ‖b*_t‖²  {Approximate contribution to norm}
  if L[t] < L_best + ε then
    if t > 1 then
      Let β ← Σ_{i=t}^r α[i] m_{i,t−1}
      Let α[t − 1] ← ⌈−√((L_best − L[t])/‖b*_{t−1}‖²) − β⌉
    end if
    Decrement t  {Go to next inner loop level}
  else
    Increment α[t + 1]
    Increment t  {Go to next outer loop level}
  end if
end while
Output coordinates α_best of shortest vector in basis B.
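The same enumeration can be written recursively, which is often easier to follow than the explicit stack of loop counters. The sketch below is a recursive variant of Algorithm 10.6 in exact rational arithmetic (only the level bounds use floating point, with a small slack playing the role of ε); the test basis is an arbitrary choice:

```python
import math
from fractions import Fraction

def gso(B):
    # exact Gram-Schmidt: squared norms L_i = ||b*_i||^2 and matrix M
    n = len(B)
    Bstar, M = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        M[i][i] = Fraction(1)
        for j in range(i):
            M[i][j] = sum(a * b for a, b in zip(v, Bstar[j])) \
                      / sum(b * b for b in Bstar[j])
            v = [a - M[i][j] * b for a, b in zip(v, Bstar[j])]
        Bstar.append(v)
    return [sum(x * x for x in v) for v in Bstar], M

def shortest_vector(B):
    # coordinates alpha_r, ..., alpha_1 are fixed from the last one down,
    # pruning with the accumulated squared norm as in Algorithm 10.6
    r = len(B)
    L, M = gso(B)
    best = [sum(x * x for x in B[0]), list(B[0])]   # ||b1||^2 and b1

    def enum(t, alpha, acc):
        if t < 0:
            v = [sum(alpha[i] * B[i][j] for i in range(r))
                 for j in range(len(B[0]))]
            norm = sum(x * x for x in v)
            if 0 < norm < best[0]:
                best[0], best[1] = norm, v
            return
        # offset of alpha[t] coming from the already fixed coordinates
        c = sum(alpha[i] * M[i][t] for i in range(t + 1, r))
        bound = math.sqrt(float((best[0] - acc) / L[t])) + 1e-9
        for a in range(math.ceil(-bound - float(c)),
                       math.floor(bound - float(c)) + 1):
            contrib = (a + c) ** 2 * L[t]
            if acc + contrib <= best[0]:    # exact pruning
                alpha[t] = a
                enum(t - 1, alpha, acc + contrib)
                alpha[t] = 0

    enum(r - 1, [0] * r, Fraction(0))
    return best[1]

print(shortest_vector([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))
```

For this lattice the shortest vector has squared norm 1, since b₁ − 2b₂ + b₃ = (0, 0, 1).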


smallest to the largest possibility, they propose to use a centered approach. For example, at the first level of the enumeration, i.e., when considering α[r], since the enumeration starts from the end, they enumerate the possibilities in the order 0, 1, −1, 2, −2, ... For inner levels, the enumeration starts from the middle of the range of available options and proceeds outwards, alternating between the left and right sides.
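This centered order is easy to produce with a small generator; the function name `zigzag` and the tie-breaking rule (larger value first) are our choices, not prescribed by [SE94]:

```python
def zigzag(center):
    # Yield all integers ordered by increasing distance to `center`,
    # the centered (Schnorr-Euchner) enumeration order.  Around 0 this
    # gives 0, 1, -1, 2, -2, ...
    first = round(center)
    yield first
    k = 1
    while True:
        lo, hi = first - k, first + k
        if abs(hi - center) <= abs(lo - center):
            yield hi
            yield lo
        else:
            yield lo
            yield hi
        k += 1

from itertools import islice
print(list(islice(zigzag(0), 7)))   # [0, 1, -1, 2, -2, 3, -3]
```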

10.4.2 Using shortest vectors to improve lattice reduction
Thanks to the above enumeration algorithms for short vectors, it is possible to construct lattice bases which are more strongly reduced than L3 reduced bases. Moreover, Kannan showed in [Kan83] that the relationship between short vectors and strong lattice reduction runs both ways. In one direction, we can use short vectors to improve the quality of a lattice basis; in the other direction, the short vector enumeration algorithms run faster when applied to a strongly reduced basis. Using this bidirectional relationship, Kannan devised a recursive algorithm that constructs strongly reduced bases and short vectors faster than by running the enumeration algorithm directly with an L3 reduced basis. Recall that for a lattice of rank r, we showed in Section 10.4.1 that starting from an L3 reduced basis, it suffices to enumerate 2^{O(r²)} vectors to find the shortest lattice vector. With the recursive algorithm of Kannan, the complexity is lowered to 2^{O(r log r)}.
This algorithm relies on the notion of Hermite-Korkine-Zolotarev or HKZ reduced bases. A basis B of a lattice L, with orthogonalized basis B* such that B = M · B*, is HKZ reduced, if and only if, the following properties hold:

1. The basis B is size-reduced, as in Equation (10.9), i.e., all off-diagonal coefficients of M satisfy |m_{i,j}| ≤ 1/2.

2. The vector b₁ realizes the first minimum λ₁(L).

3. The projections of the vectors b₂, ..., b_r orthogonally to b₁ form an HKZ reduced basis.

The third condition is recursive; however, it simply means that b₂ is chosen to minimize ‖b*₂‖ and so on for the subsequent vectors. It is easy to remark that an HKZ reduced basis is L3 reduced.
During Kannan's algorithm, we also need a slightly relaxed notion as an intermediate computation step. We say that a basis B is quasi-HKZ reduced when ‖b₁‖ ≤ 2‖b*₂‖ and the projection of the sublattice (b₂, ..., b_r) orthogonally to b₁ is HKZ reduced. The key point is that enumerating the shortest vector in a lattice given a quasi-HKZ reduced basis is much more efficient than with an L3 reduced basis. A sketch of Kannan's algorithm is given as Algorithm 10.7. This algorithm uses L3 on a generating family as a subroutine.


It has been thoroughly analyzed when the underlying L3 uses exact arithmetic [HS07]. However, in practice, it is usually replaced by floating point heuristics. Due to its high time complexity, Kannan's algorithm for HKZ reduction can only be used for lattices of small enough rank.

Algorithm 10.7 Kannan's HKZ reduction algorithm
Require: Input basis B of a lattice L
repeat
  Reduce B using L3
  Use HKZ reduction recursively on projected lattice (b₂, ..., b_r) orthogonally to b₁
until ‖b₁‖ ≤ 2‖b*₂‖  {Loop outputs a quasi-HKZ reduced basis}
Find b_min, a shortest vector in B, using Algorithm 10.6
Apply L3 on the family (b_min, b₁, ..., b_r) to obtain an L3 reduced basis with b_min as first vector. Put the output in B.
Use HKZ reduction recursively on projected lattice (b₂, ..., b_r) orthogonally to b₁
Output HKZ reduced B, including lattice shortest vector

Schnorr's block reduction

To conclude this chapter on lattice reduction, let us mention that for lattices of large rank, where HKZ reduction is not feasible, there exists a family of reduction algorithms introduced by Schnorr in [Sch87] called block Korkine-Zolotarev reduction. The basic idea is to use HKZ reduction on projected sublattices of fixed rank. Without going into the details, let us mention that each algorithm in the family has a polynomial running time. Of course, the degree of this polynomial grows with the size of the considered blocks. Heuristic versions of this algorithm are available in some of the computer packages cited in the preface.

10.5 Dual and orthogonal lattices
When dealing with lattice reduction in cryptography, the notions of the dual and the orthogonal of a given lattice are frequently encountered. Since these two notions are important and distinct from each other, we describe them in this section.

10.5.1 Dual of a lattice
Let L be a lattice of rank r in a vector space of dimension n. Consider span(L), the vector subspace spanned by L. We first define the dual lattice L* as a set:

    L* = {u ∈ span(L) | ∀ v ∈ L : (u|v) ∈ Z}.

To check that L* is indeed a lattice, we need to prove that it is a discrete subgroup; in fact, in this case, it is even a discrete subgroup of span(L). The subgroup part is left as an exercise to the reader. To prove that L* is a discrete subgroup, we proceed by contradiction. If L* is not discrete, then there exists a sequence of vectors u_i of L*, whose norms tend to 0 as i tends to infinity. Of course, this limit also holds for any equivalent norm. We now consider a basis (v₁, ..., v_r) of L and the norm ‖·‖_L defined on span(L) by:

    ‖u‖_L = Σ_{i=1}^r (u|v_i)².

Now, by definition of L*, the norm of any vector in the sequence u_i is a sum of squares of integers and, thus, a non-negative integer. As a consequence, the sequence ‖u_i‖_L is a sequence of non-negative integers which tends to zero. Moreover, since none of the u_i is the zero vector, none of the norms ‖u_i‖_L can be equal to zero and the norms are lower bounded by 1. This contradiction concludes the proof.
When the lattice L has full rank, i.e., when r = n, a basis for L* can simply be obtained as the transpose of the inverse of the matrix of a basis of L. In the general case, in order to explicitly construct a basis of the dual lattice, the easiest way is to start from a Gram-Schmidt orthogonalization of a basis of the original lattice. Let us consider a basis (b₁, ..., b_r) of L and (b*₁, ..., b*_r) its Gram-Schmidt orthogonalization.
We first remark that b̂_r = b*_r/‖b*_r‖² belongs to the dual lattice L*. Indeed, the scalar product of b̂_r with b_i is 0 when i < r and 1 when i = r. Thus, decomposing any vector v in L on the given basis, we find that the scalar product of v with b̂_r is an integer. In truth, this integer is the coefficient of b_r in the decomposition of v. Next, we consider the vector b̂*_{r−1} = b*_{r−1}/‖b*_{r−1}‖² and see that, in general, this vector does not belong to L*. Indeed, its scalar products with all the b_i are zero when i < r − 1 and its scalar product with b_{r−1} is equal to one; however, the scalar product with b_r is not necessarily an integer. More precisely, using the notation of Section 10.3.1, we have:

    (b_r|b*_{r−1}) = m_{r,r−1} ‖b*_{r−1}‖².    (10.31)

It follows that:

    (b_r|b̂*_{r−1}) = m_{r,r−1}.    (10.32)


Clearly, adding a multiple of b̂_r to b̂*_{r−1} preserves the scalar products with the b_i for i < r and modifies the scalar product with b_r. In particular, if we consider the vector b̂_{r−1} = b̂*_{r−1} − m_{r,r−1} b̂_r, the scalar product with b_r becomes 0 and we obtain a second, linearly independent, vector in L*.
To construct a complete basis for L*, we first specify its Gram-Schmidt orthogonalized basis:

    (b̂*_r, ..., b̂*_1) = (b*_r/‖b*_r‖², ..., b*_1/‖b*_1‖²).    (10.33)

In other words, the basis for L* has the same orthogonalized basis as the initial basis for L, up to normalization, with the order of the vectors reversed. Then, we specify the Gram-Schmidt coefficients for the dual basis. For simplicity, it is preferable to number the basis vectors of the dual lattice in reversed order and consider a basis (b̂_r, ..., b̂_1). The Gram-Schmidt coefficients of this basis are chosen such that:

    (b̂_i|b̂*_j) / ‖b̂*_j‖² = −m_{j,i}.

Equivalently, we simply compute b̂_i as b̂*_i − Σ_{j=i+1}^r m_{j,i} b̂_j.
We leave as an exercise to the reader to check that applying the same method to compute the dual of the dual basis gives back the original basis of the lattice L. In addition, the reader can also verify that when the initial basis of L is L3 reduced, so is the resulting basis of the dual lattice.
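The construction above can be carried out with exact rational arithmetic. The sketch below builds the dual basis from the last vector down, using the recursive rule b̂_i = b̂*_i − Σ_{j>i} m_{j,i} b̂_j, and checks that every scalar product against the original basis is an integer (here δ_{i,j}); the sample basis is an arbitrary full-rank choice:

```python
from fractions import Fraction

def gso(B):
    # Gram-Schmidt over the rationals: orthogonal vectors and matrix M
    n = len(B)
    Bstar, M = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        M[i][i] = Fraction(1)
        for j in range(i):
            M[i][j] = sum(a * b for a, b in zip(v, Bstar[j])) \
                      / sum(b * b for b in Bstar[j])
            v = [a - M[i][j] * b for a, b in zip(v, Bstar[j])]
        Bstar.append(v)
    return Bstar, M

B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
Bstar, M = gso(B)
r = len(B)
# hatted orthogonal vectors b*_i / ||b*_i||^2, as in Equation (10.33)
H = [[x / sum(y * y for y in v) for x in v] for v in Bstar]
# build the dual basis from the last vector down:
# the last dual vector is H[r-1], then each earlier one subtracts the
# m_{j,i} multiples of the already built dual vectors
D = [None] * r
for i in range(r - 1, -1, -1):
    v = list(H[i])
    for j in range(i + 1, r):
        v = [a - M[j][i] * b for a, b in zip(v, D[j])]
    D[i] = v
# check: (dual_i | b_j) = 1 if i = j and 0 otherwise, hence integral
for i in range(r):
    for j in range(r):
        s = sum(a * b for a, b in zip(D[i], B[j]))
        assert s == (1 if i == j else 0)
print("dual basis has integral scalar products with B")
```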

10.5.2 Orthogonal of a lattice
Let L be an integer lattice in a vector space of dimension n, with rank r < n. We define the orthogonal lattice L⊥ as the following subset of Zⁿ:

    L⊥ = {u ∈ Zⁿ | ∀ v ∈ L : (u|v) = 0}.

It is clear that L⊥ is a subgroup of Zⁿ and thus an integer lattice. Its rank is equal to n − r. Moreover, lattice reduction and, in particular, the L3 algorithm gives an efficient way to compute L⊥ from L. Consider the lattice spanned by the columns of the following matrix:

    ( k · B(L) )
    (     I    )

where B(L) is the matrix of an L3 reduced basis of L, k is a large enough constant and I is an n × n identity matrix. After applying the L3 algorithm to this lattice, we obtain a matrix of the form:

    (    0     W )
    (  B(L⊥)   V )

where B(L⊥) is an L3 reduced basis of the orthogonal lattice L⊥. This notion of orthogonal lattice is frequently used in Chapter 13. Note that it is usually more efficient to use linear algebra rather than lattice reduction to construct a basis of the vector space spanned by B(L⊥). Moreover, by using a linear algebra method that does not perform divisions, i.e., similar to Hermite normal form computations, we can in fact obtain a basis B(L⊥). This is essential for the efficiency of the algorithms presented in Chapter 13.
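As a small illustration of the linear algebra route, the sketch below finds integer vectors orthogonal to a lattice by rational row reduction followed by clearing denominators. For this small example the result generates the full orthogonal lattice; in general, as the text notes, a division-free Hermite-normal-form style computation is needed to guarantee a basis of L⊥ rather than a finite-index sublattice:

```python
from fractions import Fraction
from math import gcd

def integer_kernel(B, n):
    # solve (b_i | u) = 0 by rational Gaussian elimination, then make
    # each kernel vector integral and primitive
    rows = [[Fraction(x) for x in row] for row in B]
    pivots, rank = [], 0
    for col in range(n):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = rows[rank][col]
        rows[rank] = [x / inv for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        pivots.append(col)
        rank += 1
    kernel = []
    for free in (c for c in range(n) if c not in pivots):
        u = [Fraction(0)] * n
        u[free] = Fraction(1)
        for i, c in enumerate(pivots):
            u[c] = -rows[i][free]
        den = 1                      # clear denominators
        for x in u:
            den = den * x.denominator // gcd(den, x.denominator)
        v = [int(x * den) for x in u]
        g = 0                        # make the vector primitive
        for x in v:
            g = gcd(g, abs(x))
        kernel.append([x // g for x in v])
    return kernel

# L spanned by (1,2,3) and (4,5,6); its orthogonal is generated by (1,-2,1)
print(integer_kernel([[1, 2, 3], [4, 5, 6]], 3))   # [[1, -2, 1]]
```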


Exercises

1. Let A and N be two coprime integers. Consider the lattice L generated by the rows of the following matrix:

   • Let (u, v) be any vector in this lattice with v ≠ 0. What can you say about u/v?

   • What is the determinant of the lattice L?

   • If (u, v) is the smallest vector in L, what can you say about u/v?

   • Find a similar result using the continued fraction expansion of A/N.

2h. Prove that the second vector in a Gauss reduced basis for a 2-dimensional lattice achieves the second minimum of this lattice.
3. Consider the n-dimensional lattice L generated by the identity matrix.

• What are the minima of this lattice?

• Write a program that generates random looking bases of L.
• How can you easily check that a matrix M is a basis for L?
• Experiment with an existing implementation of lattice reduction
algorithms and try to reduce random looking bases of L. When
does the lattice reduction algorithm output the identity matrix (or
a permuted copy)?

4h. Show that any lattice reduction algorithm can be modified to keep track of the unimodular transform that relates the input and output matrices.

5. Reduce, by hand, with the L3 algorithm the lattice generated by:

    ( 1 2  3 )
    ( 4 5  6 )
    ( 7 8 10 )

Chapter 11
Polynomial systems and Gröbner base computations

Thus, if we could show that solving a certain system requires at
least as much work as solving a system of simultaneous equations
in a large number of unknowns, of a complex type, then we would
have a lower bound of sorts for the work characteristic.
Claude Shannon

As shown by the above quote, written by Claude Shannon in his famous
paper [Sha49], multivariate polynomial systems of equations naturally arise
when writing down cryptanalytic problems. To illustrate this fact, let us
consider two simple examples, one coming from public key cryptography and
the other from secret key cryptography.

Factoring Writing down factoring as a multivariate system of polynomial equations is a very simple process. Assume that N = PQ is an RSA number. Writing P and Q in binary, we can define two sets p and q of unknowns over F2 according to the following correspondence:

    P = Σ_i p_i 2^i  and  Q = Σ_i q_i 2^i.

Replacing P and Q in the equation N = PQ yields a multivariate polynomial equation over the integers. Finding the factorization of N corresponds to finding a {0, 1} non-trivial solution of this equation.
In fact, it is possible to go further and rewrite this equation as a system of equations over F2. In order to get a reasonably simple expression, it is a good idea to add extra variables to keep track of the computation. To give a simple example, assume that we would like to factor 35 into two numbers of 3 bits each: P = 4p2 + 2p1 + p0 and Q = 4q2 + 2q1 + q0. This easily yields an equation:

    16 p2 q2 + 8(p1 q2 + p2 q1) + 4(p0 q2 + p1 q1 + p2 q0) + 2(p0 q1 + p1 q0) + p0 q0 = 35.


To replace this equation with a system of equations over F2, we define a partial sum S = p0 Q + 2 p1 Q and see that N = S + 4 p2 Q. The partial sum S can be represented as 16 s4 + 8 s3 + 4 s2 + 2 s1 + s0. Adding two new sets of variables c and c′ to deal with the carries in both additions, we finally obtain:

    s0 = p0 q0,
    s1 = p0 q1 ⊕ p1 q0,        c1 = p0 p1 q0 q1,
    s2 = c1 ⊕ p0 q2 ⊕ p1 q1,   c2 = c1 p0 q2 ⊕ c1 p1 q1 ⊕ p0 p1 q1 q2,
    s3 = c2 ⊕ p1 q2,           c3 = c2 p1 q2,
    s4 = c3;

    1 = s0,  1 = s1,
    0 = s2 ⊕ p2 q0,            c′2 = s2 p2 q0,
    0 = s3 ⊕ c′2 ⊕ p2 q1,      c′3 = s3 c′2 ⊕ s3 p2 q1 ⊕ c′2 p2 q1,
    0 = s4 ⊕ c′3 ⊕ p2 q2,      1 = s4 c′3 ⊕ s4 p2 q2 ⊕ c′3 p2 q2.

This is a polynomial system of multivariate equations of degree 4. This can be generalized to express the factorization of any RSA number N as a system of O(log(N)²) equations of degree 4 over F2.
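The system above is easy to check by plugging in the solution corresponding to 35 = 5 · 7. In the sketch below, arithmetic over F2 uses the bit operators ^ (XOR) and & (AND), and the names d2, d3 stand for the carries c′2, c′3 of the text:

```python
# candidate solution: P = 5, i.e. (p2, p1, p0) = (1, 0, 1),
# and Q = 7, i.e. (q2, q1, q0) = (1, 1, 1)
p0, p1, p2 = 1, 0, 1
q0, q1, q2 = 1, 1, 1

# first addition: S = p0*Q + 2*p1*Q, with carries c1, c2, c3
s0 = p0 & q0
s1 = (p0 & q1) ^ (p1 & q0)
c1 = p0 & p1 & q0 & q1
s2 = c1 ^ (p0 & q2) ^ (p1 & q1)
c2 = (c1 & p0 & q2) ^ (c1 & p1 & q1) ^ (p0 & p1 & q1 & q2)
s3 = c2 ^ (p1 & q2)
c3 = c2 & p1 & q2
s4 = c3

# second addition: N = S + 4*p2*Q, with carries d2 = c'2, d3 = c'3
d2 = s2 & p2 & q0
d3 = (s3 & d2) ^ (s3 & p2 & q1) ^ (d2 & p2 & q1)

# the constraints coming from the binary expansion of N = 35
assert s0 == 1 and s1 == 1
assert s2 ^ (p2 & q0) == 0
assert s3 ^ d2 ^ (p2 & q1) == 0
assert s4 ^ d3 ^ (p2 & q2) == 0
assert (s4 & d3) ^ (s4 & p2 & q2) ^ (d3 & p2 & q2) == 1

# sanity check over the integers: S really is (p0 + 2*p1) * Q
S = sum(b << i for i, b in enumerate([s0, s1, s2, s3, s4]))
assert S == (p0 + 2 * p1) * (4 * q2 + 2 * q1 + q0)
print("(p2 p1 p0, q2 q1 q0) = (101, 111) solves the system for N = 35")
```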

Cryptanalysis of DES We saw in Chapter 5 the bitslice description of the DES algorithm. In this description, the S-boxes are given as logical formulas or, equivalently, as polynomial expressions over F2. Since the other components of the DES algorithm are linear, the DES can in fact be written as a sequence of polynomial evaluations. Thus, by substituting the plaintext and ciphertext values he knows into these evaluations, a cryptanalyst may describe the key recovery problem of DES as a polynomial system of equations. In order to limit the total degree of the system and preserve its sparsity, it is convenient to introduce additional variables that represent the intermediate values encountered in this Feistel scheme.
From these two simple examples, we see that multivariate polynomial systems of equations can be used very easily to describe many cryptanalytic tasks. As a consequence, it is important to study whether such multivariate systems of equations are easy to solve and to understand the key parameters that govern the complexity of solving such systems.

11.1 General framework
Throughout the rest of this chapter, K denotes an arbitrary field and K̄ its algebraic closure. As a final goal in this chapter, we would like to formalize
© 2009 by Taylor and Francis Group, LLC
Polynomial systems and Gröbner base computations          339

the resolution of a system of m algebraic equations in n unknowns:

    f1 (x1 , · · · , xn ) = 0
           · · ·                                         (11.3)
    fm (x1 , · · · , xn ) = 0
 

More precisely, we would like to determine whether such a system has solutions
and, if so, whether there are finitely or infinitely many. Finally, we would like to
compute some solutions and, when the number of solutions is finite, possibly
list all solutions. We already know that, even when considering univariate
polynomials, the notion of roots of polynomials is a delicate one. Indeed, over
the real field R, similar polynomials may have distinct behaviors; for example,
x² − 1 has two roots, while x² + 1 has none. Thankfully, this first difficulty
vanishes when the polynomials are considered over the algebraic closure of the
real field, i.e., the field C of complex numbers. There, both polynomials have
two roots. For this reason, we systematically consider roots of an algebraic
system as in Equation (11.3) as defined over the algebraic closure K̄.
Looking at our system of equations, intuition says that we should expect
a finite number of solutions when the number of equations is equal to the
number of unknowns, m = n. With more unknowns than equations, there will
be degrees of freedom left and we expect an infinite number of solutions. With
more equations than unknowns, we expect the equations to be incompatible
and no solutions to exist. Of course, we already know from the simple case
of linear systems of equations that this is not always true. For example, the
following system of 2 linear equations in three unknowns has no solution:

    x1 + x2 + x3 = 0
    x1 + x2 + x3 = 1                                     (11.4)

To also give a counter-example for the other case, let us consider the following
system of 3 linear equations in two unknowns:

    x1 + x2 = 0
    x1 = 1                                               (11.5)
    x2 = −1

This system clearly has a solution, namely (x1, x2) = (1, −1). In the case of
linear systems, the algorithms from Chapter 3 can settle all these issues. With
general systems, the situation is more complicated and we need to introduce
some additional mathematical notions. Comparing the systems given by
Equations (11.3) and (11.4), we first see a difference in the way the equations
are written. With linear systems, it is convenient to have a linear combination
of the unknowns on the left-hand side of each equation and a constant on the
right-hand side. With algebraic systems, it is often considered preferable to
shift the constants to the left-hand sides and incorporate them into the
polynomials f1, . . . , fm.
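The consistency questions raised by systems (11.4) and (11.5) can be settled mechanically with the elimination techniques of Chapter 3. The sketch below (illustrative code, with hypothetical function names and a None-for-inconsistent convention) runs Gaussian elimination over the rationals on both augmented matrices:

```python
from fractions import Fraction

def solve_linear(aug):
    """Gaussian elimination on an augmented matrix [A | b].

    Returns None when the system is incompatible, otherwise one
    particular solution (free variables, if any, are set to 0)."""
    rows = [[Fraction(x) for x in row] for row in aug]
    m, n = len(rows), len(rows[0]) - 1
    pivots = []
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(m):
            if i != r and rows[i][c] != 0:
                factor = rows[i][c]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    # A zero row with a non-zero right-hand side signals incompatibility.
    for i in range(r, m):
        if rows[i][n] != 0:
            return None
    sol = [Fraction(0)] * n
    for i, c in enumerate(pivots):
        sol[c] = rows[i][n]
    return sol

# System (11.4): x1 + x2 + x3 = 0 and x1 + x2 + x3 = 1 -- no solution.
print(solve_linear([[1, 1, 1, 0], [1, 1, 1, 1]]))        # None
# System (11.5): x1 + x2 = 0, x1 = 1, x2 = -1 -- solvable although m > n.
print(solve_linear([[1, 1, 0], [1, 0, 1], [0, 1, -1]]))  # [Fraction(1, 1), Fraction(-1, 1)]
```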


11.2 Bivariate systems of equations
Since the issue of solving general systems of equations is quite complicated
and since we already know how to solve univariate polynomial equations,
we start by considering the reasonably simple case of bivariate systems of
equations before turning to the general case.
First, if we are given a single polynomial f(x, y) in two unknowns, we can
prove that this polynomial has an infinite number of roots over the algebraic
closure K̄. For this purpose, let us write:

    f(x, y) = Σ_{i=0}^{dx} f^(i)(y) x^i.                 (11.6)

Furthermore, let g(y) denote the greatest common divisor of (f^(1), · · · , f^(dx)),
omitting f^(0). This polynomial g has a finite number of roots, possibly zero.
For each root ν of g, we see that f(x, ν) is a constant polynomial and thus does
not have any root, unless f^(0)(ν) = 0. For any value ν with g(ν) ≠ 0, f(x, ν)
is a non-constant polynomial and its number of roots (with multiplicities)
is equal to its degree. Since K̄ is always an infinite field, f necessarily has
infinitely many roots: at least one for all possible values of y but the
finitely many roots of g.
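This argument can be illustrated numerically. The sketch below uses the sample polynomial f(x, y) = y x² + y² x + 1 (which also appears in Figure 11.1), for which g(y) = y; for randomly sampled values y0 outside the roots of g, the quadratic formula over C exhibits a root in x:

```python
import cmath
import random

# Sample polynomial: f(x, y) = y*x^2 + y^2*x + 1.  Its leading coefficient
# in x is y, so g(y) = gcd(y, y^2) = y and only y = 0 makes f(x, y0) constant.
def f(x, y):
    return y * x**2 + y**2 * x + 1

random.seed(1)
for _ in range(100):
    y0 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    # Quadratic formula for y0*x^2 + y0^2*x + 1 = 0 over C.
    disc = cmath.sqrt(y0**4 - 4 * y0)
    x0 = (-y0**2 + disc) / (2 * y0)
    assert abs(f(x0, y0)) < 1e-6, "every sampled y0 should yield a root in x"
print("a root in x was found for each of the 100 sampled values of y")
```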
After considering a single polynomial, the next step is to look at a system
of two algebraic equations, f1(x, y) = f2(x, y) = 0. Proceeding as above, we
can write:

    f1(x, y) = Σ_{i=0}^{d1,x} f1^(i)(y) x^i  and  f2(x, y) = Σ_{i=0}^{d2,x} f2^(i)(y) x^i,     (11.7)

thus viewing f1 and f2 as polynomials in K[y][x]. With this view, it is natural
to compute the GCD of f1 and f2 using Euclid's algorithm for univariate
polynomials in x. Note that we need to be careful because K[y] is a ring and
not a field. To avoid this complication, one option is to allow rational fractions
from K(y) as coefficients of polynomials in x during the GCD computations.
At the end of the computation, we obtain a polynomial in x with coefficients
in K(y). Two cases are possible: either this polynomial is of degree zero in x
or it is of higher degree. When the degree is positive, we learn that f1 and
f2 are both multiples of another polynomial in x and y. Thus, we are back
to the case of a single polynomial. When the degree in x is 0, the GCD with
coefficients in K(y) is 1. Thus, at first, one may expect that the system has
no roots. However, at that point, we need to clear the denominators in the
relation and obtain:

    µ(y) f1(x, y) + ν(y) f2(x, y) = g(y).                (11.8)


Since, at this point, g no longer contains the variable x, we say that we have
eliminated x from the algebraic system of equations.
An example of this approach is given in Figure 11.1. This example is quite
simple; however, in general, due to the need to clear denominators, using
GCD computations for the purpose of elimination is somewhat cumbersome.
Thankfully, there exists a different approach which always yields the correct
result and simply requires computing the determinant of a square matrix
whose coefficients are univariate polynomials. This approach is the computa-
tion of resultants and is addressed in the next section.

11.2.1 Resultants of univariate polynomials

To understand how resultants work, we start with univariate polynomials.
Let g1(x) and g2(x) be two monic polynomials in K[x]; we define the resultant
Res(g1, g2) as:

    Res(g1, g2) = Π_{α root of g1 in K̄ (with multiplicity)} g2(α).        (11.9)

Splitting g2 into a product of linear factors over K̄, we see that:

    Res(g1, g2) = Π_{α root of g1 in K̄} Π_{β root of g2 in K̄} (α − β),     (11.10)

where both products count roots with multiplicity. As a consequence,
exchanging the two products and replacing β − α by α − β, we find that:

    Res(g1, g2) = (−1)^(deg(g1) deg(g2)) Res(g2, g1).                       (11.11)

Moreover, Res(g1, g2) is 0 if and only if g1 and g2 share at least one common
root in K̄.
A very important fact is that computing the resultant of g1 and g2 does
not require computing the roots of either polynomial. Instead, it suffices to
construct a simple matrix formed from the coefficients of both polynomials
and to compute its determinant using classical linear algebra techniques. If
d1 is the degree of g1 and d2 the degree of g2, we construct a square matrix
of dimension d1 + d2. Writing:

    g1(x) = Σ_{i=0}^{d1} g1^(i) x^i  and  g2(x) = Σ_{i=0}^{d2} g2^(i) x^i,     (11.12)


As a sample system of bivariate equations, let us define:

    f1(x, y) = y · x³ + (2y² + y) · x² + (y³ + y² + 1) · x + (y + 1)  and
    f2(x, y) = y · x⁴ + 2y² · x³ + (y³ + 1) · x² + y · x.

Dividing f1 and f2 by y, we obtain unitary polynomials g1 and g2 in x whose
coefficients are rational fractions in y. Using Euclid's GCD Algorithm 2.1
produces the following sequence of polynomials of decreasing degree in x:

    g3(x, y) = (g2 − (x − 1) g1)/(y + 1) = x² + y · x + 1/y,
    g4(x, y) = g1 − (x + y + 1) · g3 = 0.

As a consequence, f1 and f2 are both multiples of g3. It is even possible to
clear the denominator and write:

    f1(x, y) = (x + y + 1) · (y x² + y² x + 1)  and
    f2(x, y) = (x² + y x) · (y x² + y² x + 1).

Thus, solutions of the original system are either roots of y x² + y² x + 1 or
solutions of the lower degree system given by:

    h1(x, y) = x + y + 1  and  h2(x, y) = x² + y x.

Applying Euclid's algorithm once more, we find that:

    h3(x, y) = h2 − (x − 1) h1 = y + 1.

Thus, any solution to this system of equations has y = −1. Substituting
this value back into h1 and h2, we find that in addition x = 0. Since
(x, y) = (0, −1) is not a root of y x² + y² x + 1, this illustrates the fact that
in the general case, the set of roots of algebraic systems of equations can
be decomposed into a union of simpler sets. The reader interested in the
theory behind these decompositions should look up the notion of primary
decomposition in a textbook about algebraic geometry, such as [Eis95].
For further characterization of the roots of y x² + y² x + 1, it is useful to
remark that this polynomial is symmetric in x and y. As a consequence,
when (x0, y0) is a root, so is (y0, x0). This polynomial can be rewritten as
s p + 1, where s = x + y and p = x y are the elementary symmetric polynomials
in two variables. Note that there is a bijection between sets {(x0, y0), (y0, x0)}
and pairs (s0, p0). In the forward direction, one computes the sum and
product. In the backward direction, one solves the equation X² − s0 X + p0 = 0,
whose roots are x0 and y0.

Figure 11.1: A sample algebraic system in two unknowns
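The identities stated in the figure are easy to verify mechanically. The following sketch (illustrative code) checks both factorizations on random integer points and confirms that (0, −1) solves the h-system without being a root of the common factor:

```python
import random

def f1(x, y): return y*x**3 + (2*y**2 + y)*x**2 + (y**3 + y**2 + 1)*x + (y + 1)
def f2(x, y): return y*x**4 + 2*y**2*x**3 + (y**3 + 1)*x**2 + y*x

def common_factor(x, y): return y*x**2 + y**2*x + 1

random.seed(0)
for _ in range(50):
    x, y = random.randint(-10**6, 10**6), random.randint(-10**6, 10**6)
    # The two factorizations from the figure, checked on random integer points.
    assert f1(x, y) == (x + y + 1) * common_factor(x, y)
    assert f2(x, y) == (x**2 + y*x) * common_factor(x, y)

# (x, y) = (0, -1) solves h1 = h2 = 0 but is not a root of the common factor.
assert 0 + (-1) + 1 == 0 and 0**2 + (-1)*0 == 0
assert common_factor(0, -1) == 1
print("factorizations and the isolated solution (0, -1) check out")
```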


we construct the matrix:

                ⎛ g1^(d1)  g1^(d1−1)  g1^(d1−2)  · · ·  g1^(0)     0      · · ·    0     ⎞
                ⎜    0      g1^(d1)   g1^(d1−1)  · · ·  g1^(1)   g1^(0)   · · ·    0     ⎟
                ⎜    ⋮         ⋱          ⋱        ⋱       ⋮        ⋮       ⋱      ⋮     ⎟
                ⎜    0         0          0      · · ·  g1^(d1)   · · ·          g1^(0)  ⎟
 MS(g1, g2) =   ⎜ g2^(d2)  g2^(d2−1)  g2^(d2−2)  · · ·  g2^(0)     0      · · ·    0     ⎟   (11.13)
                ⎜    0      g2^(d2)   g2^(d2−1)  · · ·  g2^(1)   g2^(0)   · · ·    0     ⎟
                ⎜    ⋮         ⋱          ⋱        ⋱       ⋮        ⋮       ⋱      ⋮     ⎟
                ⎝    0         0          0      · · ·  g2^(d2)   · · ·          g2^(0)  ⎠

This matrix, called the Sylvester matrix of g1 and g2, can be interpreted in the
following way: each column corresponds to some power of x, x^(d1+d2−1) for the
first column, x^(d1+d2−2) for the second and so on, down to 1 for the last column.
Each row corresponds to a polynomial: the first d2 rows are given by x^(d2−1) g1,
. . . , x g1, g1, and the next d1 rows are given by x^(d1−1) g2, . . . , x g2, g2. The
resultant of g1 and g2 is obtained as the determinant of the Sylvester matrix:

    Res(g1, g2) = det(MS(g1, g2)).                       (11.14)

The proof of this important result is out of the scope of this book and can,
for example, be found in [Lan05, Chapter IV]. Note that using the Sylvester
matrix, the resultant is also defined for non-monic polynomials. In this
case, we have:

    Res(g1, g2) = (g1^(d1))^d2 (g2^(d2))^d1 Π_{α root of g1 in K̄} Π_{β root of g2 in K̄} (α − β),

where both products again count roots with multiplicity.
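The construction of Equations (11.13) and (11.14) fits in a few lines of code. The sketch below (illustrative helper names; coefficient lists run from the leading coefficient down to the constant term) builds the Sylvester matrix and takes an exact determinant over the rationals:

```python
from fractions import Fraction

def sylvester(g1, g2):
    """Sylvester matrix of g1 and g2 given as coefficient lists
    [leading, ..., constant]; its dimension is deg(g1) + deg(g2)."""
    d1, d2 = len(g1) - 1, len(g2) - 1
    n = d1 + d2
    rows = []
    for i in range(d2):                      # d2 shifted copies of g1
        rows.append([0] * i + list(g1) + [0] * (n - d1 - 1 - i))
    for i in range(d1):                      # d1 shifted copies of g2
        rows.append([0] * i + list(g2) + [0] * (n - d2 - 1 - i))
    return rows

def det(m):
    """Exact determinant over the rationals by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in m]
    n = len(m)
    result = Fraction(1)
    for c in range(n):
        pivot = next((i for i in range(c, n) if m[i][c] != 0), None)
        if pivot is None:
            return 0
        if pivot != c:
            m[c], m[pivot] = m[pivot], m[c]
            result = -result
        result *= m[c][c]
        for i in range(c + 1, n):
            factor = m[i][c] / m[c][c]
            for j in range(c, n):
                m[i][j] -= factor * m[c][j]
    return result

def resultant(g1, g2):
    return det(sylvester(g1, g2))

# Res(x^2 - 1, x - 2) = g2(1) * g2(-1) = (1 - 2) * (-1 - 2) = 3.
print(resultant([1, 0, -1], [1, -2]))        # 3
# Swapping the arguments multiplies by (-1)^(deg g1 * deg g2) = (-1)^2 = 1.
print(resultant([1, -2], [1, 0, -1]))        # 3
# A shared root (x = 1) forces the resultant to vanish.
print(resultant([1, 0, -1], [1, -1]))        # 0
```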

Prime factors of the resultant    When g1 and g2 are monic polynomials
with integer coefficients, each prime p that divides the resultant is such that
g1 and g2 have at least one common root in the algebraic closure of Fp. Note,
however, that when a larger power of a prime, say p^k, divides the resultant,
it does not imply that the polynomials have a common root modulo p^k.
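As an illustration (a hypothetical example, not from the text): Res(x² + 1, x − 3) = (i − 3)(−i − 3) = 10 = 2 · 5, so common roots should appear exactly modulo 2 and modulo 5. In this small case the common roots even lie in Fp itself, so a brute-force search suffices:

```python
# Res(x^2 + 1, x - 3) = 10 = 2 * 5; both polynomials are monic, so common
# roots over the closure of F_p are expected exactly for p = 2 and p = 5.
def common_root_mod(p):
    """Elements of F_p that are roots of both x^2 + 1 and x - 3."""
    return [a for a in range(p) if (a * a + 1) % p == 0 and (a - 3) % p == 0]

print(common_root_mod(2))   # [1]: x^2 + 1 = (x + 1)^2 and x - 3 = x + 1 mod 2
print(common_root_mod(5))   # [3]: 3^2 + 1 = 10 = 0 mod 5
print(common_root_mod(3))   # []: 3 does not divide the resultant
```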

11.2.2 Application of resultants to bivariate systems

When g1 and g2 are bivariate polynomials in x and y, we can also use
resultants to solve the corresponding system of equations. To do this, we need,
as when considering GCDs, to view these polynomials as polynomials in x
whose coefficients are polynomials in y, i.e., polynomials in K[y][x]. With this
view, the Sylvester matrix, given by Equation (11.13), is filled with univariate

polynomials in y and its determinant is another polynomial in y. In order to
specify which variable is eliminated from the system of equations during the
computation of the resultant, we say that we have computed the resultant of
g1 and g2 in the variable x. This resultant is a polynomial in y. Three cases
are possible:
• If the resultant is the zero polynomial, the two polynomials g1 and g2
have a common factor. Using the example of the polynomials f1 and f2
from Figure 11.1, we may check that their Sylvester matrix (in x) is:

    ⎛ y   2y² + y   y³ + y² + 1   y + 1         0             0       0     ⎞
    ⎜ 0      y      2y² + y       y³ + y² + 1   y + 1         0       0     ⎟
    ⎜ 0      0         y          2y² + y       y³ + y² + 1   y + 1   0     ⎟
    ⎜ 0      0         0             y          2y² + y       y³ + y² + 1   y + 1 ⎟
    ⎜ y   2y²       y³ + 1           y             0             0       0  ⎟
    ⎜ 0      y      2y²           y³ + 1           y             0       0  ⎟
    ⎝ 0      0         y          2y²           y³ + 1           y       0  ⎠

Moreover, the determinant of this matrix is zero. Indeed, multiplying
MS(f1, f2) on the left by the vector (0, −1, −y, 0, 0, 1, y + 1) yields the
null vector.
• If the resultant is a non-zero constant polynomial, we learn that the
system of equations is incompatible over the field of complex numbers.
However, assuming that f1 and f2 are monic, then modulo prime divisors
of the resultant they have a common factor. If f1 and f2 are not monic,
the conclusion only holds for prime factors of the resultant which divide
none of the leading coefficients.
• If the resultant is a polynomial of higher degree, say R(y), then the y
coordinate of any solution of the system of equations given by f1 and
f2 is a root of R. However, the converse is not true.

Using resultants with more variables
In theory, using resultants with more variables allows us to solve arbitrary
systems of equations. However, with n polynomials in n unknowns, we see
that the computation quickly becomes difficult. A first problem is that we
have too many possible choices: we can choose to eliminate any of the n
variables for any of the n(n − 1)/2 pairs of polynomials. Moreover, even if we
decide to eliminate a specific variable, it is unclear whether we should compute
the resultant of all pairs of polynomials or only keep a fraction of them. With
the next variable to eliminate, we have even more options and do not know
how to proceed. A second problem is that the degrees of the polynomials we
construct during computations of resultants grow too quickly.
As a consequence, resultants can only be used to solve systems of polyno-
mial equations with a reasonably small number of unknowns, and a different
approach is needed to address more general cases.
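In the bivariate case, however, the elimination step is easy to automate. The sketch below (illustrative helper names; integer coefficients only) computes resultants in x for polynomials whose coefficients are themselves polynomials in y, and recovers both elimination results of Figure 11.1:

```python
# Coefficients are polynomials in y, encoded as lists [c0, c1, ...]
# standing for c0 + c1*y + c2*y^2 + ...
def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, c in enumerate(a):
        for j, d in enumerate(b):
            out[i + j] += c * d
    return out

def pneg(a):
    return [-c for c in a]

def trim(a):
    """Drop trailing zero coefficients, keeping at least one entry."""
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def pdet(m):
    """Determinant of a matrix of polynomials, by cofactor expansion
    along the first row (fine for the small matrices used here)."""
    if len(m) == 1:
        return m[0][0]
    total = [0]
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        term = pmul(entry, pdet(minor))
        total = padd(total, term if j % 2 == 0 else pneg(term))
    return total

def resultant_in_x(g1, g2):
    """Resultant in x; g1, g2 are lists of y-polynomials ordered from
    the leading coefficient in x down to the constant one."""
    d1, d2 = len(g1) - 1, len(g2) - 1
    n = d1 + d2
    zero = [0]
    rows = [[zero] * i + g1 + [zero] * (n - d1 - 1 - i) for i in range(d2)]
    rows += [[zero] * i + g2 + [zero] * (n - d2 - 1 - i) for i in range(d1)]
    return pdet(rows)

# h1 = x + (y + 1) and h2 = x^2 + y*x from Figure 11.1:
h1 = [[1], [1, 1]]
h2 = [[1], [0, 1], [0]]
print(trim(resultant_in_x(h1, h2)))   # [1, 1], i.e., y + 1, matching h3

# f1 and f2 from Figure 11.1: their resultant in x is the zero polynomial,
# confirming that they share a common factor.
f1 = [[0, 1], [0, 1, 2], [1, 0, 1, 1], [1, 1]]
f2 = [[0, 1], [0, 0, 2], [1, 0, 0, 1], [0, 1], [0]]
print(trim(resultant_in_x(f1, f2)))   # [0]
```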


11.3 Definitions: Multivariate ideals, monomial orderings and Gröbner bases
We now turn to the general case of multivariate systems of equations as in
the system of Equation (11.3). Clearly, any root of this system is also a root of
all polynomials in the ideal I generated by f1, . . . , fm. Recall that this ideal
is given by:

    I = (f1, · · · , fm) = { Σ_{i=1}^{m} hi fi | (h1, · · · , hm) ∈ K[x1, · · · , xn]^m }.   (11.16)

Given such an ideal I, we define the affine algebraic variety of I as the set
V(I) of the common roots of all polynomials in I over the algebraic closure K̄:

    V(I) = {(α1, · · · , αn) | ∀ f ∈ I : f(α1, · · · , αn) = 0}.      (11.17)

Clearly, V(I) does not depend on the exact choice of generating polynomials
f1, . . . , fm, but only on the ideal I. This is going to be a key property
for solving algebraic systems. In fact, the goal of the present chapter can
informally be rephrased as: given a set of generating polynomials for an
ideal I, find a new set of generating polynomials better suited to determining
V(I) and its properties.
Conversely, we can ask whether I is uniquely determined by V(I). More
precisely, given a set of points V, let us define:

    I(V) = {f ∈ K[x1, · · · , xn] | ∀ (α1, · · · , αn) ∈ V : f(α1, · · · , αn) = 0}.

We see that I(V) is an ideal and that I(V(I)) ⊃ I. However, equality does
not hold in general. To give an example with a single variable, we see that
I(V((x²))) = (x). Indeed, the only (double) root of x² is 0 and it is also a
root of x. The generalization of this example relies on the definition of the
radical of an ideal. Given an ideal I, we define its radical √I as:

    √I = { f ∈ K[x1, · · · , xn] | ∃ t ∈ N : f^t ∈ I }.       (11.19)

A famous theorem of Hilbert, called the Nullstellensatz, relates I(V(I))
and √I:

If K is an algebraically closed field and I is an ideal of K[x1, · · · , xn], then:

    I(V(I)) = √I.

PROOF    See [CLO07, Chapter 4].


11.3.1 A simple example: Monomial ideals

We recall that a monomial in a polynomial ring K[x1, · · · , xn] is a polyno-
mial of the form λ x1^α1 · · · xn^αn. For conciseness, we denote it by λX^α, where
α = (α1, · · · , αn). We say that an ideal I is a monomial ideal if I can be
generated by a family of monomials, i.e., I = (X^α(1), · · · , X^α(m)). Note that
since K is a field, the constants appearing in the monomials can be removed
after multiplication by their respective inverses.
Monomial ideals are easy to manipulate. First, a polynomial f belongs to a
monomial ideal I = (X^α(1), · · · , X^α(m)) if and only if all the monomials that
occur with a non-zero coefficient in f also belong to I. Moreover, a monomial
λX^β, with λ ≠ 0, is in I if and only if at least one of the monomials X^α(i)
divides X^β. Furthermore, testing divisibility for monomials is very simple:
indeed, X^α divides X^β if and only if, for all 1 ≤ i ≤ n, we have αi ≤ βi.
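These membership and divisibility tests translate directly into code. A minimal sketch, with exponent vectors represented as tuples (the ideal below is an illustrative choice, not from the text):

```python
def monomial_divides(alpha, beta):
    """X^alpha divides X^beta iff alpha_i <= beta_i for every i."""
    return all(a <= b for a, b in zip(alpha, beta))

def monomial_in_ideal(beta, generators):
    """Membership of X^beta in the monomial ideal generated by `generators`."""
    return any(monomial_divides(alpha, beta) for alpha in generators)

def poly_in_monomial_ideal(monomials, generators):
    """A polynomial lies in a monomial ideal iff each of its monomials does."""
    return all(monomial_in_ideal(beta, generators) for beta in monomials)

# Sample ideal I = (x^2, x*y) in K[x, y], exponent vectors (2, 0) and (1, 1):
I = [(2, 0), (1, 1)]
print(monomial_in_ideal((3, 1), I))                 # True: x^2 divides x^3*y
print(monomial_in_ideal((0, 4), I))                 # False: y^4 escapes both generators
print(poly_in_monomial_ideal([(2, 1), (1, 2)], I))  # True: x^2*y + x*y^2 lies in I
```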

11.3.2 General case: Gröbner bases

Since the case of monomial ideals is very simple, it is useful to link the gen-
eral case with this special case. For this purpose, we introduce order relations
between monomials. We call a monomial ordering any total order relation ≻
on the set of monomials of K[x1, · · · , xn] which satisfies the following proper-
ties:

• Compatibility    For any triple of monomials (X^α, X^β, X^γ):

    X^α ≻ X^β implies X^α X^γ ≻ X^β X^γ.

• Well-ordering    Any set S of monomials contains a smallest element
with respect to ≻; this element is denoted min≻(S).

By abuse of notation, when X^α ≻ X^β, we may also write α ≻ β. With this
convention, the compatibility property becomes:

• Compatibility    For any triple (α, β, γ):

    α ≻ β implies α + γ ≻ β + γ.

To better understand the well-ordering property, let us introduce two order
relations on monomials:

• The lexicographic ordering ≻lex is defined by: α ≻lex β if and only if
there exists an index 1 ≤ i0 ≤ n such that:

  – For all i < i0, αi = βi.


  – And αi0 > βi0.

• The reverse lexicographic1 order relation ≻revlex is defined by: α ≻revlex
β if and only if there exists an index 1 ≤ i0 ≤ n such that:

