Helpful Information: Definitions, Explanations, Etc.

Some definitions (examples are listed with each definition):

field--a set with two binary operations, multiplication and addition, such that both are associative and commutative, multiplication is distributive over addition, a multiplicative identity called 1 exists, an additive identity called 0 exists, every element has an additive inverse, and every element except 0 has a multiplicative inverse.

For those familiar with rings and Abelian groups, a field may also be defined as a commutative ring with a multiplicative identity in which every nonzero element is invertible, or as a set that is an Abelian group under addition, whose elements excluding 0 form an Abelian group under multiplication, and in which multiplication is distributive over addition.

irreducible polynomial--a polynomial p is irreducible if it cannot be written as a product of two polynomials of lower positive degree, which means that the only factors of degree lower than p's are constants (in the case of F2[x], the only nonzero constant is 1). This is very important, as irreducible polynomials play a role similar to that of the prime numbers in the integers: every nonzero polynomial over a field is either irreducible itself or can be written uniquely as a product of irreducible polynomials, up to multiplication by constants.
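As an optional illustration of this definition, here is one way the idea might be coded in Java (the class and method names are made up for this sketch). A polynomial over F2 is stored as a bit mask, with bit i holding the coefficient of x^i, and irreducibility is tested by trial division against every polynomial of lower positive degree:

```java
// Hypothetical sketch: testing irreducibility over F2 by trial division.
// A polynomial is an int bit mask: bit i = coefficient of x^i.
public class IrreducibleCheck {

    static int degree(int p) {
        return 31 - Integer.numberOfLeadingZeros(p); // p must be nonzero
    }

    // Remainder of a divided by b over F2 (XOR replaces subtraction).
    static int polyMod(int a, int b) {
        int db = degree(b);
        while (a != 0 && degree(a) >= db) {
            a ^= b << (degree(a) - db);
        }
        return a;
    }

    // p (degree >= 1) is irreducible iff nothing of degree 1..deg(p)/2 divides it.
    static boolean isIrreducible(int p) {
        int d = degree(p);
        for (int q = 2; degree(q) <= d / 2; q++) {
            if (polyMod(p, q) == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isIrreducible(0b10011)); // x^4 + x + 1 -> true
        System.out.println(isIrreducible(0b10101)); // x^4 + x^2 + 1 = (x^2+x+1)^2 -> false
    }
}
```

Any divisor pairs with a cofactor of complementary degree, so it suffices to test divisors up to half the degree of p.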


relatively prime--two polynomials are relatively prime if they have no non-trivial (non-constant) factors in common, much as two integers are relatively prime when their only common factor is 1.


the notation F2[x]/&lt;p&gt;--when p is an irreducible polynomial of degree n, this produces a new field with 2^n elements. Essentially, p is set equal to 0, and all operations on polynomials in F2[x] are then carried out using this relation p = 0. For example, if p = x^4 + x + 1, then x^4 = x^4 + p = x + 1, since adding p changes nothing.
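To make the relation p = 0 concrete, here is a small hypothetical Java sketch of multiplication in F2[x]/&lt;x^4 + x + 1&gt;, again storing a polynomial as a bit mask (bit i = coefficient of x^i). Whenever an x^4 term appears during the shift-and-add multiply, it is replaced by x + 1:

```java
// Hypothetical sketch: multiplication in F2[x]/<x^4 + x + 1>.
public class QuotientFieldDemo {

    static final int P = 0b10011; // x^4 + x + 1

    // Multiply a and b in F2[x], reducing modulo P as we go.
    static int mulMod(int a, int b) {
        int result = 0;
        while (b != 0) {
            if ((b & 1) != 0) result ^= a; // addition over F2 is XOR
            b >>= 1;
            a <<= 1;                        // multiply a by x
            if ((a & 0b10000) != 0) a ^= P; // replace x^4 by x + 1 (p = 0)
        }
        return result;
    }

    public static void main(String[] args) {
        // x * x^3 = x^4 = x + 1 in this field
        System.out.println(Integer.toBinaryString(mulMod(0b10, 0b1000))); // prints 11
    }
}
```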


Euclidean algorithm--this algorithm is used to find the gcd (greatest common divisor) of two elements, and, alternatively, to find the inverse of an element. It works as follows:

Take two polynomials, f and g. Divide f by g, producing a remainder, r1. Now divide g by r1 and get a second remainder, r2. Then divide r1 by r2 and get a third remainder, r3, and so forth, until one ultimately reaches a remainder of 0 (this must happen eventually; for a proof, the reader may consult any number theory textbook). The last non-zero remainder is the gcd of f and g.

Now, in order to find the inverse of a polynomial, one takes f and g to be the polynomial whose inverse is sought and the irreducible polynomial from F2[x]/&lt;irr&gt;, respectively, and uses the results of this repeated dividing to find the inverse. Because g is irreducible, the gcd of f and g is 1 (or any constant, but since we are working over F2, the only nonzero constant is 1), and the Euclidean algorithm tells us there exist two polynomials a and b such that a * f + b * g = 1, and helps us to find these polynomials by keeping track of all the multiplications and divisions along the way. At each step of the division, there will be some ai and bi such that ai * f + bi * g = ri, until finally ri = 1.

So we begin by dividing f by g, producing a remainder, r1, and a quotient, q1. Then f = g * q1 + r1, and so r1 = 1 * f - q1 * g. In other words, a1 = 1 and b1 = -q1 (over F2, -q1 is simply q1). Now divide g by r1 (continuing as specified above) to get a second remainder and quotient. Now we have g = r1 * q2 + r2, and so r2 = g - q2 * r1 = g - q2 * (1 * f - q1 * g) (from above), and so after gathering terms, r2 = -q2 * f + (1 + q2 * q1) * g. Thus, a2 = -q2 and b2 = 1 + q2 * q1. Now we simply keep going in this fashion, updating ai and bi at each step by substituting in the previous values, until we finally reach ri = 1. When that point is reached, we have ai * f + bi * g = 1, but since g is equivalent to 0 in the field we are considering, we really have ai * f = 1, so that ai is the inverse of f.

Here is an example:

Take f = x^2 + 1 and g = x^4 + x + 1 (g is irreducible). Then f * (x^2 + 1) + x = g, so r1 = x, a1 = x^2 + 1, and b1 = 1. Next, divide f by r1, so that r1 * x + 1 = f. Then r2 = 1, a2 = x * (x^2 + 1) + 1 = x^3 + x + 1, and b2 = x. But since g = 0 in the field F2[x]/&lt;x^4 + x + 1&gt;, we really just have a2 * f = 1, so a2 is the inverse of f.
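The bookkeeping above might be coded as follows -- a hypothetical Java sketch using bit masks for polynomials over F2 (bit i = coefficient of x^i). It tracks only the ai, since only they are needed for the inverse:

```java
// Hypothetical sketch: extended Euclidean algorithm over F2[x],
// maintaining the invariant ai * f = ri (mod g) until ri = 1.
public class PolyInverse {

    static int degree(int p) { return 31 - Integer.numberOfLeadingZeros(p); }

    // Quotient and remainder of a divided by b; returns {quotient, remainder}.
    static int[] divMod(int a, int b) {
        int q = 0, db = degree(b);
        while (a != 0 && degree(a) >= db) {
            int shift = degree(a) - db;
            q |= 1 << shift;
            a ^= b << shift; // subtraction over F2 is XOR
        }
        return new int[] { q, a };
    }

    // Carry-less product in F2[x] (no reduction).
    static int mul(int a, int b) {
        int result = 0;
        for (int i = 0; (b >> i) != 0; i++)
            if (((b >> i) & 1) != 0) result ^= a << i;
        return result;
    }

    // Inverse of f modulo the irreducible polynomial g.
    static int inverse(int f, int g) {
        int r0 = g, a0 = 0; // 0 * f = g = 0 (mod g)
        int r1 = f, a1 = 1; // 1 * f = f (mod g)
        while (r1 != 1) {
            int[] qr = divMod(r0, r1);
            int a2 = a0 ^ mul(qr[0], a1); // ai = a(i-2) - qi * a(i-1)
            r0 = r1; a0 = a1;
            r1 = qr[1]; a1 = a2;
        }
        return a1;
    }

    public static void main(String[] args) {
        // f = x^2 + 1 (0b101), g = x^4 + x + 1 (0b10011): inverse is x^3 + x + 1
        System.out.println(Integer.toBinaryString(inverse(0b101, 0b10011))); // prints 1011
    }
}
```

Running this on the worked example recovers a2 = x^3 + x + 1, the inverse of x^2 + 1 found above.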


repeated squaring method--this algorithm speeds up computation of large powers enormously by squaring squares instead of simply multiplying a polynomial (or number) by itself many times. Far fewer computations must be performed, because each time the power will be doubled, instead of merely incremented by 1. The number of computations performed this way is on the order of log n, as opposed to being on the order of n. This can make a big difference for very large values of n.

One proceeds in the following fashion. An element, x (a number, a polynomial, whatever... it doesn't matter), is to be raised to the n-th power, where n &gt; 0. If n = 1, then x is the answer and the computation is done. If n = 2, then square x and return the answer. Otherwise, square x and write down x^2. Call the current power computed a, so now a = 2. Now keep squaring until a equals the greatest power of 2 less than or equal to n (call this b), keeping track of each resulting x^a along the way. As an example, if n = 34, b would be 32. Now go back and look at each saved result in reverse order, starting at x^(b/2). If a + b/2 is less than or equal to n, multiply x^a by x^(b/2), so that a increases by b/2. If not, proceed to x^(b/4), and do the same thing, until eventually a = n.

Now, for an easy example in the integers, take 3^20. Instead of calculating 3*3, then 9*3, then 27*3, and so forth, one may calculate it so:
3*3 = 3^2 = 9
9*9 = 3^4 = 81
81*81 = 3^8 = 6561
6561*6561 = 3^16 = 43046721
Now, we don't want 3^32; that's too far. But we know the lower powers of 3, so we use them.
3^16 * 3^4 = 3^20 = 43046721*81 = 3486784401

That was much faster than multiplying it all out 3 by 3 by 3.
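A common variant of the procedure above scans the binary digits of n from the low end, multiplying in each saved square as it is produced rather than saving them all and combining afterwards. A hypothetical Java sketch (names made up):

```java
// Hypothetical sketch: repeated squaring (binary exponentiation) for integers.
// Roughly log2(n) squarings replace n - 1 multiplications.
public class RepeatedSquaring {

    static long pow(long x, int n) {
        long result = 1;
        long square = x;         // successively x^1, x^2, x^4, x^8, ...
        while (n > 0) {
            if ((n & 1) != 0) result *= square; // this power of 2 appears in n
            square *= square;
            n >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(pow(3, 20)); // 3^16 * 3^4 = 3486784401
    }
}
```

For 3^20 = binary 10100, only the x^16 and x^4 squares are multiplied into the result, matching the worked example.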


primitive root--a primitive root of a field is an element x which is a root of x^n - 1, so that x^n = 1, but x^m does not equal 1 in the field for any m with 0 &lt; m &lt; n. For the case interesting to us, n is equal to the number of non-zero elements in F2[x]/&lt;p&gt;, where p is an irreducible polynomial; if p has degree d, that number is 2^d - 1. All the roots of p satisfy the first condition, but not all of them satisfy the second condition. The roots of the equation aren't so hard to find, but finding the primitive roots takes more effort and computation. Primitive roots are important in that they generate all the non-zero elements in the field; that is, every non-zero element is a power of a primitive root.
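One brute-force way to check the second condition, sketched hypothetically in Java for F2[x]/&lt;x^4 + x + 1&gt; (bit masks as before): compute successive powers of an element until 1 appears, and see whether that took all 15 steps.

```java
// Hypothetical sketch: an element of F2[x]/<x^4 + x + 1> is a primitive root
// exactly when its order equals 15, the number of non-zero elements.
public class PrimitiveRootCheck {

    static final int P = 0b10011; // x^4 + x + 1

    static int mulMod(int a, int b) {
        int result = 0;
        while (b != 0) {
            if ((b & 1) != 0) result ^= a;
            b >>= 1;
            a <<= 1;
            if ((a & 0b10000) != 0) a ^= P; // x^4 = x + 1
        }
        return result;
    }

    // Smallest m > 0 with g^m = 1.
    static int order(int g) {
        int power = g, m = 1;
        while (power != 1) {
            power = mulMod(power, g);
            m++;
        }
        return m;
    }

    public static void main(String[] args) {
        System.out.println(order(0b10));  // x: order 15, a primitive root
        System.out.println(order(0b110)); // x^2 + x: order 3, not primitive
    }
}
```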


Links:
Introduction
Specifics About the Project
Index to the Java Programs
The Polynomial Calculator