## The “Injectivity Implies Surjectivity” Trick

October 18, 2008

There is a standard proof technique involving the relationship between injectivity and surjectivity on “finite” structures. I rather like it: for the examples below, it is very difficult to proceed without knowing the trick, even though the solutions themselves are very simple.

Principle A: If $S$ is a finite set and $\phi: S\to S$ is a function, then $\phi$ is injective if and only if $\phi$ is surjective.
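Since everything here is finite, Principle A can even be verified by exhaustive computation. Here is a quick sketch in Python (the small set $S$ and the encoding of functions as dictionaries are my own choices for illustration):

```python
from itertools import product

def is_injective(f, S):
    """f is a dict representing a function S -> S."""
    return len(set(f[s] for s in S)) == len(S)

def is_surjective(f, S):
    return set(f[s] for s in S) == set(S)

# Check Principle A exhaustively over every function S -> S.
S = [0, 1, 2]
for values in product(S, repeat=len(S)):
    f = dict(zip(S, values))
    assert is_injective(f, S) == is_surjective(f, S)
print("injective <=> surjective for all", len(S) ** len(S), "maps")
```

Of course, the principle itself is just counting: an injective image has $|S|$ elements, hence is all of $S$, and conversely.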

Proposition 1: If $\mathbb{F}$ is a finite field and $\phi:\mathbb{F}\to\mathbb{F}$ is a homomorphism of fields, then $\phi$ is an isomorphism.

Proof: $\ker{\phi}$ is an ideal of $\mathbb{F}$, and it cannot be all of $\mathbb{F}$ (since $\phi(1)=1$). A field has no ideals other than $0$ and itself, so $\ker{\phi}=0$ and $\phi$ is injective. By Principle A, $\phi$ is surjective.

Proposition 2: If $G$ is a finite simple group and $\phi:G\to G$ is a homomorphism, then $\phi$ is an isomorphism or $\phi\equiv e$.

Proof: $\ker{\phi}$ is a normal subgroup of $G$, so by simplicity either $\ker{\phi}=G$, in which case $\phi\equiv e$, or $\ker{\phi}=\{e\}$, in which case $\phi$ is injective, hence surjective by Principle A.
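As a toy sanity check (the choice of $\mathbb{Z}/5$, which is simple since $5$ is prime, is mine): every endomorphism of the cyclic group $\mathbb{Z}/5$ is of the form $x\mapsto kx$, and Proposition 2 predicts that each one is either trivial or bijective:

```python
# Every endomorphism of Z/5 (a simple group, since 5 is prime) is
# x -> k*x mod 5 for some k. Proposition 2 says each such map is
# either identically zero or an isomorphism.
n = 5
for k in range(n):
    image = {(k * x) % n for x in range(n)}
    trivial = image == {0}
    bijective = len(image) == n
    assert trivial or bijective
```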

Principle B: If $V$ is a finite dimensional vector space and $\phi:V\to V$ is a linear transformation, then $\phi$ is injective if and only if $\phi$ is surjective.

Proposition 3: If $K$ is a field and $A$ is an integral domain containing $K$ with finite dimension over $K$, then $A$ is a field.

Proof: It suffices to show that every nonzero element has an inverse. Choose any nonzero $a\in A$, and consider the map $\phi: A\to A$ defined by $\phi(x) = ax$. Then $\phi$ is a $K$-linear transformation. Since $A$ is an integral domain, $\phi(x)=0$ forces $x=0$, so $\phi$ is injective, and therefore surjective by Principle B. So there exists some $b\in A$ with $\phi(b)=ab=1$.

Problem: Show that any finite integral domain is a field.
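The problem can be attacked exactly as in Proposition 3, using multiplication maps and Principle A in place of Principle B. A small computational illustration of the trick (the choice of $\mathbb{Z}/7$ as a finite integral domain is mine):

```python
# In a finite integral domain, the map x -> a*x is injective for a != 0
# (no zero divisors), hence surjective by Principle A, so 1 is hit:
# some b satisfies a*b = 1.  Illustrated in Z/7 (an integral domain,
# since 7 is prime).
p = 7
for a in range(1, p):
    image = [(a * x) % p for x in range(p)]
    assert len(set(image)) == p          # injective, hence surjective
    b = image.index(1)                   # a preimage of 1 is an inverse
    assert (a * b) % p == 1
```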

EDIT: As Steven points out in the comments, the following is also useful, and the proof is a good exercise:

Principle C: If $M$ is a Noetherian module, then any surjective homomorphism $\phi: M\to M$ is injective; and if $M$ is an Artinian module, then any injective homomorphism $\phi:M\to M$ is surjective. (Hint: in fact, for any $\phi:M\to M$, if $M$ is Noetherian, then for large enough $n$ we have $\ker{\phi^n} \cap \mathrm{image}\,\phi^n = 0$; if $M$ is Artinian, then for large enough $n$ we have $\ker{\phi^n} + \mathrm{image}\,\phi^n = M$.)

## An Easy Fallacy

October 12, 2008

For a change of pace today, I’ll present an incorrect proof of a simple proposition. The diligent reader can work out the location of the mistake before reading on, where I will present a modified proposition, and correct proof, before discussing the mistake in detail.

Proposition: If $k$ is a field and $c$ is a cardinal less than $|k|$, then a vector space over $k$ cannot be written as a union of $c$ of its proper subspaces.

Questionable Proof: Suppose that $\{V_\alpha\}$ is a collection of $c$ proper subspaces of $V$ which cover $V$. We may assume that no proper subcollection covers $V$ (removing superfluous subspaces from the collection if necessary). This implies that for each $\alpha$ there exists some $v_\alpha \in V_\alpha$ such that $v_\alpha \not\in V_\beta$ for $\beta\neq\alpha$.

Take any $\alpha\neq\beta$, and consider the set $\{v_\alpha + \lambda v_\beta\ |\ \lambda \in k\}$. This set has cardinality $|k|$ (distinct $\lambda$ give distinct vectors, since $v_\beta\neq 0$), and each element lies in some $V_\gamma$. Since there are only $c < |k|$ subspaces in our collection, by the pigeonhole principle two of these elements must lie in the same $V_\gamma$, say $v_\alpha + \lambda_1 v_\beta \in V_\gamma$ and $v_\alpha + \lambda_2 v_\beta \in V_\gamma$ with $\lambda_1\neq\lambda_2$.

Subtracting, we see that $(\lambda_1 - \lambda_2)v_\beta \in V_\gamma$, so $v_\beta \in V_\gamma$. Therefore $\beta = \gamma$. Similarly, we see that $v_\alpha\in V_\gamma$, so $\alpha=\gamma=\beta$, contradiction.

The above proposition is, unfortunately, false. To see this, take $V$ to be a real vector space with countably infinite basis $e_1,e_2,\ldots$, and for each $i$, let $V_i$ be the subspace generated by the $e_j$ with $j\neq i$. Then $\{V_i\}$ covers $V$, since any vector is a finite linear combination of basis elements and hence lies in $V_i$ for any $e_i$ not appearing in it; yet the collection has cardinality $\aleph_0 < |\mathbb{R}|$.

We need to add a condition to the proposition, but this time the proof is correct.

Modified Proposition: If $k$ is a field and $c$ is a cardinal less than $|k|$, then a finite dimensional vector space over $k$ cannot be written as a union of $c$ of its proper subspaces.

Proof: We proceed by induction on the dimension. If $V$ has dimension $1$, then any union of proper subspaces must be the zero space, so the statement is trivial. Now assume that $V$ has dimension $d \geq 2$ and the proposition has been proved for all vector spaces of dimension $d-1$.

First, observe that there are at least $|k|$ subspaces of $V$ of dimension $d-1$: if $e_1,\ldots,e_d$ is a basis for $V$, we can take the subspaces generated by $\lambda e_1 + e_2, e_3,\ldots,e_d$ for $\lambda\in k$, and these are distinct for distinct $\lambda$.

Suppose that we are given a collection $\{V_\alpha\}$ of proper subspaces of $V$, of cardinality $c$, that covers $V$. Since there are at least $|k|> c$ subspaces of dimension $d-1$, some such subspace $W$ is not equal to any $V_\alpha$. Then $\{V_\alpha \cap W\}$ is a collection of $c$ proper subspaces of $W$ (if $W\subseteq V_\alpha$, comparing dimensions would force $W = V_\alpha$) that covers $W$, contradicting the induction hypothesis.

So what was the mistake in the original proof? You might question the use of the pigeonhole principle on infinite sets, but that can actually be made quite rigorous. No, the problem is the statement “We can assume that no proper subcollection covers $V$.” Though an attractive idea, it is not true that we can always reduce our collection to one that is no longer redundant.

Consider our earlier counterexample, a real vector space with a countably infinite basis. The given collection covers $V$, but there is no non-redundant subcollection that does so. A slightly painful reminder that we must always modify our intuition when dealing with infinite sets.

We can salvage the original proof, however, when the collection is finite. Then, in fact, we can throw out elements of our collection, one at a time, until we have eliminated any redundancy. That gives us the following: (note that this is not strictly weaker than our modified proposition, since it applies to infinite-dimensional spaces as well)

Salvaged Proposition: If $k$ is an infinite field, a vector space over $k$ cannot be written as a union of finitely many proper subspaces.
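To see that the infiniteness of $k$ really is needed even here, note that over a finite field a vector space can be a finite union of proper subspaces. A minimal check (the example over $\mathbb{F}_2$ is my own addition):

```python
# Over the finite field F_2, the plane F_2^2 *is* covered by its three
# proper subspaces: the three lines through the origin.
points = [(x, y) for x in (0, 1) for y in (0, 1)]
lines = [
    {(0, 0), (1, 0)},   # span of (1, 0)
    {(0, 0), (0, 1)},   # span of (0, 1)
    {(0, 0), (1, 1)},   # span of (1, 1)
]
covered = set().union(*lines)
assert covered == set(points)
```

This matches the pigeonhole step in the questionable proof: it needed $|k|$ to exceed the number of subspaces, which fails for a finite field.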

## Geometric Complex Analysis: Rouché’s Theorem

October 3, 2008

Complex analysis is one of the richest and most amazing areas of mathematics, for the complex numbers $\mathbb{C}$ possess geometric properties that intertwine with their analytic properties in surprising and beautiful ways. This is hardly the place to discuss the subject in detail, so I will focus on Rouché’s Theorem, with an entertaining application.

Rouché’s Theorem: If $U$ is an open disc in $\mathbb{C}$, $f$ and $g$ are complex differentiable functions defined in some neighborhood of $\overline{U}$, and $|f+g| < |g|$ on the boundary of $U$, then $f$ and $g$ have the same number of zeros inside $U$, counted with multiplicity.

By “multiplicity” here we mean the following: $\alpha$ is a zero of multiplicity one of $h$ if $h(\alpha)=0$ but $h'(\alpha)\neq 0$, a zero of multiplicity two if $h(\alpha)=h'(\alpha)=0$ but $h''(\alpha)\neq 0$, and so on.

The Wikipedia article on this theorem has an interesting informal summary: “If a person were to walk a dog on a leash around and around a tree, and the length of the leash is less than the radius of the tree, then the person and the dog go around the tree an equal number of times.” A little bit of an oversimplification perhaps, but an alluring hint at the deep geometric principles at work in this theorem.
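The geometric picture can even be watched numerically: the number of zeros of $h$ inside $|z|=r$ equals the winding number of the curve $h(re^{i\theta})$ around the origin (this is the argument principle, which underlies Rouché's Theorem). The sketch below is my own illustration; the polynomials are the $f$ and $g$ from the application that follows, and the sample count is an arbitrary choice:

```python
import cmath

def zeros_inside(h, r, samples=4000):
    """Count zeros of h inside |z| = r as the winding number of the
    closed curve h(r * e^{i*theta}) around the origin."""
    total = 0.0
    prev = h(r * cmath.exp(0j))
    for k in range(1, samples + 1):
        z = r * cmath.exp(2j * cmath.pi * k / samples)
        cur = h(z)
        total += cmath.phase(cur / prev)  # phase increment in (-pi, pi]
        prev = cur
    return round(total / (2 * cmath.pi))

n = 5
f = lambda z: z ** (n + 1) - 2 * z + 1   # this is (1 - z) * p_n(z)
g = lambda z: 2 * z
r = 0.9                                   # satisfies r^{n+1} + 1 < 2r
assert zeros_inside(f, r) == zeros_inside(g, r) == 1
```

Both counts come out to $1$, exactly as Rouché's Theorem demands.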

An application:

Proposition: For any $n=2,3,\ldots$, the polynomial $p_n(x) = 1 - x - x^2 - \ldots - x^n$ is irreducible over the integers.

Proof: Let $f(x) = (1-x)p_n(x) = x^{n+1} - 2x + 1$, and let $g(x) = 2x$. Choose $r < 1$ to be sufficiently close to $1$ that $r^{n+1}+1 < 2r$.

Then, when $|x| = r$, we have $|f(x) + g(x)| = |x^{n+1} + 1| \leq r^{n+1} + 1 < 2r = |2x| = |g(x)|$, so $f$ and $g$ satisfy the conditions of Rouché’s Theorem, and have the same number of zeros inside the circle of radius $r$ around the origin. Letting $r\to 1^-$, we find that $f$ and $g$ have the same number of roots inside the circle of radius $1$.

But $g$ has exactly one root (namely $0$) inside this circle, so $f$ does too; and since the root $x=1$ of the factor $1-x$ lies on the boundary rather than inside, $p_n(x)$ must have exactly one root $\alpha$ with $|\alpha| < 1$.

Now, suppose that $\beta$ were a root of $f(x)$ with $|\beta| = 1$. Then $\beta^{n+1} = 2\beta - 1$, so $\beta - 1/2 = \beta^{n+1}/2$ and $|\beta - 1/2| = 1/2$. Thus $\beta$ lies on the circle of radius $1/2$ around $1/2$ as well as on the unit circle around the origin, and these circles meet only at $1$, so $\beta=1$. Since $p_n(1) = 1-n\neq 0$, $p_n(x)$ has no roots on the unit circle.

Now we are done! For if $p_n(x) = a(x)b(x)$ over the integers, then the constant terms of $a$ and $b$ multiply to $p_n(0)=1$, so each is $\pm 1$; and since the leading coefficients are also $\pm 1$, the constant term of each factor is, up to sign, the product of its roots. Every root of $p_n$ other than $\alpha$ has absolute value strictly greater than $1$, so whichever factor does not have $\alpha$ as a root would have constant term of absolute value greater than $1$, unless it has no roots at all. Hence one of $a$ and $b$ is constant, and the factorization is trivial.
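As a numerical footnote (the bisection helper is my own sketch): since $p_n(0)=1>0$ and $p_n(1)=1-n<0$, the intermediate value theorem puts a real root in $(0,1)$, and by the proof above this must be the unique root $\alpha$ inside the unit disc. Bisection locates it:

```python
# p_n(x) = 1 - x - x^2 - ... - x^n is strictly decreasing on (0, 1),
# with p_n(0) = 1 and p_n(1) = 1 - n < 0, so bisection finds its
# unique root alpha in (0, 1).
def p(x, n):
    return 1 - sum(x ** k for k in range(1, n + 1))

def alpha(n, iters=60):
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if p(mid, n) > 0:
            lo = mid      # root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

for n in range(2, 8):
    a = alpha(n)
    assert 0 < a < 1 and abs(p(a, n)) < 1e-9
```

For $n=2$ this recovers $\alpha = (\sqrt{5}-1)/2$, the reciprocal of the golden ratio.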

For anyone interested in studying this subject in greater detail, I highly recommend Functions of One Complex Variable by John B. Conway. And if you want to know more about numbers like our $\alpha$ above, read up on Pisot numbers.