1 Introduction
Oct. 14, 2025
We will start with a relatively long introductory chapter, in order to
- provide some motivation for the (partly more technical) content that will come later,
- give those participants who were not in the Algebra 2 class last term a little more time to brush up their commutative algebra knowledge:
  - (prime) ideals, quotients,
  - localization (with respect to a multiplicative subset; in particular with respect to one element, and localization at a prime ideal),
  - spectrum of a ring, Zariski topology (this we will redo in class, but ideally you are already a little familiar with the notion of a topological space).
 
I will try to address the question “What is algebraic geometry?” and at the same time give, towards the end of the chapter, a rough survey of this class.
In one sentence: Study “geometric properties” of solution sets of systems of polynomial equations (over a field, or more generally a commutative ring).
Comparison with Previous/Other Courses:

| Course | Objects studied |
| --- | --- |
| Linear Algebra | systems of linear equations |
| Algebra | polynomial equations (\(1\) variable, \(1\) polynomial) |
| Algebraic Geometry | systems of polynomial equations |
| Algebraic Number Theory | coefficients/solutions in \(\mathbb {Z}\), \(\mathbb {Q}\), \(K/\mathbb {Q}\) finite, \(\mathbb {F}_q\) |
Here algebraic refers to the fact that we
- study solution sets (zero sets) of polynomials (not power series, differential/holomorphic functions, etc.), 
- use algebraic methods (specifically commutative algebra) to study these objects. 
In particular, at least in principle, we may hence work over an arbitrary field (not only \(\mathbb R\) or \(\mathbb C\)).
We want to look at this result from the perspective of an algebraic geometer, i.e., we view \(M_n(k)\) as an \(n^2\)-dimensional (vector) space.
Let us consider the case \(k = \mathbb {R}\), \(n=2\) and restrict to matrices \(A\) with trace \({\rm tr}(A) = 0\). (This does not change the main argument, but simplifies the discussion a little bit and will allow us to draw a picture later.)
We want to use the fact that the theorem is obviously true if \(A\) is a diagonal matrix. From this, it follows easily that the theorem holds whenever \(A\) is diagonalizable. In fact, if \(A = SDS^{-1}\) for a diagonal matrix \(D\), then \(\operatorname{charpol}_A = \operatorname{charpol}_D\). Since conjugation is a ring automorphism of the ring of matrices (over any ring), we may “pull it out” of any polynomial. Together we obtain
and the term on the right vanishes, since \(\operatorname{charpol}_D(D)=0\) by the case of diagonal matrices. Furthermore, in this argument we may just as well allow matrices \(S\) with entries in some extension field of \(k\), and we see that it suffices to assume that \(A\) is diagonalizable over \(\mathbb C\). But of course, there are also non-diagonalizable matrices.
So we consider a matrix
where we use \(a\), \(b\), \(c\) as coordinates on \(\mathbb R^3\). We then have
In particular we see that all matrices \(A = \begin{pmatrix} a & b \\ c & -a \end{pmatrix}\) with \(a^2 + bc \ne 0\) are diagonalizable over \(\mathbb {C}\). On the other hand, if \(a^2 + bc = 0\), then \(A\) is not necessarily diagonalizable.
We now consider the map:
Our goal is to show that the map \(\chi \) is constant with image the zero matrix. By what we have said, \(\chi (A) = 0\) for all those \(A\) that are diagonalizable over \(\mathbb C\).
Since the map \(\chi \colon \mathbb R^3\to \mathbb R^4\) is given by polynomials, it is continuous. Therefore for every closed subset of \(\mathbb R^4\), its inverse image under \(\chi \) is again closed. We apply this to the set \(\left\{ 0 \right\} \) containing only the zero matrix; clearly this is a closed set. Its inverse image contains, by what we know already, all those traceless matrices that are diagonalizable over \(\mathbb C\), and in particular all matrices \(\begin{pmatrix} a & b \\ c & -a \end{pmatrix}\) with \(a^2+bc\ne 0\). But this set is dense in \(M_2(\mathbb R)^{{\rm tr}=0}\), i.e., its closure is the whole space. It follows that \(\chi ^{-1}(\left\{ 0 \right\} ) = M_2(\mathbb R)^{{\rm tr}=0}\), as we wanted to show.
The same argument, with small modifications, applies when we drop the condition on the trace, and also for square matrices of arbitrary size.
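For the traceless \(2\times 2\) case, the conclusion can also be confirmed by a direct symbolic computation (which of course does not replace the density argument, but may serve as a sanity check). Here is a minimal sketch using the sympy library; the setup and names are my own illustration, not part of the notes.

```python
import sympy as sp

a, b, c, T = sp.symbols("a b c T")
A = sp.Matrix([[a, b], [c, -a]])              # general traceless 2x2 matrix

charpol = A.charpoly(T).as_expr()
print(charpol)                                # T**2 - a**2 - b*c

# Evaluate the characteristic polynomial at A itself:
# since tr(A) = 0, charpol_A(A) = A**2 + det(A) * Id.
chi_A = A**2 + A.det() * sp.eye(2)
print(sp.simplify(chi_A))                     # the zero matrix, for all a, b, c
```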
Question: How to deal with other fields?
For this, we need a notion of continuous map in a more general context.
Let \(k\) be a field. Since we want to study solution sets of systems of polynomial equations, we introduce the following notation:
- Given \(f_1, \ldots , f_m \in k[T_1, \ldots , T_n]\), we define the vanishing set (in German: Verschwindungsmenge) \[ V(f_1, \ldots , f_m) := \{ (t_i) \in k^n ;\ f_j(t_1, \ldots , t_n) = 0 \ \forall j \} . \]
- More generally, for any subset \(\mathcal F \subset k[T_1, \ldots , T_n]\), we define the vanishing set of \(\mathcal F\) as \[ V(\mathcal F) = \{ (t_i) \in k^n ;\ f(t_1, \ldots , t_n) = 0 \ \forall f\in \mathcal F \} . \]
If \(k'/k\) is a field extension, then we set
and analogously define \(V(\mathcal F)(k')\).
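Not part of the formal development, but convenient for experiments: over a finite field \(\mathbb F_p\) one can enumerate vanishing sets by brute force. The following minimal sketch (plain Python; the helper name and the example are my own choices) computes \(V(f_1, \dots , f_m) \subset \mathbb F_p^n\) for polynomials given as Python functions.

```python
from itertools import product

def vanishing_set(polys, p, n):
    """Return V(f_1, ..., f_m) inside F_p^n; each f in polys is a Python
    function of n arguments, and values are compared modulo p."""
    return [t for t in product(range(p), repeat=n)
            if all(f(*t) % p == 0 for f in polys)]

# Example over F_5: V(y - x^2, y - 1) = {(1, 1), (4, 1)}.
print(vanishing_set([lambda x, y: y - x**2,
                     lambda x, y: y - 1], p=5, n=2))
```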
The sets \(V(\mathcal F)\), \(\mathcal F \subset k[T_{\bullet }]\), form the closed sets of a topology on \(k^n\), the Zariski topology.
Spelled out explicitly, this means that
- \(\emptyset \), \(k^n\) are of this form, 
- finite unions of such sets are again of this form, 
- arbitrary intersections of such sets are of this form. 
(1) We have \(\emptyset = V(1)\), \(k^n = V(0)\).
(2) By induction, it is enough to consider the union of two closed subsets, say \(V(\mathcal F)\) and \(V(\mathcal G)\). But
In fact, the inclusion \(\subseteq \) is clear. For the other inclusion, take a point \(t\) in the right-hand side which does not lie in \(V(\mathcal F)\). That means \(f(t)\ne 0\) for some \(f\in \mathcal F\). But since \(f(t) g(t) = (fg)(t) = 0\) for all \(g\in \mathcal G\), it follows that \(t\in V(\mathcal G)\).
(3) For \(\mathcal F_j \subseteq k[T_1, \ldots , T_n]\), \(j \in J\), we have
Oct. 15, 2025
Next, let us look at Bézout’s theorem, a relatively elementary, but still non-trivial result in algebraic geometry which at the same time illustrates a typical type of question asked in this theory and several methods that are crucial in (almost) all of algebraic geometry. In particular, it will serve as a motivation for introducing the so-called projective space, see Section 1.5.

Let \(k\) be a field. For a polynomial \(f\in k[X, Y]\), as before we write
and call this set the vanishing set of \(f\).
We want to study what we can say, given two such polynomials \(f\), \(g\), about the set \(V(f)\cap V(g)\). More specifically, examples show that typically, this is a finite set, and it is a natural question whether we can determine its cardinality. We start with the following observations:
- For a polynomial \(p\in k[X]\) with \(n = \deg (p) > 0\), we have \[ \# \left\{ x\in k;\ p(x) = 0 \right\} \le n, \] with equality if \(k\) is algebraically closed and if we count each zero \(x\) of \(p\) with its multiplicity \({\rm ord}_x(p) = \max \left\{ r;\ (X-x)^r \mid p \right\} \).
- Let \(p \in k[X]\) be non-constant and let \(f = Y-p(X)\), \(g = Y\). We then have a bijection \[ \left\{ x\in k;\ p(x)=0 \right\} \longleftrightarrow V(f)\cap V(g),\quad x\mapsto (x, 0). \]
Coming back to the general case, let \(f, g\in k[X, Y]\). Recall that \(k[X, Y]\) is a unique factorization domain. It is easy to see that if \(f\) and \(g\) have a common divisor of positive degree, then \(V(f)\cap V(g)\) is infinite, at least when \(k\) is algebraically closed. Since here we are interested in counting points, we rule out that case and require that \(f\), \(g\) are coprime. For a polynomial \(f\in k[X, Y]\), we denote by \(\deg (f)\) its total degree, i.e., for \(f = \sum _{i, j} a_{ij}X^iY^j\), \(\deg (f) = \max \left\{ i+j;\ a_{ij}\ne 0 \right\} \).
We will prove this result later, in an improved form. For now, our goal is to discuss this “improved form”, by which we mean a refined statement where we actually have equality.
Looking back at the case of a single-variable polynomial \(p\) above, it is reasonable to require that \(k\) is algebraically closed, and also to expect that we will have to count intersection points with their correct “multiplicity”. It is not so hard to write down the definition of multiplicity that will work; we will discuss this in more detail later.
However, looking at the case where \(V(f)\) and \(V(g)\) are parallel lines in \(k^2\) (e.g., \(f = Y\), \(g = Y-1\)), we see that these changes are not enough in order to obtain equality.
Idea. Add points to \(k^2\) so that any two different lines intersect in a point. (While this at first may feel like cheating, it turns out that the resulting construction is extremely useful in algebraic geometry, far beyond Bézout’s theorem, also in the sense that it will allow us to come back and answer questions that do not mention the newly constructed space.) Setting up the theory will also involve suitably modifying the notion of line; we will come to that later, and then also relate it to lines in \(k^2\).
Viewing \(k^2\) as the affine plane \(\left\{ (x, y, 1)\in k^3;\ x, y\in k \right\} \) in \(k^3\), every line through the origin in \(k^3\) which is not contained in the \(x\)-\(y\)-plane intersects \(k^2\) in exactly one point. Thus we obtain an injective map \(k^2\to \mathbb P^2(k)\) which we may also write as
In this way, we may view \(\mathbb P^2(k)\) as “\(k^2\) with some points added”, namely the lines in the \(x\)-\(y\)-plane (note that thus for any equivalence class of parallel lines in \(k^2\) we have one additional point, and it will turn out that this point “is”, in a sense that we still have to define, the missing intersection point of these parallel lines).
Usually we denote elements of \(\mathbb P^2(k)\) in terms of their homogeneous coordinates, which we are going to define next. (That will also, hopefully, make it easier to think of elements of \(\mathbb P^2(k)\) as points of some space rather than as lines in some other space, similarly to how we think of the elements of \(k^2\) as points in the plane.)
For \((x,y,z), (x',y',z') \in k^3 \setminus \{ 0\} \), define:
This is an equivalence relation on \(k^3 \setminus \{ 0\} \). We denote by \((x : y : z)\) the equivalence class of \((x, y, z)\) and obtain a bijection
Our next task is to define a suitable notion of line in the projective plane. The resulting notion should satisfy (at least) the properties that through any two distinct points, there is a unique line; and that any two distinct lines intersect in a unique point (because our goal was a situation where there are no more “parallel lines”). For the definition, however, a general construction is better suited, namely an analog of the notion of vanishing set of polynomials. However, we have to be careful here, because for an arbitrary polynomial \(F\in k[X, Y, Z]\) the value on a point \((x:y:z)\) given in homogeneous coordinates is obviously not well-defined, but will depend on the choice of representative. On the other hand, in order to define vanishing sets, we do not need to compute values, but only need to check whether the outcome is \(=0\) or \(\ne 0\). Even this is not possible for general polynomials, but it is possible for the class of homogeneous polynomials, which is still large enough to give all that we need. We give the definition in a general form.
The first statement is clear. The second one follows (how?) from the fact that over an infinite field, the zero polynomial is the only polynomial in \(n+2\) variables which vanishes at every point of \(k^{n+2}\).
Therefore we may define the vanishing set of a homogeneous polynomial, and more generally, the common vanishing set of a family of homogeneous polynomials (possibly of different degrees). We will look at several explicit examples soon.
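The relevant property is that for a homogeneous polynomial \(F\) of degree \(d\) we have \(F(\lambda x_0, \dots , \lambda x_n) = \lambda ^d F(x_0, \dots , x_n)\), so whether \(F\) vanishes at a point does not depend on the chosen representative of its homogeneous coordinates. As a quick sanity check (my own illustration, using sympy, for one concrete \(F\) in three variables):

```python
import sympy as sp

X, Y, Z, lam = sp.symbols("X Y Z lam")

F = Y**2 * Z - X**3 + X * Z**2 + Z**3       # homogeneous of degree 3
d = sp.Poly(F, X, Y, Z).total_degree()

scaled = F.subs({X: lam * X, Y: lam * Y, Z: lam * Z})
print(sp.expand(scaled - lam**d * F))       # 0, i.e. F(lam*x, lam*y, lam*z) = lam**d * F(x, y, z)
```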
Similarly to the case of \(k^n\), one proves the following.
Oct. 21, 2025
Lines in \(\mathbb P^2(k)\)
We can now define the notion of line in the projective plane and conclude this section by stating the final form of Bézout’s theorem.
Explicitly, \(F\) as in the definition has the form \(aX+bY+cZ\), with \((a,b,c)\ne (0,0,0)\). For example \(V_+(Z) = \mathbb {P}^2(k) \setminus \iota (k^2)\) (where \(\iota \colon k^2\to \mathbb {P}^2(k)\) is the embedding defined above) is a line. This line is called the line at infinity (with respect to our chosen embedding \(k^2 \subset \mathbb {P}^2(k)\)).
- Let \(P_1, P_2\in \mathbb P^2(k)\), \(P_1\ne P_2\). Then there exists \(F\in k[X, Y, Z]\) homogeneous of degree \(1\), \(F\ne 0\), such that \(P_1, P_2\in V_+(F)\), and \(F\) is uniquely determined up to multiplication by an element \(\lambda \in k^\times \). 
- For non-zero linear homogeneous polynomials \(F_1, F_2\in k[X, Y, Z]\), we have \[ V_+(F_1) = V_+(F_2)\quad \Longleftrightarrow \quad \text{there exists}\ \lambda \in k^\times : F_2 = \lambda F_{1}. \]
- Let \(F_1, F_2\in k[X, Y, Z]\) be non-zero linear homogeneous polynomials with \(V_+(F_1)\ne V_+(F_{2})\). Then the set \(V_+(F_1)\cap V_+(F_2)\) consists of exactly one element. 
(1) Phrase the problem as a system of linear equations on the coefficients of \(F\). We obtain a system with two linearly independent equations and three variables, so the space of solutions is \(1\)-dimensional.
(2) This follows from Part (1) (because any \(V_+(F_1)\) contains at least \(2\) points (more precisely: \(\# k + 1\) points)).
(3) Similarly to Part (1), this can be shown by considering a suitable system of linear equations, where the coefficients are given by the coefficients of the equations of \(F_1\) and \(F_2\), and the variables correspond to the homogeneous coordinates of the point(s) we are looking for in the intersection.
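In coordinates, the linear algebra in parts (1) and (3) can be made very concrete using the cross product: a coefficient vector \((a,b,c)\) of a line through two given points must have vanishing dot product with representatives of both points, and dually, a representative of the intersection point of two lines must have vanishing dot product with both coefficient vectors; in each case the cross product of two linearly independent vectors produces such a vector. A minimal sketch (my own illustration; plain Python with integer coordinates):

```python
def cross(u, v):
    """Cross product in k^3: a vector with zero dot product against both u and v."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# Line through (1:2:1) and (3:1:1): coefficients (a, b, c) of aX + bY + cZ.
P1, P2 = (1, 2, 1), (3, 1, 1)
L = cross(P1, P2)
print(L)                                    # (1, 2, -5), i.e. the line X + 2Y - 5Z = 0

# Intersection of this line with the line at infinity Z = 0.
print(cross(L, (0, 0, 1)))                  # (2, -1, 0), a single point at infinity
```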
We can now state the final version of Bézout’s theorem. Here, \(i_P(F, G)\) is defined similarly as above. (As before, it depends on the actual polynomials \(F\), \(G\), not just on their vanishing sets.) We will come back to this, and also give a proof of the theorem, later in the course.
Similarly to the projective plane, we can define projective space of dimension \(n\) over \(k\),
where \((x_0, \dots , x_n) \sim (x_0', \dots , x_n')\) if there exists \(\lambda \in k^\times \) such that \(x_i' = \lambda x_i\) for all \(i\).
Let us look at the relationship between vanishing sets in \(k^2\) and in \(\mathbb P^2(k)\).
Conversely, given a polynomial \(f\in k[x, y]\), we can easily find a homogeneous polynomial \(F\) such that \(f(x,y) = F(x, y, 1)\) (and hence, by the above remark, \(V(f) = V_+(F)\cap \iota (k^2)\), or in other words, \(V_+(F)\) consists of \(V(f)\) and (possibly) further points lying on the line at infinity \(V_+(Z)\)).
Namely, we just “fill in powers of \(Z\)” so as to construct a homogeneous polynomial of degree \(\deg (f)\). For example, for \(f = y^2 - x^3 + x + 1\), we would take \(F = Y^2Z - X^3+XZ^2+Z^3\). Generally, given \(f = \sum _{i,j} a_{i,j} x^iy^j\) of degree \(d\), take \(F = \sum _{i,j} a_{i,j} X^i Y^j Z^{d-i-j}\). We call \(F\) the homogenization (of degree \(d\)) of \(f\).
Note that for \(f\) and \(F\) related in this way, the polynomial \(G = Z\cdot F\) still has the property that \(G(x,y,1) = f(x,y)\), however \(V_+(G) = V_+(F) \cup V_+(Z)\), i.e., we get an “unnecessary” (and unwanted) copy of the line at infinity.
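For experimentation, the homogenization procedure is easy to implement; here is a minimal sketch (my own illustration, using sympy).

```python
import sympy as sp

x, y, X, Y, Z = sp.symbols("x y X Y Z")

def homogenize(f):
    """Homogenization of f in k[x, y]: multiply each monomial a*x**i*y**j by
    Z**(d - i - j), where d is the total degree of f, and rename x, y to X, Y."""
    poly = sp.Poly(f, x, y)
    d = poly.total_degree()
    return sp.expand(sum(coeff * X**i * Y**j * Z**(d - i - j)
                         for (i, j), coeff in poly.terms()))

f = y**2 - x**3 + x + 1
F = homogenize(f)
print(F)                                             # -X**3 + X*Z**2 + Y**2*Z + Z**3
print(sp.expand(F.subs({X: x, Y: y, Z: 1}) - f))     # 0, so F(x, y, 1) = f(x, y)
```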
Let \(k\) be a field, \({\rm char}(k)\ne 2\). A vanishing set \(V(f) \subset \mathbb {A}^2(k)\) for a polynomial \(f\) of degree \(3\) is called a cubic curve.
Oct. 22, 2025
From the examples and the situation over the real and complex numbers, we would like to make the following definition, as one example of how we can use some geometric insight while formally “only manipulating algebraic expressions” (in this case, taking derivatives of polynomials).
This definition does not really make sense! (That is why I put a *.) More precisely, the property of being a smooth point depends on the polynomial \(f\), not just on the subset \(V(f) \subset k^2\). For example, \(V(X) = V(X^2)\), and using the partial derivatives of \(f=X\), all points are smooth, but using \(f = X^2\) instead, all points are singular. This illustrates that the set \(V(f)\) (even if we equip it with the topology induced by the embedding into \(k^2\) with its Zariski topology) alone does not carry enough “structure” in order to really do geometry.
For now we will therefore view this as a “definition we would like to make for \(V(f)\), but can currently only make after fixing \(f\)”. A little later in the course we will be in a position to fix this problem.
If \(k\) is algebraically closed, then there is another option to proceed. (The fact that this option is not viable for general fields is the reason that “classical” algebraic geometry, e.g., as in [ GW1 ] Chapter 1 or [ Ha ] Chapter I, is done over an algebraically closed base field.)
To formulate this, recall that a ring \(R\) is called reduced if it has no non-zero nilpotent elements, i.e., whenever \(x^n = 0\) for some \(x\in R\), \(n\ge 1\), we must have \(x =0\). For a polynomial \(f\in k[x,y]\), the quotient \(k[x,y]/(f)\) is reduced if and only if there does not exist an irreducible polynomial \(g\in k[x,y]\) such that \(g^2\mid f\). In other words, in the decomposition of \(f\) into irreducible polynomials in the unique factorization domain \(k[x,y]\), each irreducible factor occurs only once.
If \(f\in k[x,y]\) is a non-constant polynomial and \(f = f_1^{i_{1}}\cdots f_{r}^{i_{r}}\) is a decomposition of \(f\) with \(f_i\) irreducible and pairwise distinct, then clearly \(V(f) = V(f_1\cdots f_r)\), i.e., changing the exponents does not change the vanishing set. It is therefore clear that every \(V(f)\) can also be written as the vanishing set of a polynomial for which \(k[x,y]/(f)\) is reduced.
Over an algebraically closed field, we have the following strong converse: Given \(V \subset k^2\) that has the form “vanishing set of one non-constant polynomial”, there is a unique (up to multiplication by scalars in \(k^\times \)) polynomial \(f\in k[x,y]\) such that \(V=V(f)\) and such that the ring \(k[x,y]/(f)\) is reduced (i.e., it has no non-trivial nilpotent elements). When we use this \(f\), we get the “right” notion of smooth points. (In fact, it is not difficult to show that for \(f\) such that \(k[x,y]/(f)\) is not reduced, all points are non-smooth in the sense of the above definition applied to \(f\).)
In fact, there is the following more general version of this statement. For an ideal \(\mathfrak a \subset k[T_1, \dots , T_n]\) we denote by
its radical. (With notation as above, \(\sqrt{(f)} = (f_1\cdots f_r)\).) It is easy to see that \(k[T_1,\dots , T_n]/\mathfrak a\) is reduced if and only if \(\mathfrak a = \sqrt{\mathfrak a}\), and that \(V(\mathfrak a) = V(\sqrt{\mathfrak a})\). Furthermore, we have:
This is (one version of) Hilbert’s Nullstellensatz. The implication \(\Leftarrow \) is easy, as indicated above, and does not require the assumption that \(k\) is algebraically closed. The other implication is non-trivial already in the case that \(\mathfrak a = (1)\), so \(V(\mathfrak a) = \emptyset \). In this case the statement is equivalent to saying that any family \(f_1, \dots , f_r\) of polynomials that does not generate the unit ideal has a common zero, whence the name Nullstellensatz (Nullstelle is German for zero (of a polynomial)).
We will take up this discussion again, and in more detail, later.
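For a principal ideal \((f)\), the radical \(\sqrt{(f)} = (f_1\cdots f_r)\) can be computed from a factorization of \(f\); the following is a minimal sketch (my own illustration, using sympy, which factors over \(\mathbb{Q}\) by default).

```python
import sympy as sp

x, y = sp.symbols("x y")

def radical_generator(f):
    """Generator of sqrt((f)): the product of the distinct irreducible factors
    of f, each taken once (i.e. the square-free part of f)."""
    _const, factors = sp.factor_list(f)     # f = const * prod(g**e for (g, e) in factors)
    result = sp.Integer(1)
    for g, _e in factors:
        result *= g
    return sp.expand(result)

f = (y - x**2)**3 * (x * y)**2
print(sp.factor(radical_generator(f)))      # x*y*(-x**2 + y), so sqrt((f)) = (x*y*(y - x**2))
```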
Another perspective on the situation over the real numbers (and similarly over the complex numbers) is given by the inverse function theorem. It implies that if \(P\) is a smooth point of \(V(f)\) in the sense of the above definition, then locally (in the analytic, “usual”, topology) around \(P\) the set \(V(f)\) is diffeomorphic to an open interval in \(\mathbb R\), i.e., there exists an open \(U \subset V(f)\), \(P\in U\), and an open interval \(V \subset \mathbb R\), and bijective differentiable functions \(U\to V\) and \(V\to U\) that are inverse to each other.
More generally, there is a version for vanishing sets (or more generally, level sets) of continuously differentiable maps \(\mathbb {R}^n\to \mathbb {R}\), and even more generally for fibers of continuously differentiable maps \(f\colon \mathbb {R}^n\to \mathbb {R}^m\), \(x\mapsto (f_1(x), \dots , f_m(x))\), such that the Jacobi matrix (at some point \(P\)),
has rank \(m\). Then locally around \(P\), the fiber over \(f(P)\) is a differentiable manifold, i.e., is diffeomorphic to an open subset of \(\mathbb {R}^{n-m}\).
See Inverse function theorem (Wikipedia), in particular the section Giving a manifold structure.
For the projective plane, we make the following analogous definition (which again depends on the polynomial \(F\), not only on the vanishing set, cf. Remark 1.24).
For the following remarks, the next lemma will be useful; we record it here in the general case of \(n+1\) variables. Also note that for a homogeneous polynomial of degree \(d\), all partial derivatives are homogeneous of degree \(d-1\).
Since both sides are \(k\)-linear in \(F\), it is enough to check this in case \(F = X_0^{\nu _{0}} \cdots X_n^{\nu _n}\) is a monomial. But then \(\frac{\partial F}{\partial X_i} X_i = \nu _i F\) and the stated identity follows immediately.
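As a quick symbolic sanity check of Euler’s identity \(\sum _i X_i\,\frac{\partial F}{\partial X_i} = \deg (F)\cdot F\) for one concrete homogeneous polynomial (my own illustration, using sympy):

```python
import sympy as sp

X, Y, Z = sp.symbols("X Y Z")

F = Y**2 * Z - X**3 + X * Z**2 + Z**3       # homogeneous of degree 3
d = sp.Poly(F, X, Y, Z).total_degree()

euler_lhs = X * sp.diff(F, X) + Y * sp.diff(F, Y) + Z * sp.diff(F, Z)
print(sp.expand(euler_lhs - d * F))         # 0, as predicted by Euler's identity
```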
- Euler’s identity shows that the tangent line to \(V_+(F)\) at a smooth point \(P\) contains the point \(P\) itself.
- The two definitions of smooth point are related as follows. Let \(F \in k[X, Y, Z]\) be a homogeneous polynomial and \(f = F(x, y, 1)\), so that \(V_+(F) \cap \iota (k^2)\) may be identified with \(V(f) \subset k^2\); cf. Section 1.6. We assume that \(f\) is non-constant, and take \(P\in V(f)\), say \(P = (x_0, y_0)\), so that \(\iota (P)=(x_0:y_0:1)\). Then \begin{equation} \label{eq: partial der} \frac{\partial F}{\partial X}(x,y,1) = \frac{\partial f}{\partial x},\quad \frac{\partial F}{\partial Y}(x,y,1) = \frac{\partial f}{\partial y}, \end{equation} as is easily checked, and in particular \[ \frac{\partial F}{\partial X}(x_0, y_0, 1) = \frac{\partial f}{\partial x}(P),\quad \frac{\partial F}{\partial Y}(x_0, y_0, 1) = \frac{\partial f}{\partial y}(P). \] This already shows that if \(P\in V(f)\) is smooth (with respect to the polynomial \(f\), that is), then \(\iota (P)\) is a smooth point of \(V_+(F)\) (i.e., for \(F\)). To show the equivalence, assume that \(\iota (P)\in V_+(F)\) is such that the partial derivatives of \(F\) with respect to \(X\) and to \(Y\) both vanish there. Then Euler’s identity shows, since \(F(\iota (P))=0\), that the partial derivative with respect to \(Z\) vanishes as well, so \(\iota (P)\) is not smooth. Finally, for a smooth point with tangent line \(V_+(L)\) to \(V_+(F)\) at \(\iota (P)\), where \[ L = \frac{\partial F}{\partial X}(P)\cdot X + \frac{\partial F}{\partial Y}(P) \cdot Y + \frac{\partial F}{\partial Z}(P) \cdot Z, \] equation \eqref{eq: partial der} shows that \(V(L(x,y,1))\) is the tangent to \(V(f)\) at \(P\). In this sense, the two definitions are compatible.
Let us understand the notion of smoothness in the special case of cubic curves (compare the earlier examples), more precisely for \(V(f)\) with \(f\) of the form
As before, assume \({\rm char}(k)\ne 2\).
Then
(This shows why the situation is different in characteristic \(2\): in that case the partial derivative with respect to \(y\) vanishes at all points.)
The points \((x_0,y_0)\in V(f)\) where both partial derivatives vanish satisfy
i.e. \(x_0\) is a multiple root of \(g\).
We may homogenize \(f\) to obtain
homogeneous of degree \(3\). By Remark 1.29, \(P\in V(f)\) is smooth if and only if \(\iota (P)\in V_+(F)\) is smooth, where as usual \(\iota \) denotes the embedding \(k^2\to \mathbb {P}^2(k)\). Let us check smoothness at those points of \(V_+(F)\) that lie on the line at infinity, i.e., points of the form \((x_0: y_0:0)\in V_+(F)\). Then the vanishing of \(F\) amounts to \(x_0 = 0\), and since this excludes the possibility of \(y_0\) vanishing as well, and we can scale the homogeneous coordinates, we see that \(V_+(F)\cap V_+(Z)\) consists of the one point \((0:1:0)\).
At this point, the partial derivative \(\dfrac {\partial F}{\partial Z} = Y^2-aX^2 -2bXZ-3cZ^2\) does not vanish, so it is a smooth point, independently of the choice of \(a, b, c\). Therefore, for this special form of \(f\) and \(F\), \(V_+(F)\) is smooth if and only if \(V(f)\) is smooth, if and only if \(g\) is separable.
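For \(f = y^2 - g(x)\), the separability of \(g\) (equivalently, \(\gcd (g, g') = 1\)) is easy to test in examples. A minimal sketch (my own illustration, using sympy over \(\mathbb{Q}\), so in particular in characteristic \(0\)):

```python
import sympy as sp

x = sp.symbols("x")

def is_smooth_affine_cubic(g):
    """For f = y**2 - g(x), V(f) is smooth iff g is separable, i.e. gcd(g, g') = 1."""
    return sp.gcd(g, sp.diff(g, x)) == 1

print(is_smooth_affine_cubic(x**3 - x))       # True:  y^2 = x^3 - x is smooth
print(is_smooth_affine_cubic(x**3))           # False: y^2 = x^3 has a cusp at the origin
print(is_smooth_affine_cubic(x**3 + x**2))    # False: y^2 = x^3 + x^2 has a node at the origin
```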
Oct. 28, 2025
Typical examples are the curves defined by homogenizations of polynomials of the form
that we have studied above. In this case, we can (and typically do) choose the unique point \((0:1:0)\) of \(V_+(F)\) on the line at infinity as the distinguished point \(\mathcal O\).
These elliptic curves have an extremely surprising additional structure, as shown by the next proposition. We will assume that \(k\) is algebraically closed, so that we can use Bézout’s theorem; but see the following remark.
Let \(k\) be algebraically closed. Let \(E=V_+(F) \subset \mathbb {P}^2(k)\) be smooth with \(\deg F=3\), and let \(\mathcal{O}\in E\) be a fixed point. For \(P,Q\in E\), let \(L\subset \mathbb {P}^2(k)\) be the line through \(P,Q\) (or, in case \(P=Q\), the tangent to \(V_+(F)\) at \(P=Q\)).
Then, counting with multiplicities, the intersection \(E\cap L\) has three elements, among them \(P\) and \(Q\); we express this, and give these points names, by saying that “\(E\cap L=\{ \{ P,Q,R\} \} \) as a multiset”. Let \(M\) be the line through \(\mathcal{O}\) and \(R\) (or, in case of equality, the tangent to \(V_+(F)\) at this point), write \(E\cap M = \{ \{ \mathcal O, R, S \} \} \) and define
Then \((E,+)\) is a commutative group with neutral element \(\mathcal{O}\).
All properties except for associativity are easy to check. The neutral element is the point \(\mathcal O\). For a point \(P\), its negative \(-P\) is the third point in the intersection of \(V_+(F)\) and the line through \(P\) and \(\mathcal O\). The associativity can, in principle, be checked by “direct computation” (write out equations for all the lines involved in terms of the coordinates of the points for which one wants to check associativity), but this leads to long, complicated and tedious calculations which are not at all enlightening. For a better, but still elementary, complete proof, see, e.g., [ Kn ] Section III.3; cf. also the discussion in [ ST ] 1.2.
(We will be able to give a more enlightening proof, based on the Theorem of Riemann–Roch, later, though more likely next term than this term.)
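For curves given in the form \(y^2 = x^3 + Ax + B\) with \(\mathcal O = (0:1:0)\), the chord–tangent construction from the proposition turns into explicit formulas in affine coordinates. The following is a minimal sketch over \(\mathbb Q\) (my own illustration; \(\mathcal O\) is represented by None, and exact arithmetic is done with fractions):

```python
from fractions import Fraction

A, B = Fraction(-1), Fraction(0)            # the curve y^2 = x^3 - x

def add(P, Q):
    """Chord-and-tangent addition on y^2 = x^3 + A*x + B over Q.
    Points are pairs (x, y) of Fractions; None represents the point O at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:              # the line through P and Q is vertical,
        return None                         # so the third intersection point is O
    if P == Q:
        slope = (3 * x1**2 + A) / (2 * y1)  # tangent line at P
    else:
        slope = (y2 - y1) / (x2 - x1)       # chord through P and Q
    x3 = slope**2 - x1 - x2                 # x-coordinate of the third intersection point R;
    y3 = slope * (x1 - x3) - y1             # P + Q is the reflection of R across the x-axis
    return (x3, y3)

P, Q = (Fraction(0), Fraction(0)), (Fraction(-1), Fraction(0))
print(add(P, P))                            # None, i.e. P is a 2-torsion point
print(add(P, Q))                            # (1, 0), the third point of order 2
```

(The reflection step corresponds to taking the line through \(\mathcal O\) and \(R\) in the proposition: with \(\mathcal O\) at infinity, that line is the vertical line through \(R\).)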
Outlook: Advanced results and some open conjectures
 
Oct. 29, 2025
From a number theoretic view, it is an interesting question to determine the number of points of a vanishing set \(V_+(F) \subset \mathbb {P}^2(k)\) when \(k\) is a number field, i.e., a finite extension of \(\mathbb Q\). For example, if \(F\) is linear, then \(V_+(F)\) evidently has infinitely many points (whenever \(k\) is any infinite field; for a finite field of cardinality \(q\), it has \(q+1\) points). For \(F\) homogeneous of degree \(2\) the situation is still relatively easy to understand (but we skip this here). However for \(F\) of degree \(\ge 3\), this is an extremely difficult question, and although a lot of progress has been made over the last 50 years, there are many questions that are still open. We first mention the Theorem of Mordell and Weil that dates back even further and gives some important information in the case of homogeneous cubic polynomials which define a smooth curve, i.e., an elliptic curve.
Depending on the choice of polynomial \(F\), the group might be finite or infinite. By the general theory of finitely generated abelian groups, we can find a group isomorphism \(E(K) \cong \mathbb {Z}^r\times T\) for a finite group \(T\) and some natural number \(r\ge 0\), called the rank of \(E\). Even in the case \(K=\mathbb {Q}\), there are many open problems around the rank. For example, it is not known whether elliptic curves over \(\mathbb {Q}\) of arbitrarily high rank exist. At the time of writing, the best result in this direction is by Elkies and Klagsbrun, who found (in 2024) an elliptic curve of rank \(\ge 29\). The Conjecture of Birch and Swinnerton-Dyer relates the rank of an elliptic curve to a natural number defined in analytic terms (the vanishing order of a certain holomorphic function, the so-called L-function of the elliptic curve).
For a proof of the theorem, see [ ST ] Chapter 3 (for \(K=\mathbb {Q}\)), or [ Si ] Chapter VIII.
For polynomials of higher degree, Mordell conjectured that there are only finitely many solutions with coordinates in a fixed number field. This conjecture was proved in 1983 by Faltings, who received the Fields Medal in 1986 in recognition of this proof. We state the result in a slightly more general form (which you can ignore for now, and just read it in the case of the specific example of vanishing sets \(V_+(F)\) in \(\mathbb {P}^2(k)\)).
[Mordell Conjecture = Faltings’s Theorem] Let \(K/\mathbb {Q}\) be a finite field extension and \(C/K\) a smooth projective curve of genus \(g\ge 2\), e.g., \(C = V_+(F)\subset \mathbb {P}^2(K)\) with \(F\) homogeneous of degree \(\ge 4\) such that \(V_+(F)\) is smooth.
Then \(C(K)\) is a finite set.
It follows from Faltings’s Theorem that the set on the left-hand side is finite whenever \(p > 3\), but that theorem does not give any information on the cardinality of this finite set. Wiles’s contribution was the following more specific result about elliptic curves over \(\mathbb {Q}\).
Actually, Wiles (together with Taylor) proved a slightly weaker statement than the theorem stated here; the proof was later completed by Breuil, Conrad, Diamond and Taylor. Ribet, based on an idea of Frey, had shown before that this modularity conjecture implies Fermat’s Last Theorem. The key idea of Frey was that assuming that \(a^p+b^p=c^p\) for \(abc\ne 0\), the elliptic curve defined by the (homogenization of the) equation
has “strange” properties and is seemingly not modular; this was then shown by Ribet.
We do not explain here what modular means. Roughly, it asserts a strong relation between the elliptic curve and a certain “modular form”.
For example, if \(E\) is given by \(y^2 = x^3 + Ax + B\) with \(A,B\in \mathbb {Z}\), then modularity implies a precise regularity for the numbers of points
for each prime power \(q\).
We finish this chapter with a brief discussion of another famous conjecture which at first sight does not have much to do with algebraic geometry (but in fact, it does: for instance, it is equivalent to a conjecture by Szpiro on elliptic curves over \(\mathbb {Q}\); indeed, Masser and Oesterlé made their conjecture after studying Szpiro’s conjecture and its consequences).
We define the radical of a positive integer \(n\) as
We also state the following stronger variant, an explicit form of the abc conjecture. If \(a,b,c\in \mathbb {Z}_{>0}\) are coprime with \(a+b=c\), then
- It is, somewhat surprisingly, not difficult to prove an analogous statement, where the ring \(\mathbb {Z}\) of integers is replaced by the polynomial ring \(\mathbb {C}[X]\). See the second problem sheet. 
- The abc conjecture implies effective versions of the Mordell Conjecture/Faltings’s Theorem. 
Let us illustrate by showing that the above effective version of the abc conjecture easily implies Fermat’s Last Theorem for exponents \(n\ge 6\).
In fact, suppose there exist \(n\in \mathbb {N}\) and coprime positive integers \(x,y,z\) with \(x^n+y^n=z^n\). Then
by the abc conjecture, but also
Putting both inequalities together, we obtain \(n < 6\). The cases \(n=3,4,5\) are (relatively) easier and have long been known, so the (effective) abc conjecture implies Fermat’s Last Theorem.
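The radical of an integer is easy to compute by trial division for small numbers; the following minimal sketch (my own illustration, plain Python) computes \({\rm rad}(n)\) and evaluates it for a well-known abc triple.

```python
def rad(n):
    """Product of the distinct prime divisors of the positive integer n."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:                               # remaining factor is prime
        result *= n
    return result

a, b, c = 1, 8, 9                           # coprime, with a + b = c
print(rad(a * b * c))                       # rad(72) = 6, which is smaller than c = 9
```

Triples like this one, with \({\rm rad}(abc) < c\), show that an inequality as simple as \(c \le {\rm rad}(abc)\) cannot hold in general.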
What we have discussed so far was intentionally introductory and not yet systematic. Beyond that, the “theory” so far has some serious problems. Some are easy to fix; others require more serious changes. Desiderata:
- The same vanishing set \(V(f)\) (or more generally, \(V(f_1, \dots , f_m)\)) can be defined by several different polynomials, and the set alone does not “contain enough information” (for example, in order to define smoothness). We would like to equip it with more “geometric structure” which will allow us to not carry around a specific choice of polynomial(s). 
- Related to this: A definition of morphisms (and hence isomorphisms) between vanishing sets \(V(f_1,\dots ,f_m)\). 
- A more systematic use of commutative algebra. 
- A theory that works well over non-algebraically closed fields (and even over arbitrary commutative rings). 
- A more transparent geometric meaning of intersection multiplicities \(i_P(V_+(F),V_+(G))\) in Bézout’s theorem (see earlier sections). 
- [GW1] U. Görtz, T. Wedhorn, Algebraic Geometry I: Schemes, 2nd ed., Springer Spektrum.
- [Ha] R. Hartshorne, Algebraic Geometry, Springer Graduate Texts in Math.
- [Kn] A. Knapp, Elliptic Curves, Princeton Univ. Press 1992.
- [Si] J. Silverman, The Arithmetic of Elliptic Curves, 2nd ed., Springer Graduate Texts in Math.
- [ST] J. Silverman, J. Tate, Rational Points on Elliptic Curves, 2nd ed., Springer.