In today’s blog we talk to Dr. David Bressoud, who earned his PhD at Temple University and is DeWitt Wallace Professor of Mathematics at Macalester College, about the origin and understanding of the concept of the infinitesimal.

David G: To the extent that I understand them, infinitesimals are elusive quantities that have been around for over two millennia but were fully formalized only in the 20th century. Can you describe the evolution of this abstract concept, including the objections to it, over that time period?

David B: The concepts of infinity and infinitesimal are slippery and can easily lead to apparent paradoxes. An infinitesimal is a non-zero amount that is smaller than any positive amount. It is common to imagine it as a very, very, very small amount, but consider the following question. The length of each jagged line shown in the image at the bottom is the square root of 2. But if we shrink these line segments down to infinitesimals, we get the line from 0 to 1, which has length 1. Why is that length not also the square root of 2?
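The paradox can be checked with a quick computation: however many teeth the jagged line has, its length never moves toward 1. A minimal sketch, assuming the jagged line is a sawtooth of 45-degree segments running from 0 to 1 (the function name is my own):

```python
import math

def sawtooth_length(n):
    """Length of a sawtooth path from (0, 0) to (1, 0) made of 2*n
    segments, each rising or falling at 45 degrees.  Each segment
    covers a horizontal run of 1/(2*n) and an equal vertical rise,
    so its length is sqrt(2)/(2*n); the total is always sqrt(2)."""
    run = 1.0 / (2 * n)           # horizontal extent of one segment
    seg = math.hypot(run, run)    # length of one 45-degree segment
    return 2 * n * seg

for n in (1, 10, 1000, 10**6):
    print(n, sawtooth_length(n))  # stays at ~1.41421..., never 1
```

The length is constant in n, so "shrinking the segments to infinitesimals" never explains why the limiting object has length 1: length is simply not continuous under this kind of limit.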

Throughout the world we find early cultures that thought of finding areas and volumes by taking infinitely many infinitely thin slices. This is how the formula for the area of a circle was discovered independently in many different places. But the Hellenistic philosophers of 2300 years ago learned not to trust these arguments because they could mislead. Instead, they searched for ingenious ways to use finite numbers of slices to verify that the actual value could be neither larger nor smaller than the asserted value. Euclid specifically ruled out infinitesimals when he asserted that given any two positive amounts, there is always some finite multiple of the first that is larger than the second, and vice versa. No finite multiple of an infinitesimal can be larger than any actual positive amount.
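Euclid's condition, now called the Archimedean property, can be made concrete: for any positive amounts a and b there is a finite n with n·a > b. A minimal sketch (the function name is my own):

```python
import math

def archimedean_multiple(a, b):
    """Return the smallest positive integer n with n * a > b.
    Such an n exists for any real a, b > 0; this is Euclid's
    (Archimedean) property.  An infinitesimal a would admit no
    such n, which is exactly why Euclid's axiom excludes it."""
    assert a > 0 and b > 0
    return math.floor(b / a) + 1

print(archimedean_multiple(0.25, 5.0))  # 21, since 21 * 0.25 = 5.25 > 5
```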

In the 16th century Europe rediscovered the work of the Hellenistic philosophers and translated it into Latin, the common language of science. For the next two hundred years philosophers and scientists often used this idea of infinitely many infinitely small pieces as they sought general methods to solve problems that we now couch in terms of definite integrals. There was widespread unease with this approach because they knew that it could lead them astray. But the rigorous Hellenistic methods were simply too cumbersome to keep up with the outpouring of important results. By the 18th century, mathematicians and scientists had come to rely almost entirely on infinitesimals.

Serious errors and paradoxes emerged in the early 19th century as scientists began to work with infinite sums of trigonometric functions, what are now known as Fourier series. Cauchy, Weierstrass, and others realized that they needed to banish infinitesimals. They developed tools that returned to the ideals of Hellenistic rigor, verifying with finite arguments that the true answer could be neither larger nor smaller than the purported answer. Among these tools are the epsilon-delta definition of the limit and the Darboux sums used to delimit the range of possible values of a definite integral. While the 1960s would see the invention of a totally consistent and rigorous arithmetic within which infinitesimals exist (Abraham Robinson's nonstandard analysis), this did not address the role of infinitesimals as employed to solve problems found in physics, chemistry, or engineering.
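The idea behind Darboux sums can be shown numerically: a lower and an upper sum, each a finite computation, trap the true value of a definite integral between them. A minimal sketch for an increasing function (the function name is my own):

```python
def darboux_sums(f, a, b, n):
    """Lower and upper Darboux sums for f on [a, b] over n equal
    subintervals.  For an increasing f, the infimum on each piece
    sits at the left endpoint and the supremum at the right."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    lower = sum(f(xs[i]) * h for i in range(n))      # left endpoints
    upper = sum(f(xs[i + 1]) * h for i in range(n))  # right endpoints
    return lower, upper

# The integral of x^2 over [0, 1] is exactly 1/3; the two sums
# squeeze it, and their gap shrinks as n grows.
lo, hi = darboux_sums(lambda x: x * x, 0.0, 1.0, 1000)
print(lo, hi)  # lo < 1/3 < hi
```

No infinitesimal appears anywhere: every quantity is a finite sum, which is precisely the 19th-century program of verifying that the answer can be neither larger nor smaller than the purported value.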

It is often useful to think of the differential dx in a definite integral as an infinitely small piece that is visualized as very, very, very small. Even mathematicians rely on this when thinking about how to translate an accumulation problem into a definite integral. But mathematicians are also aware that this naive approach has the potential to lead one astray. If one needs to employ differentials in an unusual manner and the results appear questionable—or worse, nonsensical—then it may be necessary to pull out the big guns developed in the 19th century.

David G: The story that Newton invented calculus and then used it to solve for the motions of the planets makes him seem superhuman. What is a more balanced description of Newton’s connection to the calculus?

David B: Virtually all of the basic tools that students encounter in a first-year calculus course (the derivative, the integral, and series) were well developed before Newton entered the scene. Scientists had been using the derivative to find tangents and solve optimization problems, and the definite integral to find areas, volumes, and even arc lengths, decades before Newton went to college. Newton demonstrated an unparalleled facility in the use of these tools, but his real genius lay in two brilliant insights.

The first was that accumulation problems, the kinds of summations encapsulated in a definite integral, could be solved by reversing the process of differentiation. Today we call this the fundamental theorem of calculus, although I prefer the original name, the fundamental theorem of integral calculus. It connects two totally different ways of understanding integration: an accumulation problem involving the limit of a summation and the change in an antiderivative of the integrand. Actually, others, including Barrow and Gregory, had discovered this connection before Newton. Newton's breakthrough came from his realization of how incredibly powerful this was as a tool and an organizing concept for the many disparate results that were lying around. That is an important lesson for the scientist. Often, the greatest contribution is not a new discovery but the recognition, among what is already known, of what is truly important and central.
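The two ways of understanding integration can be compared directly: a finite accumulation (a Riemann sum) on one side and the change in an antiderivative on the other. A minimal sketch (the function name is my own):

```python
import math

def accumulate(f, a, b, n=100000):
    """Midpoint Riemann-sum approximation of the integral of f
    over [a, b]: the 'accumulation' side of the fundamental theorem."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Fundamental theorem of integral calculus: the accumulation equals
# the change in an antiderivative.  For f(x) = cos(x), an
# antiderivative is sin(x), so both sides give sin(1) - sin(0).
print(accumulate(math.cos, 0.0, 1.0))
print(math.sin(1.0) - math.sin(0.0))
```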

The second was directly related to understanding the motion of the planets. The tools of calculus that Newton picked up as a student had all been applied to geometric problems: finding areas, volumes, and tangents. He was the first to recognize that these could provide insight into the workings of dynamical systems. His Mathematical Principles of Natural Philosophy became the template for mathematical modeling. He first set up simple differential equations that encapsulated the basic dynamic relationships. He then built a structure on top of these that progressively added layers of complexity as he sought to more thoroughly model the full nature of our solar system. Again, a lesson for the scientist. Seeing new and unexpected potential in existing results can revolutionize our understanding of the world.

David G: In physics we routinely think of a derivative as a ratio of two differentials. What is the history of that and is it true that it irritates the pure mathematician?

David B: We owe the derivative notation, as a ratio of differentials, to Leibniz, who also invented our integral notation, which alludes to the fact that the integral is like a summation. Leibniz did not go so far as to assert that the derivative actually was a ratio of differentials, nor that the integral actually was a sum, but he realized the usefulness of thinking of them this way. It was the Bernoullis, two Swiss brothers whom Leibniz mentored, who totally embraced the idea of infinitesimals as actual mathematical entities with which one could work. To a mathematician, if y is a function of x, y = f(x), then their differentials are defined by the relationship dy = f'(x) dx. So the ratio of differentials, dy/dx, means f'(x). Thus, to a mathematician, saying that the derivative is the ratio of differentials is to say nothing.
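The mathematician's definition dy = f'(x) dx can be tested numerically: the differential is the linear part of the actual change in f, and the discrepancy shrinks faster than dx itself. A minimal sketch (names are my own):

```python
def differential(fprime, x, dx):
    """dy = f'(x) * dx: the linear approximation to f(x + dx) - f(x)."""
    return fprime(x) * dx

f = lambda x: x ** 3
fprime = lambda x: 3 * x ** 2   # derivative of x^3

x = 2.0
for dx in (0.1, 0.01, 0.001):
    actual = f(x + dx) - f(x)               # the true change in f
    dy = differential(fprime, x, dx)        # the differential
    print(dx, actual, dy, actual - dy)      # error shrinks like dx**2
```

The error term actual - dy behaves like 3x·dx², so dy is not the change in f; it is the change with everything of higher order in dx discarded, which is what the definition encodes without ever invoking an infinitesimal.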

David G: The concept of ‘work’ brings together physics, algebra, and calculus, and is therefore both abstract and tricky. I motivate its definition based on physical ideas, and while I am partially successful in conveying the physical reason for the dot product by thinking about speeding an object up or slowing it down, or simply changing its direction (circular motion), I am less successful with the notion of an infinitesimal under the integral sign. In fact, students routinely show me calculations with finite quantities under an integral sign. Does this happen in calculus class as well, or is there something that happens in the transition from math to physics?

David B: The dot product of two vectors is found by decomposing the first vector into two components, one parallel to the second vector and the other perpendicular. The value of the dot product is the magnitude of the second vector multiplied by the magnitude of the component of the first vector in the parallel direction. Make it negative if they point in opposite directions. Work is force times distance. When we are looking at the work done by a force, the only part of the force that is doing work is the component parallel to the motion. The differential of motion (dr) can be thought of as a vector describing a small displacement. We take the dot product of the force with this vector because we are only interested in the component of the force in the direction of motion. Adding these up and taking the limit gives us the definite integral of F dot dr.
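The "adding up F dot dr" picture can be carried out literally: chop the path into small displacements, dot each one with the force there, and sum. A minimal sketch for a path parameterized on [0, 1] (names are my own):

```python
def work(force, path, n=10000):
    """Approximate the line integral of F . dr along a parameterized
    path r(t), 0 <= t <= 1, by summing force-at-midpoint dotted with
    each small displacement vector."""
    total = 0.0
    for i in range(n):
        t0, t1 = i / n, (i + 1) / n
        x0, y0 = path(t0)
        x1, y1 = path(t1)
        xm, ym = path((t0 + t1) / 2)   # evaluate F mid-segment
        fx, fy = force(xm, ym)
        total += fx * (x1 - x0) + fy * (y1 - y0)   # F . dr
    return total

# F = (y, x) is the gradient of xy, so the work from (0, 0) to (1, 1)
# is xy at the endpoints: 1 - 0 = 1, whatever the path taken.
print(work(lambda x, y: (y, x), lambda t: (t, t * t)))  # ~1.0
```

Only the component of F along each displacement contributes, because the dot product discards the perpendicular part; the sum then converges to the definite integral as the displacements shrink.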

David G: The derivations of the algebra-based mechanics course, focused on finite differences, make it difficult for students to grasp the principles, and memorization tends to replace understanding as a result. Do you know what the origin of this course is?

David B: No, I don't. But I can say a bit about the origins of vector algebra. The following is taken from the opening to chapter 2 of my book Second Year Calculus: From Celestial Mechanics to Special Relativity:

Vector algebra came into its own not because it helped in understanding celestial mechanics, but because it clarified the then-emerging explanations of electricity and magnetism. With roots in the work on quaternions by William Rowan Hamilton (1805–1865) and the calculus of extension by Hermann Günther Grassmann (1809–1877), vector algebra began to gain acceptance when it was employed by James Clerk Maxwell (1831–1879) in his explanations of electricity and magnetism in the 1870s. It received its first full published exposition in 1893 in the first volume of Oliver Heaviside's (1850–1925) Electromagnetic Theory. With the publication of Vector Analysis in 1901 by Edwin B. Wilson (1879–1964), based on lectures by J. Willard Gibbs (1839–1903), the language of vector algebra became entrenched in mathematical physics.

David G: Thank you Professor!