Investigating limits with Julia

The concept of a limit is what makes calculus possible. The formal definition of a limit is difficult to appreciate, but very important for providing a firm foundation to build on. The intuition of limits was known as early as the Greeks: Archimedes figured out the area under a parabola over 2000 years ago by the method of exhaustion, a limiting process. Fermat in 1629 essentially took a limit to find the slope of a tangent line to a polynomial curve. Newton, in the late 1600s, exploited the idea in his development of calculus (as did Leibniz). Yet it wasn't until the 1800s that Cauchy and Weierstrass put the idea on a firm footing with the \(\epsilon-\delta\) formulation of a limit. We leave that for a calculus class, and instead focus here on the intuition.

The intuitive idea of

\[ \lim_{x \rightarrow c} f(x) = L \]

is that as \(x\) gets "close" to, or approaches, \(c\), then \(f(x)\) gets arbitrarily close to \(L\).

Getting close

It is this intuitive idea we can explore with the computer, though we shall see that floating point representations place fundamental limits on how close we can get to a value.

The key to our analysis here is

  • defining how we get close to \(c\)

  • imagining how \(f(x)\) is getting close -- if it is -- to some \(L\) which may be unknown.

Example: \(\sin(x)/x\)

Let's look at a simple example, the limit of \(f(x) = \sin(x)/x\) as \(x\) approaches \(0\) (a function Euler investigated as early as 1735). Even though both \(\sin(x)\) and \(x\) are continuous functions everywhere, their ratio is not defined at \(0\) due to division by \(0\). But does the function have a limit there?

First, we note that this function is an even function, so if you are being thorough and worrying about getting close from the left and the right, you need only check one side.

Next, we define some values getting close to \(0\):

hs = [ (1/10)^i for i in 1:10 ]
## [0.1,0.010000000000000002,0.0010000000000000002,0.00010000000000000005,1.0000000000000003e-5,1.0000000000000004e-6,1.0000000000000004e-7,1.0000000000000004e-8,1.0000000000000005e-9,1.0000000000000006e-10]

These are just powers of 1/10 approaching 0.

The hs vector could be defined more succinctly in other ways: logspace(-1, -10, 10) or (1/10) .^ (1:10), say, as shown below. But beware: 10 .^ -(1:10) fails, as an integer base cannot be raised to a negative integer power, and 10.0 .^ (-1:10) uses the wrong range of exponents, though 10.0 .^ -(1:10) works. Rather than add a single-purpose function such as logspace to our vocabulary, or wrestle with these order-of-operations and integer-versus-floating-point issues, we suggest a comprehension for this task.
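
For the record, a sketch of these alternatives (logspace was available in Julia of this vintage):

hs = logspace(-1, -10, 10)     ## 10.0^-1 down to 10.0^-10
hs = (1/10) .^ (1:10)          ## a floating point base broadcasts fine
hs = 10.0 .^ -(1:10)           ## note -(1:10), not -1:10
## 10 .^ -(1:10)               ## errors: integer base, negative integer power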

We can then just look at the values of \(f(x)\):

f(x) = sin(x)/x
ys = [ f(x) for x in hs ]
## {0.9983341664682815,0.9999833334166665,0.9999998333333416,0.9999999983333334,0.9999999999833332,0.9999999999998334,0.9999999999999983,1.0,1.0,1.0}

Alternatively, we can use map to create the ys:

ys = map(f, hs)
## [0.9983341664682815,0.9999833334166665,0.9999998333333416,0.9999999983333334,0.9999999999833332,0.9999999999998334,0.9999999999999983,1.0,1.0,1.0]

It is easier to see what is happening by arranging the \(x\) and \(y\) values into a table. The following array notation horizontally combines the two column vectors hs and ys:

[hs ys]
## 10x2 Array{Float64,2}:
##  0.1      0.998334
##  0.01     0.999983
##  0.001    1.0     
##  0.0001   1.0     
##  1.0e-5   1.0     
##  1.0e-6   1.0     
##  1.0e-7   1.0     
##  1.0e-8   1.0     
##  1.0e-9   1.0     
##  1.0e-10  1.0     

As \(x\) gets close to \(0\), \(f(x)\) gets close to \(1\); this limit is clearly \(1\).

Example: limits at \(c\)

Finding a limit at some point \(c\), which is non-zero, is not much more difficult. The ingredients involve numbers close to \(c\) -- and not \(0\) -- but we can reuse our small values, hs, by simply adding them to \(c\).

For example, to compute numerically (a tedious problem to do algebraically)

\[ \lim_{x \rightarrow 25} \frac{\sqrt{x} - 5}{\sqrt{x-16} - 3} \]

we have the following:

f(x) = (sqrt(x) - 5) / (sqrt(x-16) - 3)
c = 25;
xs = c + hs;            ## numbers close -- but not equal -- to c
ys = map(f, xs);
[xs ys]
## 10x2 Array{Float64,2}:
##  25.1     0.601062
##  25.01    0.600107
##  25.001   0.600011
##  25.0001  0.600001
##  25.0     0.6     
##  25.0     0.6     
##  25.0     0.6     
##  25.0     0.6     
##  25.0     0.6     
##  25.0     0.600016

A quick investigation of the table demonstrates the limit should be \(0.6\). (That some xs values print as 25.0 and the last y value is 0.600016, not 0.6, hints at floating point issues we return to below.)

Example: the slope of the secant line

A very important limit in calculus is the derivative formula, written here to emphasize the secant line aspect:

\[ \lim_{x \rightarrow c} \frac{f(x) - f( c)}{x-c}. \]

Let's take \(c = 1\) and \(f(x) = x^x\) and compute the limit above:

f(x) = x^x
c = 1;
xs = c + hs;
ys = [(f(x) - f(c))/(x-c) for x in xs];
[xs ys]
## 10x2 Array{Any,2}:
##  1.1      1.10534
##  1.01     1.01005
##  1.001    1.001  
##  1.0001   1.0001 
##  1.00001  1.00001
##  1.0      1.0    
##  1.0      1.0    
##  1.0      1.0    
##  1.0      1.0    
##  1.0      1.0    

This is the right limit, as we only looked at values bigger than \(c\). A left limit is found similarly, using xs = c - hs, as sketched below. Doing so shows no difference as \(x\) gets close to \(c\), and we take this as evidence that the limit is \(1\).
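
For completeness, the matching left-limit computation, reusing f, c, and hs from above:

xs = c - hs;            ## numbers close to c, from the left
ys = [(f(x) - f(c))/(x-c) for x in xs];
[xs ys]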

Practice

Question

Find the limit as \(x\) goes to \(2\) of

\[ f(x) = \frac{3x^2 - x -10}{x^2 - 4} \]

Question

Find the limit as \(x\) goes to \(-2\) of

\[ f(x) = \frac{\frac{1}{x} + \frac{1}{2}}{x^3 + 8} \]

Question

Find the limit as \(x\) goes to \(27\) of

\[ f(x) = \frac{x - 27}{x^{1/3} - 3} \]

Question

Find the limit as \(x\) goes to \(0\) of

\[ (1+x)^{1/x} \]

Question

Find the limit as \(x\) goes to \(\pi/2\) of

\[ \frac{\tan (2x)}{x - \pi/2} \]

Question (limit properties)

There are several properties of limits that allow one to break down more complicated problems into smaller subproblems. For example,

\[ \lim (f(x) + g(x)) = \lim f(x) + \lim g(x) \]

is notation to indicate that one can take the limit of the sum of two functions, or take the limit of each first and then add, and the answer will be unchanged, provided all the limits in question exist.

Use one or the other to find the limit of \(f(x) = \sin(x) + \tan(x) + \cos(x)\) as \(x\) goes to \(0\).

Question (from Strang, attributed to Stein)

Look at the figure of a sector of a circle of radius 1 and the subtended section.

[Figure: a sector of a circle of radius 1, showing the triangle and the shaded subtended region]

Let \(f(\theta)\) be the area of the triangle and \(g(\theta)\) the shaded region. What is the limit

\[ \lim_{\theta \rightarrow 0+} \frac{f(\theta)}{g(\theta)}? \]

Question

Does this function have a limit as \(h\) goes to \(0\) from the right (assume \(h>0\))?

\[ \frac{h^h - 1}{h} \]

Graphical approach

The graphical approach to investigating limits is simple: make a graph near \(c\) and investigate what becomes of \(f(x)\) near \(x=c\). As with finding zeros, the graphical approach is good for getting a quick estimate of an answer, and when more precise answers are needed a numeric approach can be used.

To illustrate, to see if \(f(x) = \sin(x)/x\) has a limit at \(0\) we simply graph near \(0\). To avoid \(x=0\), we plot over the interval \((0.01, 1/2)\) via:

f(x) = sin(x)/x
plot(f, 0.01, 1/2)              ## avoids 0

We can see that the value of the limit is near 1, as that is what happens to \(f(x)\) as \(x\) gets close to 0.

Removable singularities

Some limit problems involve functions with a removable singularity. That is, you can redefine the function at a point to make it continuous at that point. A typical example is \(f(x) = \sin(x)/x\) at \(0\). This function is not defined for \(x=0\). However, we can define it via \(f(0)=1\), which, as we saw, is the limit at \(0\). This makes \(f\) a continuous function, so \(x=0\) is a removable singularity.

For functions with removable singularities a graph may not even show the issue. This is because we ultimately represent a graph as nothing more than a dot-to-dot plot and if those "dots" don't happen to include the singularity, the graph won't see it.
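
For instance, plotting \(f(x)=\sin(x)/x\) over an interval containing \(0\) typically shows no gap, as the sampled points usually miss \(x=0\) exactly:

f(x) = sin(x)/x
plot(f, -1, 1)          ## sample points typically miss x = 0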

Inf and NaN

Some computations end up being valid floating point values, but not real numbers. These are NaN and Inf. We can make these numbers easily enough:

(1/0, 0/0)
## (Inf,NaN)

These more or less match what happens in the limit (\(0/0\) is indeterminate; \(1/0\) is infinite, though we usually have a sign, e.g. \(1/x\) at \(0\) has different one-sided limits). These values can be problematic when graphing. (Converting them to NaN prior to plotting can often avoid the issue.)
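
For example, a minimal sketch of that conversion (the helper name gg is ours):

g(x) = 1/x
gg(x) = (y = g(x); isfinite(y) ? y : NaN)   ## Inf or -Inf becomes NaN before plotting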

Redefining a function

Using the ternary operator allows us to easily make a continuous function from one with a single removable singularity:

f(x) = x == 0 ? 1 : sin(x)/x
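
Now \(f\) is defined at \(0\) as well; a quick check:

f(0)
## 1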

Limits of ratios

The interesting thing about this limit is that both \(\sin(x)\) and \(x\) go to zero as \(x\) goes to \(0\), so the ratio may be undefined. It can be instructive to look at each term separately and compare. Graphically we can do this by layering two graphs:

plot([sin, x->x], -1/2, 1/2)

We see that the line \(y=x\) is a very good approximation to \(\sin(x)\) near \(0\), so it makes sense that their ratio goes to \(1\).

Practice

Question

Graphically look at the following limit

\[ \lim_{x \rightarrow -2} \frac{x}{x+1} \frac{x^2}{x^2 + 4} \]

What is the value?

Question

Let \(f(x) = \sin(x)^{\sin(x)}, \quad x > 0\). This function has a right limit at \(0\) which you can see graphically. Plot \(f\) over an appropriate domain to find this answer:

Question

Let \(f(x) = 1 - \cos(x)\) and \(g(x) = x^2/2\). Graph both \(f(x)\) and \(g(x)\) on the same graph over a domain containing \(0\). Use the graph to estimate

\[ \lim_{x\rightarrow 0} f(x)/g(x). \]

Question

Let \(f(x) = \sin(x)\) and \(g(x) = x^2\). Plot both \(f\) and \(g\) on the same graph with a domain containing \(0\). Based on the graph, answer:

Question

The function \(f(x) = (x^2 - 4)/(x-2)\) has a removable singularity at \(x=2\). What value would you redefine \(f(2)\) to be, to make \(f\) a continuous function?

Question

The highly oscillatory function

\[ f(x) = x^2 \sin(\frac{1}{x}) \]

has a removable singularity at \(x=0\). What value would you redefine \(f(0)\) to be, to make \(f\) a continuous function?

Question (no limit)

Some functions do not have a limit. Make a graph of \(\sin(1/x)\) from \(0.0001\) to \(1\) and look at the output. Why does a limit not exist?

Question (squeeze theorem)

Let's look at the function \(f(x) = x \sin(1/x)\). A graph around \(0\) can be made with:

f(x) = x == 0 ? NaN : x * sin(1/x)
plot(f, -1/4, 1/4)

This graph clearly oscillates near \(0\). To the graph of \(f\), add the graphs of both \(g(x) = |x|\) and \(h(x) = - |x|\). From this graph it is easy to see by the "squeeze theorem" that the limit at \(x=0\) is \(0\). Why?
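
A sketch of that layered plot, in the style used earlier:

g(x) = abs(x)
h(x) = -abs(x)
plot([f, g, h], -1/4, 1/4)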

Limits at infinity

The concept of a limit can be extended. For example, we can consider the limit as \(n\) goes to infinity of a sequence of values parameterized by \(n\).

Let's compute \(\pi\) as the circumference of a circle of radius 1 by approximating the circle with an inscribed regular polygon with \(n\) sides. The length, \(k\), of a given side is

\[ k = 2 \sin(\frac{2\pi}{2n}) \]

This can be seen by taking the isosceles triangle with central angle \(2\pi/n\) and two sides of length 1, and dropping a perpendicular from the center that bisects the opposite side; each half then has length \(\sin(2\pi/(2n))\).

[Figure: the inscribed isosceles triangle with central angle \(2\pi/n\) and two sides of length 1]

Thus the total length is

\[ l_n = n \cdot 2 \sin(\frac{2\pi}{2n}) \]

As \(n\) goes to \(\infty\) this should go to the circumference of the circle of radius 1 or \(2\pi\). (This was used as early as the Egyptians with an octagon to approximate \(\pi\).)

Let's see.

n_to_infinity = [10^i for i in 1:15]
## [10,100,1000,10000,100000,1000000,10000000,100000000,1000000000,10000000000,100000000000,1000000000000,10000000000000,100000000000000,1000000000000000]
l(n) =  n * 2sin( (2pi)/(2n) )
[l(n) for n in n_to_infinity]
## {6.180339887498948,6.282151815625658,6.283174971759127,6.28318520382533,6.283185306146043,6.283185307169251,6.283185307179482,6.2831853071795845,6.283185307179586,6.283185307179585,6.283185307179586,6.283185307179586,6.283185307179586,6.283185307179587,6.283185307179586}

To compare to \(2\pi\) we can divide instead:

[ l(n)/(2pi) for n in n_to_infinity ]
## {0.983631643083466,0.9998355147105487,0.9999983550667451,0.9999999835506593,0.9999999998355065,0.9999999999983552,0.9999999999999835,0.9999999999999997,1.0,0.9999999999999999,1.0,1.0,1.0,1.0000000000000002,1.0}

As the ratio has a limit of \(1\) we conclude that \(l(n)\) goes to \(2\pi\).

This isn't much different from what we did before, except we take increasingly larger values for \(n\), rather than values getting close to \(0\) for \(x\).

Practice

Question

Use an inscribed octagon to approximate \(\pi\) (e.g., take \(n=8\) and look at \(l(n)/2\), with \(l\) defined above). What do you get?

Question

Archimedes used interior \(96\)-gons and exterior ones to estimate \(\pi\) from above and below. The circumference of the exterior polygon is:

L(n) = n*2*tan((2*pi)/(2*n))

What is the difference between \(L(96)/2\) and \(l(96)/2\)?

Question (The Basel problem)

Euler looked at \(\sin(x)/x\) in his solution to the "Basel" problem, that is finding the sum of:

\[ 1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \frac{1}{5^2} + \cdots = \lim_{n \rightarrow \infty} \sum_{i=1}^n \frac{1}{i^2}. \]

Euler rewrote a series expansion for \(\sin(x)/x\) to get his famous answer of \(\pi^2/6\). Using this function

basel(n) = sum( [1/i^2 for i in 1:n] )

how big must \(n\) be so that pi^2/6 - basel(n) < 1e-3?

Question (and why not call it b?)

Jacob Bernoulli looked at the limit

\[ \lim_{n \rightarrow \infty} (1 + \frac{1}{n})^n \]

What should he have found?

Question

The sum \(1 + 1/2 + 1/3 + 1/4 + \cdots\) does not converge; however, this function:

euler_mascheroni(n) = sum([1/i for i in 1:n]) - log(n)

does have a limit as \(n\) goes to \(\infty\). Use:

[euler_mascheroni(i) for i in (10.0).^(1:7)]
## [0.6263831609742083,0.5822073316515288,0.5777155815682038,0.5772656640681948,0.5772206648931952,0.5772161649014453,0.5772157149015271]

to find an answer accurate to \(6\) decimal points.

The method of finding a limit numerically, as described above, is problematic for several reasons:

  • as seen, round off error can be a problem.

  • it can be hard to identify the answer, as it is a decimal approximation.

As to the first point, make sure that a graph can confirm any numeric analysis. For the latter, we present a function that can estimate a rational answer for a numeric one using continued fractions.

## find continued fraction expansion of x
function continued_fraction(x; cycles=10, tol=1e-5)
    out = Int[]

    x0 = x
    b = floor(x0)

    for i in 1:cycles
        push!(out, b)           ## record the integer part
        delta = x0 - b
        if abs(delta) < tol     ## fractional part negligible -- done
            return out
        end
        x0 = 1/delta            ## invert the fractional part and repeat
        b = floor(x0)
    end
    out
end

## reconstruct a rational number from a continued fraction expansion
function find_rational(xs)
    a = pop!(xs) // 1
    while length(xs) > 0
        a = pop!(xs) + 1/a      ## fold up from the last term
    end
    a
end

## give fraction approximating numeric value
frac(x;kwargs...) = find_rational(continued_fraction(x;kwargs...))

This function implements the standard continued fraction algorithm: it approximates a floating point value by a rational number. To see that it works, we have:

A(x) = (1-cos(x))/x^2
xs = [1/10^i for i in 1:5];
ys = map(A, xs);
map(frac, ys)
## [13564615//27151849,25979871//51960175,1//2,1//2,1//2]

which has us believe (properly) that the limit is \(1/2\).

Similarly, we have

A(x) = sin(x)/x
ys = map(A, xs)
## [0.9983341664682815,0.9999833334166665,0.9999998333333416,0.9999999983333334,0.9999999999833332]
map(frac, ys)
## [992161057//993816590,57524028879//57524987624,1//1,1//1,1//1]

giving a limit of \(1\).

Some limits involve irrational numbers, such as \(e\) or \(\pi\). To see that no good fraction estimate comes from a limit involving \(e\), we have

A(n) = (1 + 1/n)^n
ys = [A(i) for i in 1:8]
## [2.0,2.25,2.3703703703703702,2.44140625,2.4883199999999994,2.5216263717421135,2.546499697040712,2.565784513950348]
map(frac, ys)
## [2//1,9//4,64//27,625//256,7776//3125,87508//34703,96795//38011,20457//7973]

However, if one knows the limit may involve \(\pi\), it can be divided out. For example, the limit of \(1/1^2 + 1/2^2 + \cdots\) is \(\pi^2/6\), as Euler found. We see it takes a big \(n\) (1e6) to get this:

A(n) = sum([1/i^2 for i in 1:n])
ys = [A(10^i) for i in 1:7]
## [1.549767731166541,1.634983900184893,1.6439345666815595,1.6448340718480596,1.644924066898226,1.6449330668487256,1.644933966848231]
map(frac, ys/pi^2)
## [50065//318836,3971//23971,862949//5180842,655225//3931589,31226298//187358927,1//6,1//6]

Practice

Question

What is the limit as \(t\) goes to \(0\) of \(f(t) = 7t^2/\sin^2(8t)\)? (It is a fraction.)

Question

What is the limit as \(x\) goes to \(0\) of

\[ f(x) = \frac{\sin(3x)\sin(2x)}{x\sin(5x)}? \]

Question

Find the limit as \(x\) goes to \(0\) of

\[ f(x) = \frac{\sqrt{9 + x} - \sqrt{9 - x}}{x} \]

Floating point uncertainties

A limit related to \(\sin(x)/x \rightarrow 1\) is that

\[ \lim_{x \rightarrow 0} \frac{1-\cos(x)}{x^2} = \frac{1}{2}. \]

The two are related in that each is used to approximate a function near \(0\): \(\sin(x) \approx x\) and \(1 - \cos(x) \approx (1/2) x^2\). A graphic shows the latter approximation:

plot([x -> 1 - cos(x), x -> x^2/2], -pi, pi) 

Note in the figure how the parabola tracks the shape of the transformed cosine function very well near \(x=0\) but not necessarily far from \(0\).

Numerically, we have a different story: analyzing this limit exposes the limitations of our approach to finding limits.

Here is a first attempt:

f(x) = (1 - cos(x))/x^2
xs = [ (1/10)^i for i in 1:10 ];
ys = map(f, xs);
[xs ys]
## 10x2 Array{Float64,2}:
##  0.1      0.499583
##  0.01     0.499996
##  0.001    0.5     
##  0.0001   0.5     
##  1.0e-5   0.5     
##  1.0e-6   0.500044
##  1.0e-7   0.4996  
##  1.0e-8   0.0     
##  1.0e-9   0.0     
##  1.0e-10  0.0     

We notice something odd -- the values ultimately become \(0\) when we just said they should become \(1/2\). At least for most of the output things look okay, but then something goes terribly wrong.

Let's look at the two pieces. First the denominator:

[ x^2 for x in xs]'
## 1x10 Array{Any,2}:
##  0.01  0.0001  1.0e-6  1.0e-8  1.0e-10  …  1.0e-14  1.0e-16  1.0e-18  1.0e-20

There is nothing here to speak of. Julia's Float64 type follows the IEEE 754 floating point standard. Of the 64 bits, 1 is used for the sign (plus or minus) and 11 are used to store the (binary) exponent. (See the informative blog post Anatomy of a floating-point number for more.) As \(2^{11} = 2048\), roughly half of these exponents are negative and half are positive, giving a range of roughly 1e-308 to 1e308. We aren't even close to the lower end of that range with 1e-20.
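
We can check the extremes directly (realmin and realmax in Julia of this vintage; renamed floatmin and floatmax in modern Julia):

(realmin(), realmax())
## (2.2250738585072014e-308,1.7976931348623157e308)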

Now, let's look at the numerator. The issue is the difference between \(\cos(x)\) and 1. Let's look with the small values printed:

numerator = [ 1-cos(x) for x in xs ];
[xs numerator]
## 10x2 Array{Any,2}:
##  0.1      0.00499583 
##  0.01     4.99996e-5 
##  0.001    5.0e-7     
##  0.0001   5.0e-9     
##  1.0e-5   5.0e-11    
##  1.0e-6   5.00044e-13
##  1.0e-7   4.996e-15  
##  1.0e-8   0.0        
##  1.0e-9   0.0        
##  1.0e-10  0.0        

The values become \(0\) and are not just numbers close to \(0\). Hence, when dividing by even the smallest of numbers, the answer is simply \(0\).

In general, we add to our few rules of thumb for computation with floating-point numbers:

If we subtract two like-sized quantities our answer may have dramatically reduced precision.

In this specific case, by the time we get to \(10^{-8}\), the difference between \(\cos(x)\) and \(1\) is around 5e-17. However, in floating point representation there are fundamental limits to how close two different values can be. Of the 64 bits representing a number, 52 are used for the precision. (A number is represented as \(\pm p \cdot 2^e\), with a sign, the precision \(p\), and an exponent \(e\).) This restricts what can be represented, and ultimately gives a granularity if one looks too closely -- without working harder. In this particular case, the floating point approximation for \(1\) and that for \(\cos(x)\) are eventually the same value -- even if they are different mathematically.

The value

eps()
## 2.220446049250313e-16

measures the gap between \(1.0\) and the next representable floating point number larger than \(1.0\). (Of course, this has no answer in the real numbers, but floating point is a discrete approximation.)

What has happened with \(1-\cos(x)\) is that the mathematical value of \(\cos(x)\) gets too close to \(1\) when \(x = 10^{-8}\), so the difference is computed as \(0\), the two numbers having the same floating point representation. Since \(0\) divided by any non-zero number is zero, we get a reasonable explanation for the at-first unexpected behavior.
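
We can verify directly that the two values share a representation:

cos(1e-8) == 1.0
## true
1 - cos(1e-8)
## 0.0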

So be careful: we can get too close when looking "close."

Investigating how numbers are represented in floating point: prevfloat

Julia has some functions for working with floating point numbers. Since eps() is the gap between \(1.0\) and the next representable number larger than \(1.0\), some of you might wonder what the corresponding gap is to the next representable number less than \(1.0\). The prevfloat function gives this value. Here we see the issue arises between \(10^{-7}\) and \(10^{-8}\):

prevfloat(1.0) < cos(1e-7)
## false
prevfloat(1.0) < cos(1e-8)
## true

Floating point approximations differ depending on the location. Look at the difference between 1.0 - prevfloat(1.0) and nextfloat(1.0) - 1.0. Then look at how small nextfloat(0.0) is.
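
A sketch of those comparisons:

1.0 - prevfloat(1.0)    ## the gap below 1.0, half of eps()
## 1.1102230246251565e-16
nextfloat(1.0) - 1.0    ## the gap above 1.0, equal to eps()
## 2.220446049250313e-16
nextfloat(0.0)          ## the smallest positive Float64
## 5.0e-324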

Practice

Question

Is eps() == nextfloat(1.0)?

Question (bits etc.)

The bits function prints the bit representation of a number. For 64-bit floating point numbers, the first bit is the sign, the next 11 the exponent, and the last 52 are for the precision. Let's look at the values for a few:

bits(cos(1e-7))
## "0011111111101111111111111111111111111111111111111111111111010011"
bits(cos(1e-8))
## "0011111111110000000000000000000000000000000000000000000000000000"
bits(1.0)
## "0011111111110000000000000000000000000000000000000000000000000000"

We see here how two different real numbers have the same floating point representation.

For fun, what is the difference between bits(-1.0) and bits(1.0)?

Question (bits etc.)

What is the difference between bits(NaN) and bits(Inf)? (These two are coded specially in floating point.)