When we are looking at one-dimensional root-finding algorithms, there are a variety of options. The first is the bisection method. I like the bisection method, because it makes delightful use of the intermediate value theorem. We can use the fact that a continuous function takes on every value between its values at two points. It may take on a value more than once, but that is someone else’s problem. Formally stated, the intermediate value theorem says:
If [latex]f[/latex] is continuous on a closed interval [latex][a,b][/latex], and [latex]c[/latex] is any number between [latex]f(a)[/latex] and [latex]f(b)[/latex] inclusive, then there is at least one number [latex]x[/latex] in the closed interval such that [latex]f(x)=c[/latex].
As a root-finding algorithm, bisection takes advantage of the fact that we know the function crosses zero between two points where its signs differ. Take, for example, [latex]f(x) = x^2 - 1[/latex]:
We can see that the function crosses zero. Of course, what we see is irrelevant; what is important is that the image tells us a story. Going back to our theorem, let [latex]a = 0[/latex] and [latex]b = 2[/latex]. Then we know [latex]f(a) = -1[/latex] and [latex]f(b) = 3[/latex]. The function is continuous, so there has to be some value of [latex]x[/latex] between [latex]a[/latex] and [latex]b[/latex] such that [latex]f(x) = 0[/latex]. We shortcut the search by repeatedly halving the interval, keeping whichever half still contains the sign change, until we are satisfied our solution is close enough.
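To make this concrete, here is a minimal sketch of bisection in Python. The function and interval are the ones from the example above; the tolerance and the helper name `bisect` are my own choices for illustration, not anything canonical:

```python
def bisect(f, a, b, tol=1e-8):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        mid = (a + b) / 2
        # Keep whichever half still contains the sign change
        if f(a) * f(mid) <= 0:
            b = mid
        else:
            a = mid
    return (a + b) / 2

# f(x) = x**2 - 1 on [0, 2]: f(0) = -1 and f(2) = 3, so a root is bracketed
print(bisect(lambda x: x**2 - 1, 0, 2))  # approximately 1.0
```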
But there’s a problem here. What if the function never takes on a negative value, so there is no sign change for us to trap? Take, for example, [latex]f(x) = (x - 1)^2[/latex]:
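Using the bisection sketch from above, the failure is easy to see: both endpoints evaluate positive, so there is no sign change to bracket, and the method never gets started.

```python
f = lambda x: (x - 1)**2
print(f(0), f(2))  # 1 1 -- both positive, no sign change between them
# bisect(f, 0, 2) would raise ValueError: there is nothing to bisect
```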
We can solve this problem with Newton’s method. Of course, the problem with Newton’s method is that you need to know the derivative of [latex]f(x)[/latex] a priori. Finding the derivative of [latex]f(x) = (x - 1)^2[/latex] is easy. Finding the derivative of [latex]f(x) = x^x[/latex] is hard. I mean, I know the answer, but don’t ask me how to figure it out. That’s what undergrads are for. So we can solve that problem with the secant method, which simply estimates the derivative along the way. What we learn here is that no method works perfectly in every scenario. That’s the hardest part of numerical analysis: understanding that the computer can’t do all the work.
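Here is a hedged sketch of both fixes in Python: Newton’s method, which needs the derivative handed to it, and the secant method, which estimates the slope from the last two iterates instead. The tolerances, iteration cap, and starting points are arbitrary choices for illustration:

```python
def newton(f, fprime, x0, tol=1e-8, max_iter=100):
    """Newton's method: follow the tangent line toward the root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)
    return x

def secant(f, x0, x1, tol=1e-8, max_iter=100):
    """Secant method: like Newton, but the slope of the line through
    the last two iterates stands in for the derivative."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# Newton handles f(x) = (x - 1)**2, where bisection had no sign change to use
print(newton(lambda x: (x - 1)**2, lambda x: 2 * (x - 1), x0=2.0))
# The secant method gets there without being told the derivative at all
print(secant(lambda x: (x - 1)**2, x0=0.0, x1=0.5))
```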