This is not a complete treatment of l'Hôpital's rule; it is just a collection of supplementary remarks. It really is assumed that you have read the textbook section first!
Recall, in a nutshell, what l'Hôpital tells us: Suppose we want to calculate the limit of f(x)/g(x) as x approaches some value a (a may be infinity), and suppose that both f(x) and g(x) approach zero in that limit, or both approach infinity. Then the limit is the same as the limit of f '(x)/g'(x) -- which may be easier to calculate.
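To make the mechanics concrete, here is a standard illustration (my example, not from the textbook): sin x and x both approach 0 as x -> 0, so the rule replaces (sin x)/x by (cos x)/1, whose limit is 1. A quick numerical check in plain Python:

```python
import math

# (sin x)/x is a 0/0 form at x = 0; l'Hopital replaces it by (cos x)/1.
# Both columns approach the same limit, 1.
for x in [0.1, 0.01, 0.001]:
    print(x, math.sin(x) / x, math.cos(x) / 1)
```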
Warning: The rule may be applied only when its hypotheses hold -- that is, when the numerator and denominator both approach 0 or both approach infinity. If you apply it when they don't, you will usually get a wrong answer. For example,
lim_{x->0+} (cos x)/x
is positive infinity, because the numerator approaches 1 while the denominator approaches 0. If we incorrectly apply l'Hôpital's rule, we get
lim_{x->0+} (- sin x)/1 = 0.
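A numerical check (plain Python, my addition) makes the failure obvious: the true quotient blows up as x -> 0+, while the bogus l'Hôpital answer would predict that it approaches 0.

```python
import math

# The hypotheses of l'Hopital's rule fail here: cos x -> 1, not 0.
# The quotient grows without bound as x -> 0+.
for x in [0.1, 0.01, 0.001]:
    print(x, math.cos(x) / x)
```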
Example 1: The limit at infinity of
(x^{2} - 1)/(2x^{2} + 1)
can be correctly evaluated by two successive applications of l'Hôpital's rule; but it can also be found by a rule we learned earlier: Divide both numerator and denominator by the highest power (x^{2}) and take the limit directly, getting 1/2 very quickly.
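Either method can be spot-checked by sampling large x (a stdlib-only sketch, my addition); the ratio settles at 1/2:

```python
def f(x):
    # (x^2 - 1)/(2x^2 + 1); dividing top and bottom by x^2 shows the limit is 1/2
    return (x**2 - 1) / (2 * x**2 + 1)

for x in [10, 1000, 100000]:
    print(x, f(x))
```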
Example 2: Consider the limit at 0 of
(cos x - 1)/x.
Since top and bottom both approach 0, it is permissible to use the rule and get
lim_{x->0} (- sin x)/1 = 0.
However, it is less mysterious and more instructive to recall (from the section on differentials) that the cosine function has the quadratic approximation
cos x ≈ 1 - x^{2}/2
and therefore the numerator behaves near 0 like - x^{2}/2. This obviously vanishes faster than the denominator, x, so the limit is zero.
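The quadratic approximation is easy to verify numerically (plain-Python sketch, my addition): (cos x - 1)/x tracks -x/2 closely near 0.

```python
import math

# Since cos x is approximately 1 - x^2/2 near 0, (cos x - 1)/x
# is approximately -x/2, which -> 0 as x -> 0.
for x in [0.1, 0.01, 0.001]:
    print(x, (math.cos(x) - 1) / x, -x / 2)
```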
The warning above is an absolute prohibition; what follows is merely friendly advice. Many students overuse l'Hôpital's rule, relying on it as a "black box" when they would learn much more (and solve the problems equally fast) by just taking a close look at, and comparing, the behavior of the numerator and denominator as x -> a.

In particular, you should first ask yourself whether you know the linear or quadratic approximation of the numerator or denominator around x = a. If so, it will probably become clear that the fraction is approximately equal to a constant times some power of (x - a), and the limit is infinite, finite, or zero depending on whether that exponent is negative, zero, or positive (see Example 2 above). Similarly, when a limit is taken at infinity, ask whether the numerator or denominator "behaves like" a certain power of x as x becomes large; the ratio of two powers approaches an obvious limit at infinity, depending on the two exponents (as in Example 1).
As homework you have a number of limits of ratios of exponentials, logarithms, and ordinary powers, which you should evaluate for practice. After a while, though, the results of such calculations become very predictable. They can be summarized in a list of general conclusions: as x -> infinity, an exponential function e^{cx} (with c > 0) grows faster than any power of x, and any positive power of x grows faster than ln x. In a ratio, whichever of the two functions grows faster determines whether the limit is infinite or zero.
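These growth comparisons are easy to spot-check numerically (stdlib-only sketch, my addition; the exponents 5 and 0.5 are chosen arbitrarily for illustration):

```python
import math

# As x grows: e^x / x^5 blows up (an exponential beats any power of x),
# while (ln x) / x^0.5 shrinks (any positive power of x beats the logarithm).
for x in [10, 50, 100]:
    print(x, math.exp(x) / x**5, math.log(x) / x**0.5)
```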
The limit (as x approaches a) of f(x)^{g(x)} can't always be found just by taking the limits of f and g individually -- just as the limit of f(x)/g(x) is not the quotient of the individual limits if those limits are both 0 or infinity. To see when and why there is a problem, note that the logarithm of f(x)^{g(x)} is g(x) ln f(x), and consider the limit of that. If g approaches 0 and the logarithm approaches positive or negative infinity, or if g approaches infinity and the logarithm approaches 0, then we have an indeterminate form of the type "0 times infinity". (What do we do then?) These three cases correspond, back in the original function, to the indeterminate forms infinity^{0} (f -> infinity, g -> 0), 0^{0} (f -> 0, g -> 0), and 1^{infinity} (f -> 1, g -> infinity).
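The standard way to handle the "0 times infinity" form is to rewrite the product as a quotient, g ln f = (ln f)/(1/g), and then apply l'Hôpital's rule to that. For instance (my example, not from the text): for x^{x} as x -> 0+, we have g = x -> 0 and ln f = ln x -> -infinity, yet x ln x -> 0, so x^{x} -> e^{0} = 1. A numerical check:

```python
import math

# x**x = exp(x * ln x).  The exponent x * ln x is a "0 times -infinity"
# product that in fact tends to 0, so x**x tends to exp(0) = 1.
for x in [0.1, 0.01, 0.001]:
    print(x, x * math.log(x), x**x)
```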
The rest of this page is just an interesting side remark, not important to the course.
You may be surprised to see the second item in the list, since 0^{0} at first sight looks like a fairly tame object. In fact, however, if you take a look on any given day at an Internet newsgroup devoted to discussion of mathematics, you are likely to find a lively debate going on about whether 0^{0} is equal to 1, or is undefined. (Sometimes these arguments are started by a "crank" who charges that the mathematics profession is covering up some embarrassing scandal associated with the meaning of 0^{0}.)
To see why there is a problem, note first that x^{0} = 1 for every nonzero x, while 0^{y} = 0 for every positive y; there is no way to define 0^{0} so that both of these familiar rules extend continuously to x = 0, y = 0.
Argument in favor of defining 0^{0} to be 1: There are many useful formulas involving expressions of the form x^{y} that remain meaningful and true when x and y equal 0 if 0^{0} is interpreted as 1. The simplest example is the binomial formula
(a + b)^{2} = a^{2}b^{0} + 2a^{1}b^{1} + a^{0}b^{2},
which remains correct at b = 0 only if b^{0} = 0^{0} is read as 1.
A related example is Taylor's theorem, which involves powers of (x - a).
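As it happens, most programming languages side with the "define it as 1" camp for exactly this reason: Python's ** operator, for example, evaluates 0**0 as 1, which is what keeps the expanded binomial formula valid when one of the terms is 0 (quick check, my addition):

```python
# Python defines 0**0 as 1, so the expanded binomial formula still
# agrees with (a + b)**2 even when b = 0.
a, b = 3.0, 0.0
lhs = (a + b)**2
rhs = a**2 * b**0 + 2 * a**1 * b**1 + a**0 * b**2
print(0**0, lhs, rhs)
```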
Argument in favor of leaving 0^{0} undefined: If f(a) = 0 = g(a), and both functions are continuous, then we would be tempted to jump to the conclusion that
lim_{x -> a} f(x)^{g(x)} = 1,
forgetting that 0^{0} really is an indeterminate form. To prevent confusion, we outlaw 0^{0}.
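To see the indeterminacy concretely, here are two limits (my examples, not from the text), both of the form 0^{0} as x -> 0+, with different values: x^{x} -> 1, while (e^{-1/x})^{x} equals e^{-1} for every x > 0, so its limit is e^{-1}, about 0.368.

```python
import math

# Both bases tend to 0 and both exponents tend to 0, yet the limits differ:
#   x**x            -> 1
#   (e^{-1/x})**x   -> e^{-1}  (it is identically e^{-1} for every x > 0)
for x in [0.5, 0.1, 0.02]:
    print(x, x**x, math.exp(-1 / x)**x)
```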
More information about this topic can be found in the sci.math Frequently Asked Questions list.
Something to think about: Why don't we define 0/0 to be 1, instead of insisting that 0/0 is undefined? (an answer)