David Smith and I are now talking to each other in blog posts, and it is only a little weird. Also, I’ve been traveling and am a bit behind. In a comment on this post, he notes this:
I suspect the reason why R Core adopted the 0^0=1 definition is because of the binomial justification, R being a stats package after all.
I can’t think of any defense for NaN^0=1 though…
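For anyone who wants it spelled out, the binomial justification is a quick sketch: set x = 0 in the binomial theorem,

$$(0+y)^n = \sum_{k=0}^{n} \binom{n}{k}\, 0^k\, y^{n-k}.$$

Every term with k ≥ 1 vanishes, leaving only $\binom{n}{0}\, 0^0\, y^n$. The left side is $y^n$, so the identity holds exactly when $0^0 = 1$.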
Well, it turns out there’s a good reason. If we go back to C and try an experiment, we can observe the behavior for ourselves.
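A minimal test program along these lines does the trick (a sketch of the experiment; any call to pow with a zero exponent will do):

```c
/* compile with: cc nan_pow.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    /* NAN and INFINITY are the quiet-NaN and infinity macros from <math.h> (C99) */
    printf("NaN^0 = %g\n", pow(NAN, 0.0));      /* prints 1 on IEEE 754 platforms */
    printf("Inf^0 = %g\n", pow(INFINITY, 0.0)); /* prints 1 */
    printf("0^0   = %g\n", pow(0.0, 0.0));      /* prints 1 */
    return 0;
}
```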
Compiling and executing this on my Intel-based Mac shows that NaN^0 = 1, in agreement with R. But why? It goes back to IEEE 754, the standard for floating-point arithmetic. Section 9.2.1 is quite clear about what the behavior of the power operation should be:
pow(x, ±0) is 1 for any x (even a zero, quiet NaN, or infinity)
So the defense of NaN^0 = 1 is that the hardware ate the value. Or, more explicitly, that’s what the standard says the result should be. It might be interesting to see what NaN^0 does on a VAX or some other architecture that supports a non-IEEE floating-point format, though perhaps only for historical purposes.
Image by Bumper12 / Wikimedia Commons.