So, the other day I was messing around with some really tiny numbers in my code, and I stumbled upon something interesting. I was dealing with this value, like, 1.e-28, you know? Super small.

At first, I just typed it in, 1.e-28, and figured, “Yeah, that should work.” I mean, it’s how I usually write scientific notation, and it seemed to be doing the trick. But then I got curious about how the computer actually sees this number.
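Here’s a minimal way to peek at that (I’m assuming Python throughout; the variable name is mine):

```python
# A minimal sketch (assuming Python): peek at how the interpreter
# actually stores the literal 1.e-28.
import struct

x = 1.e-28
print(x)          # the parser normalizes the literal to 1e-28
print(type(x))    # <class 'float'> -- a 64-bit IEEE 754 double in CPython
print(struct.pack(">d", x).hex())  # the raw bytes of that double, as hex
```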
Experiment Time!
I started playing around with different ways of representing this number. I went the obvious route first:
- Just typing 1.e-28. Seemed straightforward enough. (Quick check right after this list.)
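It turns out all the common spellings of the literal parse to exactly the same float:

```python
# Quick check (assuming Python): different spellings, one value.
a = 1.e-28
b = 1e-28
c = 1.0e-28
print(a == b == c)  # True -- all three parse to the same 64-bit double
print(repr(a))      # 1e-28 -- the spelling Python picks when printing it back
```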
I ran a few simple calculations, multiplying and dividing it by other numbers, just to see if anything weird happened. For the most part, it behaved as expected. No surprises there.
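It was something along these lines (the exact numbers I used are stand-ins here, not my originals):

```python
# The kind of arithmetic I tried -- stand-in values for illustration.
x = 1.e-28
print(x * 1000)  # on the order of 1e-25
print(x / 1000)  # on the order of 1e-31
print(x * x)     # about 1e-56 -- still comfortably inside double range
```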
Then I wrote a small program to print out the results of my calculations. I kept running it and playing with the inputs, and the results all looked right.
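The program looked roughly like this (a reconstruction; scale() and the sample factors are my stand-ins, not the originals):

```python
# A rough reconstruction of that little program (assuming Python).
TINY = 1.e-28

def scale(factor):
    """Multiply the tiny constant by a factor and return the result."""
    return TINY * factor

if __name__ == "__main__":
    for factor in (1.0, 1e10, 1e20, 1e30):
        print(f"{factor:>8.0e} -> {scale(factor):.6e}")
```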

It’s good to keep in mind that when you’re working with numbers this small, you’re getting closer to the limits of what the computer can accurately represent: a 64-bit double carries only about 15 to 17 significant digits, and somewhere below 1e-308 values start to lose precision and eventually underflow to zero.
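If you want to see where those edges actually sit (Python again), sys.float_info spells them out:

```python
# Where the edges actually are: sys.float_info reports the limits
# of the 64-bit doubles Python uses.
import sys

print(sys.float_info.min)    # smallest normal double, about 2.2e-308
print(sys.float_info.eps)    # about 2.2e-16 -- relative precision of a double
# 1e-28 is far above the underflow threshold, but mixing magnitudes
# is where accuracy quietly disappears:
print(1.0 + 1e-28 == 1.0)    # True -- the tiny term gets rounded away
```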