This is a very misleading article.
Floats are not distributed evenly across the number line. The number of floats between 0 and 1 is the same as the number of floats between 1 and 3, then between 3 and 7, and so on. Quantising well to integers means taking this varying precision into account, since the spacing between integers is always the same.
No, the number of floats between 0 and 1 is (approximately) the same as the number of floats between 1 and positive infinity. And this is the correct way for it to work: 1/x has roughly the same range and precision as x, so you don't need (as many) stupid obfuscatory algebraic transforms in your formulas to keep your intermediate values from over- or under-flowing.
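A quick way to sanity-check this for IEEE 754 doubles: for positive doubles, the raw bit pattern read as an unsigned integer orders the same way as the value, so differencing two bit patterns counts the representable doubles between them. A rough sketch in Python (not from the article; the helper name float_to_ordinal is just for illustration):

    import struct
    import sys

    def float_to_ordinal(x):
        # Reinterpret the bits of a positive double as an unsigned 64-bit int.
        # For positive IEEE 754 doubles this integer ordering matches numeric
        # ordering, so a difference of ordinals counts representable values.
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        return bits

    one = float_to_ordinal(1.0)
    tiny = float_to_ordinal(5e-324)                 # smallest positive (subnormal) double
    largest = float_to_ordinal(sys.float_info.max)  # largest finite double

    print("doubles in (0, 1):  ", one - tiny)           # = 1023 * 2**52 - 1
    print("doubles in [1, max]:", largest - one + 1)    # = 1024 * 2**52

The two counts differ only by the ratio 1023/1024, i.e. roughly half of all positive finite doubles lie below 1.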
Floating-point numbers have a fixed-precision mantissa and a fixed-precision exponent.
So you have xxxxx E xxx as an example of a 5-bit mantissa and a 3-bit exponent.
You have 2^5 floating point numbers for each possible exponent.
So no, you're wrong. For exponent 0 you have 2^5 values, and for exponents 1, 10 and 11 you have the same. The exponent 0b (0d) then contains the same number of possible mantissas as 1b (1d), 10b (2d) and 11b (3d). Which means that there are as many mantissas between [0,1) as there are between [1,3).
Why do you think the range [0,1) is represented by one exponent?
Because it is half the range expressed in 1 bit of exponent, the same way that [1,3) is half the range expressed in 2 bits of exponent. I'd have used [0,2) and [2,4), but that would confuse people used to thinking in base 10, which apparently includes the OP author.
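To make the back-and-forth concrete, here is a small enumeration sketch for the toy format from the example above (5-bit mantissa, 3-bit exponent). The bias of 3, subnormals at exponent 0, and the reserved top exponent are assumptions modelled on IEEE 754, not something stated in the thread; under those assumptions the two counts come out as 96 and 48, because several exponent values land below 1.

    # Enumerate every positive value of a toy binary float:
    # 5-bit mantissa, 3-bit exponent, assumed IEEE-style bias of 3,
    # subnormals at exponent 0, top exponent reserved for inf/NaN.
    MANT_BITS, EXP_BITS, BIAS = 5, 3, 3

    values = []
    for e in range(2 ** EXP_BITS - 1):            # skip the inf/NaN exponent
        for m in range(2 ** MANT_BITS):
            if e == 0:                            # subnormal: 0.mantissa * 2^(1-bias)
                v = (m / 2 ** MANT_BITS) * 2.0 ** (1 - BIAS)
            else:                                 # normal: 1.mantissa * 2^(e-bias)
                v = (1 + m / 2 ** MANT_BITS) * 2.0 ** (e - BIAS)
            values.append(v)

    print("values in [0, 1):", sum(1 for v in values if 0 <= v < 1))
    print("values in [1, 3):", sum(1 for v in values if 1 <= v < 3))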