Generating random reals

Posted by Michał ‘mina86’ Nazarewicz on 26th of December 2016

A well-known way of generating random floating-point numbers in the presence of a pseudo-random number generator (PRNG) is to divide the generator’s output by one plus its maximum possible return value.

extern uint64_t random_uint64(void);

double random_double(void) {
	return random_uint64() / (UINT64_MAX + 1.0);
}

This method is simple, effective, inefficient and wrong on a few levels.

Discretisation

The ‘floating’ in ‘floating-point’ refers to the decimal point’s ability to move around, scaling the number’s absolute precision while maintaining its relative precision. This enables representation of numbers spanning a broad range of magnitudes. For example, an IEEE 754 double precision number can encode both 2¹⁰⁰ and 2⁻¹⁰⁰ (but their sum is rounded to 2¹⁰⁰).

This impacts how densely ‘representable’ integers are distributed on the number line. All positive integers less than 2⁵³ can be represented, but only every second integer in the [2⁵³, 2⁵⁴) range can; only every fourth in [2⁵⁴, 2⁵⁵); and so on.
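The first boundary is easy to observe directly; for example:

#include <stdio.h>

int main(void) {
	double x = 9007199254740992.0;     /* 2⁵³ */
	printf("%d\n", x + 1.0 == x);      /* 1: 2⁵³ + 1 rounds back to 2⁵³ */
	printf("%d\n", x + 2.0 == x);      /* 0: 2⁵³ + 2 is representable */
	return 0;
}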

The effect is that when a random 64-bit unsigned integer is converted into a double, some numbers are more probable than others. For example, 2⁶³ is 2048 times more likely to be chosen than 1.

This isn’t yet a reason to run in circles while screaming in panic. We can quickly convince ourselves that (sufficiently large) ranges behave correctly. There may be fewer distinct values in the [2⋅2⁶², 3⋅2⁶²) range but overall it’ll end up with the same number of ‘hits’ as the [0, 2⁶²) range.

[Figure: Probability density of random_uint32() / (UINT32_MAX + 1.0f) near 1.0.]

Except that it won’t…

Non-uniform distribution

The density would behave correctly if the bits lost during conversion were zeroed. In that scenario, all integers from the [2⁶³, 2⁶³+2047] range map to its beginning, making the probability of 2⁶³ being chosen equal to that of the result falling anywhere in the [0, 2047] range.

The problem is that the default (and sometimes the only accessible) rounding mode is round to nearest. Because 2⁶³+2047 is closer to 2⁶³+2048 than to 2⁶³, it gets rounded to the former.

$ gcc -std=c99 -o rounding -x c - <<EOF && ./rounding
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
	double x = (UINT64_C(1) << 63) + 2047;          /* 2⁶³ + 2047 rounds up to 2⁶³ + 2048 */
	uint64_t y = (uint64_t)x - (UINT64_C(1) << 63); /* hence y ends up being 2048 */
	printf("%" PRIu64 "\n", y);
	return 0;
}
EOF
2048

In other words, roughly half of the integers in the [2⁶³, 2⁶³+2047] range end up mapped to 2⁶³+2048 instead of 2⁶³. Fortunately the latter gets some numbers from the previous range, [2⁶³-2048, 2⁶³-1], so it isn’t left out entirely.

It still isn’t treated fairly. 2⁶³-2048 is a 63-bit number and the precision loss there is ten bits (compared to eleven for 2⁶³+2048), which in turn means that only the top fourth of the [2⁶³-2048, 2⁶³-1] range maps to 2⁶³. Overall, 2⁶³ ‘gets’ roughly 1537 integers mapped to it while 2⁶³+2048 gets 2047.
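Those counts are easy to verify by brute force, for example with something along these lines:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
	const uint64_t base = UINT64_C(1) << 63;
	uint64_t to_base = 0, to_next = 0;
	/* Walk the neighbourhood of 2⁶³ and count how many integers convert
	   (with the default round-to-nearest) to 2⁶³ and to 2⁶³ + 2048
	   respectively; prints 1537 and 2047. */
	for (uint64_t i = base - 2048; i < base + 4096; ++i) {
		to_base += (double)i == (double)base;
		to_next += (double)i == (double)(base + 2048);
	}
	printf("%" PRIu64 " -> 2^63\n%" PRIu64 " -> 2^63+2048\n",
	       to_base, to_next);
	return 0;
}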

This behaviour favours large numbers and penalises powers of two especially strongly. Even so, the bias isn’t huge and for many casual uses might be perfectly acceptable.

But wait, it gets worse!

Off-by-one

Choosing a random element from a list is often done with the help of a random non-negative real number less than one. It’s enough to multiply it by the length of the list and truncate the result to obtain a random index:

template<class T, size_t size>
T &choose(T (&arr)[size]) {
	return arr[(size_t)(random_double() * size)];
}

This is perfect except for a tiny issue: memory past the end of the array may be accessed. In general the method is not guaranteed to work because N / (N + 1.0) is not always less than one.

RAND_MAX   / (RAND_MAX   + 1.0 ) = 0.999999999534339
RAND_MAX   / (RAND_MAX   + 1.0f) = 1
UINT32_MAX / (UINT32_MAX + 1.0 ) = 0.999999999767169
UINT32_MAX / (UINT32_MAX + 1.0f) = 1
UINT64_MAX / (UINT64_MAX + 1.0 ) = 1
UINT64_MAX / (UINT64_MAX + 1.0f) = 1
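The table can be reproduced with a handful of printf calls; the RAND_MAX rows assume a platform where it equals 2³¹-1 (as in glibc):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
	/* The float results are promoted to double for printing,
	   which does not change their value. */
	printf("%.15f\n", RAND_MAX   / (RAND_MAX   + 1.0 ));
	printf("%.15f\n", RAND_MAX   / (RAND_MAX   + 1.0f));
	printf("%.15f\n", UINT32_MAX / (UINT32_MAX + 1.0 ));
	printf("%.15f\n", UINT32_MAX / (UINT32_MAX + 1.0f));
	printf("%.15f\n", UINT64_MAX / (UINT64_MAX + 1.0 ));
	printf("%.15f\n", UINT64_MAX / (UINT64_MAX + 1.0f));
	return 0;
}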

If a PRNG outputs integers wider than the floating-point type’s precision (24 bits for single and 53 bits for double precision numbers), the division result has to be rounded to fit. The default of rounding to nearest results in the aberrations seen above.

This becomes a real bug if single precision arithmetic or a more capable pseudo-random number generator is used. For instance, xoroshiro128+ is trivial to implement and produces 64-bit outputs, which is more than double precision IEEE 754 numbers can handle.
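For reference, here’s a sketch of xoroshiro128+ modelled on the authors’ public-domain reference implementation (the rotation constants are the ones from the original 2016 version; the state must be seeded with values that aren’t all zero):

#include <stdint.h>

static uint64_t s[2] = {1, 2};  /* seed with anything that isn’t all zeros */

static inline uint64_t rotl(uint64_t x, int k) {
	return (x << k) | (x >> (64 - k));
}

uint64_t xoroshiro128plus(void) {
	const uint64_t s0 = s[0];
	uint64_t s1 = s[1];
	const uint64_t result = s0 + s1;
	s1 ^= s0;
	s[0] = rotl(s0, 55) ^ s1 ^ (s1 << 14);
	s[1] = rotl(s1, 36);
	return result;
}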

Rounding towards zero or towards minus infinity would avert the issue, but those modes are rarely the default and switching the rounding mode just for the sake of random number generation may be unfeasible.
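For illustration, switching the mode around the division might look roughly like this (assuming the C library defines FE_TOWARDZERO; strictly speaking the FENV_ACCESS pragma should be enabled as well):

#include <fenv.h>
#include <stdint.h>

extern uint64_t random_uint64(void);

double random_double_rtz(void) {
	const int saved = fegetround();
	fesetround(FE_TOWARDZERO);
	/* With truncating rounding the result can never reach 1.0, though
	   the distribution issues described earlier remain. */
	double ret = random_uint64() / (UINT64_MAX + 1.0);
	fesetround(saved);
	return ret;
}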

Solution

There are a few ways to solve the issue. My favourite method, which also happens to be the fastest, is to construct the floating-point number by hand. This is much easier than it sounds:

#include <stdint.h>
#include <string.h>

static inline double make_double(uint64_t x) {
	double ret;
	/* exponent of 1023 (the bias) + top 52 bits of x as mantissa = [1, 2) */
	x = (UINT64_C(1023) << 52) | (x >> 12);
	memcpy(&ret, &x, sizeof ret);
	return ret - 1.0;
}

memcpy prevents any aliasing issues and GCC is good at optimising it. Speaking of which, if portability is a concern, the number of bits in the mantissa can be queried via the DBL_MANT_DIG macro from the float.h header, and if an unknown floating-point representation is used, ldexp can be used instead.
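A portable (if slower) variant, call it make_double_portable, might sidestep the bit manipulation entirely: truncating the integer to DBL_MANT_DIG bits makes the conversion to double exact, and ldexp then scales it into the [0, 1) range:

#include <float.h>
#include <math.h>
#include <stdint.h>

double make_double_portable(uint64_t x) {
	/* Keep only as many bits as the mantissa can hold so the conversion
	   below cannot round up, then scale by 2 to the -DBL_MANT_DIG. */
	x >>= 64 - DBL_MANT_DIG;
	return ldexp((double)x, -DBL_MANT_DIG);
}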

The bit-manipulation approach can be packaged into a C/C++ library easily and, as promised, is not only effective but also efficient:

=== Using division ===

real	0m16.183s
user	0m16.172s
sys	0m0.000s

=== Using bit manipulations ===

real	0m6.660s
user	0m6.656s
sys	0m0.000s

The benchmark’s source code can be found in the random-reals git repository alongside the utilities used to gather the histogram shown in an earlier section.