Given an acceptable random number generator, how does one generate random unit vectors in n-dimensional space? Equivalently, how does one find a random point on the unit sphere in n dimensions? I had to solve this when trying to find the major axis of an ellipse in n-dimensional space. I had to solve it again when I was simulating light bouncing off of odd-shaped mirrors. It seems to be a common problem.

Suppose you know the distribution of coordinates in n dimensions, and that the function G(n,x) maps a random real number x in [-1,1] to a coordinate with the appropriate distribution in n-dimensional space. I suspect

```
  G(n,x) = sin(integral((1-x^2)^(-(n-2)/2)))
```
because the cross-section of a unit sphere at x is a unit sphere in the next dimension down, and its volume is K*(1-x^2)^((n-2)/2). That means G(2,x) = sin(pi*x/2) and G(3,x) = sin(arcsin(x)) = x, and I can't figure out G(4,x). That integral has a range of [-infinity,infinity] and I don't know how to scale it to [-pi/2,pi/2]. At any rate, suppose you know G(n,x) for all n. Then to generate a random unit vector, do
```
  if (flip coin, heads?)  v[0] =  1.0;   /* random sign for the first coordinate */
  else                    v[0] = -1.0;
  for (i=1;  i<n;  ++i)                  /* draw each coordinate's distribution */
      v[i] = G(i+1, random real number in [-1,1]);
  scale = 1.0;
  for (i=n;  i--;  )                     /* scale coordinates into the sphere */
  {
      temp  = sqrt(1 - v[i]*v[i]);
      v[i]  = v[i] * scale;
      scale = scale * temp;
  }
```

Steve Rayhawk pointed out that if you choose n numbers with a normal distribution, treat them as coordinates, and normalize the resulting vector to a unit vector, the result is a uniformly distributed unit vector in n dimensions.

The proof he supplied manipulated the formula for a normal distribution, showing that the product of n normal distributions is itself a normal distribution based on the distance of the point with those coordinates from the origin.
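In symbols: the joint density of n independent standard normal coordinates factors as

```
  f(x1)*f(x2)*...*f(xn) = (2*pi)^(-n/2) * exp(-(x1^2 + x2^2 + ... + xn^2)/2)
                        = (2*pi)^(-n/2) * exp(-r^2/2)
```

which depends only on the distance r of the point from the origin. The density is rotationally symmetric, so the normalized vector lands uniformly on the unit sphere.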

I copied the code below from someone who copied it from Knuth; it is the polar method for generating normal deviates. It finds two normally distributed coordinates, given a rand() function that returns values in the interval [0,1). I haven't confirmed that it is correct.

```
  do {                             /* pick a point in the unit disk, rejecting */
      v1 = 2*rand()-1;             /* points outside it and the origin itself  */
      v2 = 2*rand()-1;
      r  = v1*v1 + v2*v2;
  } while (r > 1 || r == 0);
  r = sqrt(-2*ln(r)/r);            /* turn it into two normal deviates */
  v1 *= r;
  v2 *= r;
```

If you would like to avoid trigonometry and logarithms, here's an entirely different approach. It's like a random number generator, except that it produces a sequence of vectors. Neighbors in the sequence aren't entirely independent, but vectors more than 8 apart pass my (pretty lame) tests. The long-run distribution is perfectly uniform.

The internal state is two vectors, x and y, which you must initialize beforehand to be unit vectors.

```
  copy x to z;
  x = x + y;
  x = x / ||x||;                      /* normalize the sum */
  for (i=n;  --i;  )                  /* random permutation with sign flips */
  {
      swap(x[i], x[rand()%i]);
      if (rand()%2) x[i] = -x[i];
  }
  copy z to y;
```
Adding x and y produces coordinates that aren't x and aren't y, although the largest coordinate may still be recognizably related. The swaps and negates are like applying one of a class of 2^n * n! (well distributed) rotations, which makes the general direction independent of x and y but preserves the coordinate values themselves (only their positions and signs change).

If you need lots of unit vectors, this is the way to go. But be careful that your application doesn't care that neighboring vectors are somewhat related. If it cares, take every 10th vector instead (which makes this 10 times slower) or figure out those G functions above.