I am writing a computer program that involves generating four random numbers, a, b, c, and d, whose sum should equal 100.
Here is the method I first came up with to achieve that goal, in pseudocode (a rough code sketch follows the steps):
Generate a random number out of 100. (Let's say it generates 16).
Assign this value as the first number, so a = 16.
Take away a from 100, which gives 84.
Generate a random number out of 84. (Let's say it generates 21).
Assign this value as the second number, so b = 21.
Take away b from 84, which gives 63.
Generate a random number out of 63. (Let's say it generates 40).
Assign this value as the third number, so c = 40.
Take away c from 63, which gives 23.
Assign the remainder as the fourth number, so d = 23.
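To make the steps concrete, here is a rough sketch of that method as I might code it. The language (Python), the function name, and the exact range of each draw (an integer from 0 up to the current remainder, inclusive) are just illustrative assumptions on my part, not settled details:

```python
import random

def split_100_sequential():
    # Draw each number from whatever remains, then take the
    # leftover as the fourth number.
    remaining = 100
    parts = []
    for _ in range(3):
        x = random.randint(0, remaining)  # a "random number out of" the current remainder
        parts.append(x)
        remaining -= x
    parts.append(remaining)  # d is whatever is left over, so the total is always 100
    return parts

print(split_100_sequential())  # e.g. [16, 21, 40, 23]
```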
However, for some reason I have a funny feeling about this method. Am I truly generating four random numbers that sum to 100 here? Would this be equivalent to generating four random numbers out of 100 over and over again and only accepting the set when the sum is exactly 100? Or am I introducing some sort of bias by picking a random number out of 100, then a random number out of the remainder, and so on? Thanks.
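For reference, here is roughly what I mean by that second approach, again as an illustrative Python sketch with made-up names: draw four independent numbers out of 100 and keep retrying until they happen to sum to exactly 100.

```python
import random

def split_100_rejection():
    # Draw four independent numbers out of 100 and only accept
    # the set when it sums to exactly 100.
    while True:
        parts = [random.randint(0, 100) for _ in range(4)]
        if sum(parts) == 100:
            return parts

print(split_100_rejection())  # e.g. [7, 31, 44, 18]
```

Both sketches always return four numbers that add up to 100, which is exactly why I can't tell whether the two methods are genuinely equivalent or whether the first one is biased in some way.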
