
You are a student, assigned to work in the cafeteria today, and it is your duty to divide the available food between all students. The food today is a sausage of 1 m length, and you need to cut it into as many pieces as there are students coming for lunch, including yourself.

The problem is, the knife is operated by the rotating door through which the students enter, so every time a student comes in, the knife comes down and you place the cut. There is no way for you to know if more students will come or not, so after each cut, the sausage should be cut into pieces of approximately equal length.

So here is the question: is it possible to place the cuts in such a way that the ratio of the largest to the smallest piece always stays below 2?

And if so, what is the smallest possible ratio?

Example 1 (unit is cm):

  • 1st cut: 50 : 50 ratio: 1
  • 2nd cut: 50 : 25 : 25 ratio: 2 - bad

Example 2

  • 1st cut: 40 : 60 ratio: 1.5
  • 2nd cut: 40 : 30 : 30 ratio: 1.33
  • 3rd cut: 20 : 20 : 30 : 30 ratio: 1.5
  • 4th cut: 20 : 20 : 30 : 15 : 15 ratio: 2 - bad

Sorry for the awful analogy; I think this is a math problem, but I have no real idea how to formulate it in a proper mathematical way.

Stenzel
  • Can we have more pieces of sausage than students? – dEmigOd Aug 14 '18 at 09:43
  • First cut at 1/3, second cut at 2/3, bisection of the first chunk, bisection of the second chunk, bisection of the third chunk. Then you have six chunks and you may continue bisecting them. – Jack D'Aurizio Aug 14 '18 at 09:45
  • @JackD'Aurizio: As soon as you have bisected anything once, you now have two equally large chunks. When you bisect one of those you will have a $2:1$ ratio on your hands. – hmakholm left over Monica Aug 14 '18 at 09:47
  • @HenningMakholm: sure, I was just showing that a ratio $\leq 2$ is achievable. – Jack D'Aurizio Aug 14 '18 at 09:49
  • @JackD'Aurizio: However, for that you don't need a 1/3 cut to begin with. Simply always bisect the largest chunk you have. – hmakholm left over Monica Aug 14 '18 at 09:53
  • I must be misunderstanding the problem, because the way I see it, there is no smallest possible ratio, and you can't guarantee anything. Let $a$ be the size of the first cut. Then if $N=$ a million billion trillion more students come in, the smallest piece will be at most $(1-a)/N$, and by picking large enough $N$ the ratio between smallest and largest (which is at least $a$) can be made arbitrarily small. – Jack M Aug 14 '18 at 11:37
  • @JackM You can cut segments that have been cut before, you aren't serving them as you cut them. – Mario Carneiro Aug 14 '18 at 13:10
  • You should add the implied condition that the sausage is only served after all students have entered. – Mindwin Remember Monica Aug 14 '18 at 17:09
  • Do you serve them as they come in, or do you serve them after they have all sat down? I was thinking that you just serve half your sausage to each kid as he came. Sort of sucks that one kid gets half a meter of sausage, and others are going to get molecules... – lmat - Reinstate Monica Aug 14 '18 at 18:46
  • @LimitedAtonement The variant where the sausages are served as they come in is quite different from this, and it is much harder to get absolute answers - it depends on your probability distribution of how many kids will come in. If you always assume that there are a fixed number of kids that will come after the current one, then exponential is the optimal. But if you assume that the number of kids that will come in is proportional to the number that have already arrived, the solution is much more equitable, with $n^{-1/2}$ decay instead of exponential. – Mario Carneiro Aug 15 '18 at 12:55

3 Answers

64

TLDR: $a_n=\log_2(1+1/n)$ works, and is the only smooth solution.

This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.

Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,\dots,a_{2n-1}$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_{2n}$ and $a_{2n+1}$. We have the following constraints:

$$a_1=1\qquad a_n=a_{2n}+a_{2n+1}\qquad a_n\ge a_{n+1}\qquad a_n<2a_{2n-1}$$

I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_{2n}+a_{2n+1}$ as well).

First, it is clear that the only degree of freedom is in the choice of cut, which is to say that if we take any sequence $b_n\in (1/2,1)$, then we can define $a_{2n}=a_nb_n$ and $a_{2n+1}=a_n(1-b_n)$, and this will completely define the sequence $a_n$.

Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence

$$1,\frac12,\frac12,\frac14,\frac14,\frac14,\frac14,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\dots$$ (which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does not tend to a limit - it varies between $1$ and $2$.
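To see this oscillation concretely, here is a minimal Python sketch; the closed form $a_n=2^{-\lfloor\log_2 n\rfloor}$ is just the baseline sequence above written out:

```python
import math

# Baseline "always bisect" sequence: a_n = 2^(-floor(log2 n)),
# i.e. 1, 1/2, 1/2, 1/4, 1/4, 1/4, 1/4, 1/8, ...
def a_baseline(n):
    return 2.0 ** (-math.floor(math.log2(n)))

values = [n * a_baseline(n) for n in range(1, 1025)]

# n*a_n never settles down: it equals 1 exactly at every power of two
# and sweeps back up toward 2 just before the next power of two.
assert min(values) == 1.0
assert max(values) > 1.99
```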

If we average this exponentially, by considering the function $g(x)=2^xa_{\lfloor 2^x\rfloor}$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]\to\Bbb R$ such that $g(x+n)\to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.

There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,\log_2 (3/2)]$ and scaled down on $[\log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $\int_0^{\log_2(3/2)}h(x)\,dx$ and $\int_{\log_2(3/2)}^1h(x)\,dx$.

Thus, to keep these balanced we should let $b_1=\log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[\log_2(2n),\log_2(2n+1)]$ and $[\log_2(2n+1),\log_2(2n+2)]$ (reduced$\bmod 1$), so we must set them to $$b_n=\frac{\log_2(2n+1)-\log_2(2n)}{\log_2(2n+2)-\log_2(2n)}=\frac{\log(1+1/2n)}{\log(1+1/n)}.$$

When we do this, a miracle occurs, and $a_n=\log_2(1+1/n)$ becomes analytically solvable: \begin{align} a_1&=\log_2(1+1/1)=1\\ a_{2n}+a_{2n+1}&=\log_2\Big(1+\frac1{2n}\Big)+\log_2\Big(1+\frac1{2n+1}\Big)\\ &=\log_2\left[\Big(1+\frac1{2n}\Big)\Big(1+\frac1{2n+1}\Big)\right]\\ &=\log_2\left[1+\frac{2n+(2n+1)+1}{2n(2n+1)}\right]\\ &=\log_2\left[1+\frac1n\right]=a_n. \end{align}
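As a sanity check, the sequence can be rebuilt numerically from the cut fractions $b_n$ and compared against the closed form; a short Python sketch:

```python
import math

def a_closed(n):
    # closed form a_n = log2(1 + 1/n)
    return math.log2(1 + 1 / n)

def b(n):
    # cut fraction b_n = log(1 + 1/(2n)) / log(1 + 1/n)
    return math.log(1 + 1 / (2 * n)) / math.log(1 + 1 / n)

# Build the sequence recursively: cut a_n into
# a_{2n} = a_n * b_n and a_{2n+1} = a_n * (1 - b_n).
A = {1: 1.0}
for n in range(1, 512):
    A[2 * n] = A[n] * b(n)
    A[2 * n + 1] = A[n] * (1 - b(n))

# The recursive construction agrees with the closed form.
assert all(abs(A[n] - a_closed(n)) < 1e-9 for n in A)
```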

As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then \begin{align} 2a_m&=2\log_2\Big(1+\frac1m\Big)=\log_2\Big(1+\frac1m\Big)^2=\log_2\Big(1+\frac2m+\frac1{m^2}\Big)\\ &\ge\log_2\Big(1+\frac2m\Big)>\log_2\Big(1+\frac2{2n}\Big)=a_n, \end{align}

so this is indeed a proper solution, and we have also attained our smoothness goal: $na_n$ converges, to $\frac 1{\log 2}=\log_2e$. It is also worth noting that the ratio of the largest to the smallest piece tends to exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.

It looks like this (in centimeters, rounded to the nearest whole centimeter, so the numbers may not add up to 100 exactly):

  • $58:42$, ratio = $1.41$
  • $42:32:26$, ratio = $1.58$
  • $32:26:22:19$, ratio = $1.67$
  • $26:22:19:17:15$, ratio = $1.73$
  • $22:19:17:15:14:13$, ratio = $1.77$

If you are working with a sequence of points treated$\bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=\log_2(2n+1)\bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:

                  [animation: the cut points $p_n$ sweeping across the unit interval]

A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0\le x\le 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0\le x\le 1$, and no solution can do better than that (in the limit) for any $x$.
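The achievability half of this claim can be checked numerically. Reading "the sausage at the $x$ position" as the piece at index $n+\lfloor xn\rfloor$ in the sorted list (my interpretation of "give or take a sausage"), the ratio of the largest piece to that one approaches $1+x$; the sketch below only verifies that this solution attains $c(x)=x+1$, not that no solution beats it:

```python
import math

def a(n):
    # closed form a_n = log2(1 + 1/n)
    return math.log2(1 + 1 / n)

n = 10**5  # a large stage; pieces are a_n, ..., a_(2n-1)
for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    k = min(n + int(x * n), 2 * n - 1)  # piece at fractional position x
    ratio = a(n) / a(k)
    # the largest piece is about (1 + x) times the piece at position x
    assert abs(ratio - (1 + x)) < 1e-3
```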

  • Wow, thank you very much! I must admit it will take me some time to fully understand this. – Stenzel Aug 14 '18 at 12:48
  • Thank you! Believe it or not I'm facing a similar programming problem trying to figure out the best way to pick up colors for players on a color wheel such that the hue difference between any two players is always as large as it can possibly be and new players can join the game at any time. Now this post hints me into the right direction. – GOTO 0 Aug 14 '18 at 18:09
  • @GOTO0 It's interesting that you mention that, because for that problem I usually recommend golden ratio spacing (i.e. $p_n=n\phi\bmod 1$). I know that is optimal when the points are evenly spaced, but it hadn't occurred to me before that nonuniform spacings like this log solution are actually better distributed. – Mario Carneiro Aug 14 '18 at 18:40
  • @GOTO0 If you use the golden ratio spacing mentioned by MarioCarneiro, then the upper bound ratio is 2.618033.., which isn't too far off from a ratio of 2. Much quicker to calculate too – Ryan Aug 15 '18 at 03:03
  • @MarioCarneiro Another "advantage" of the golden ratio method is that at any step, you have at most three different lengths $a,\phi a,\phi^2a$ to deal with, and then either cut an $a$ piece into $\phi a$ and $\phi^2 a$ – Hagen von Eitzen Aug 15 '18 at 08:04
  • @Ryan Well, I guess simply cutting the largest piece in two parts gives an even better distribution than the golden ratio spacing, with a maximum ratio of 2. It makes it also very efficient to calculate the cutting points $p_n$, if you write $n$ in base 2 and mirror the digits around the "decimal" (binary) point. It also guarantees that you have at most two different lengths at any time. So... – GOTO 0 Aug 15 '18 at 09:14
  • @GOTO0 The advantage of both the golden ratio and log distributions over bisection is that you don't have as much "bias" if you cut the process off at a non-power of two. This is more important for things like color wheel spacing where absolute position matters as well as interval size - you may notice that all the reddish colors are well separated but there's a big cluster around green for some reason. With golden ratio spacing the big and small chunks are distributed all around, so if you look at it visually it appears well distributed no matter when you cut it off. – Mario Carneiro Aug 15 '18 at 10:01
  • @MarioCarneiro True, that is why I like the $\log_2(1+1/n)$ distribution best. I just couldn't help noticing that many of the reported advantages of the golden ratio spacing can be outperformed with a trivial bisection cut. – GOTO 0 Aug 15 '18 at 10:23
  • May I ask how you generated the graphic? – cmh Aug 15 '18 at 14:21
  • @cmh Mathematica: Export["sausage.gif", Table[Graphics[ Line[{{#, 0}, {#, 1}} & /@ Append[Table[Mod[Log[2, 2 n + 1], 1], {n, 0, 2^k + 10 k}], 1]], PlotRange -> {{0, 1}, {0, 1}}, AspectRatio -> 1/10, ImageSize -> 400], {k, -0.1, 10, .025}]] – Mario Carneiro Aug 15 '18 at 16:33
38

YES, it is possible!

You mustn't cut a piece exactly in half, because eventually you will have to cut one of the two equal halves, and then you violate the requirement. So in fact, you must never have two equal pieces. Make the first cut so that the condition is not violated, say $60:40$.

From now on, assume that the ratio of biggest over smallest is strictly less than $2$ in a given round, and no two pieces are equal. (This holds for the $60:40$ cut.) We construct a good cut that maintains this property.

So at the next turn, pick the biggest piece, and cut it into two non-equal pieces in an $a:b$ ratio that is very close to equal (so $a/b\approx 1$). All you have to make sure is that

  • $a/b$ is so close to $1$ that the two new pieces are both smaller than the smallest piece in the last round.
  • $a/b$ is so close to $1$ that the smaller piece is bigger than half of the second biggest in the last round (which is going to become the biggest piece in this round).

Then the condition is preserved. For example, from $60:40$ you can move to $25:35:40$, then cut the forty to obtain $19:21:25:35$, etc.
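This inductive argument is easy to turn into a concrete strategy; here is a Python sketch (the specific off-center offset `delta`, half of the largest allowed value, is my own choice and just one way to satisfy the two bullet conditions):

```python
# Simulate the strategy: always cut the biggest piece slightly off-center,
# close enough to a bisection that both conditions in the answer hold.
def simulate(rounds):
    pieces = [0.6, 0.4]  # the first cut, as in the answer
    for _ in range(rounds):
        pieces.sort(reverse=True)
        big, second, small = pieces[0], pieces[1], pieces[-1]
        # delta > 0 keeps the two halves unequal; the min(...) keeps both
        # halves below `small` and the lesser half above second/2.
        delta = min(small - big / 2, (big - second) / 2) / 2
        pieces = [big / 2 + delta, big / 2 - delta] + pieces[1:]
        assert max(pieces) / min(pieces) < 2  # invariant is preserved
    return pieces

simulate(50)
```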

A. Pongrácz
  • (+1) A very nice abstract, inductive construction. Much lighter to read than the concrete analytical solution from the other answer (which of course is also valuable). – Adayah Aug 14 '18 at 19:12
13

You can't do better than a factor of $2$.

Assume to the contrary that you have a strategy such that the ratio between the largest and smallest remaining piece is always $<R$ for some $R<2$.

Then, first we can see that for every $\varepsilon>0$ there will eventually be two pieces whose ratio is at most $1+\varepsilon$. Otherwise the ratio between largest and smallest piece would be at least $(1+\varepsilon)^n$ which eventually exceeds $R$.

Once you have two pieces of length $a$ and $b$, with $a < b < (1+\varepsilon)a$, eventually you will have to cut one of them.

If you cut $a$ first, one of the pieces will be at most $\frac12 a < \frac12b$, so your goal has immediately failed.

However, if you cut $b$ first, one of the pieces will be at most $\frac12b < \frac{1+\varepsilon}2 a$, which means you've lost if we choose $\varepsilon$ to be small enough that $\frac{1+\varepsilon}2 < \frac1R$. And that is always possible if $R<2$.

  • Did I misunderstand the problem? According to the way I understood it, at each round the ratio of biggest and smallest elements must be less than 2. You showed that a stronger requirement is impossible, namely there exists no number $R$ less than $2$ so that the ratio is always less than $R$. This is a nice follow-up observation once you solve the problem, but this is not a solution to the problem. But then why was it accepted? – A. Pongrácz Aug 14 '18 at 10:00
  • @A.Pongrácz: Hmm, you're right. I misread the requirement, being too occupied with figuring out what the best limiting ratio is. – hmakholm left over Monica Aug 14 '18 at 10:12
  • The answer is an elegant proof that the requirement of the ratio being always < 2 is not possible, so I accepted it. I still wonder what would be the maximum number of cuts until you ultimately fail though. – Stenzel Aug 14 '18 at 10:15
  • You never fail! See my answer. Henning Makholm's answer is indeed elegant, and an important part of the post-mortem. It is not the solution, though. – A. Pongrácz Aug 14 '18 at 10:17
  • @Stenzel: You ought to at least upvote A. Pongrácz's answer, which answers the question you actually asked. – hmakholm left over Monica Aug 14 '18 at 10:18
  • Thank you! That would be nice. – A. Pongrácz Aug 14 '18 at 10:19
  • @Stenzel Sorry for pressing, but I think you still have it wrong. You wrote: "The answer is an elegant proof that the requirement of the ratio being always < 2 is not possible" No, that is not what was shown by Henning Makholm, but the impossibility of a stronger condition. – A. Pongrácz Aug 14 '18 at 10:24
  • @Stenzel You're wrong. It's not true that Henning Makholm's 'answer is an elegant proof that the requirement of the ratio being always < 2 is not possible'. Henning Makholm proves that, for any chosen $R$ less than $2$, the obtained ratio will eventually get above $R$. Which is much stronger than what you required. OTOH, A.Pongrácz proves exactly what you requested, namely that you can keep the ratio below $2$ — it will inevitably approach $2$ (which follows from the proof by H.M.) but it can stay below $2$. – CiaPan Aug 14 '18 at 11:02
  • Thank you all, I am still trying to run a numerical simulation to prove the solution of @a-pongrácz, so far it always exceeds 2 eventually, so I suspect I am still doing it wrong. As soon as I can get it to stay below or at 2, I'll change the accepted solution if that is possible. Also, I'm new to this site, so please forgive me if I am not familiar with the proper procedures here. – Stenzel Aug 14 '18 at 11:31
  • @Stenzel: When you do numerical simulation, you must always remember numerical error and instability. In particular, if you keep dividing a floating point number by 2 in many languages, it will eventually become zero or underflow. Also, even if you use arbitrary precision floating point, the numerical errors can still accumulate, even if you keep increasing the precision, because at each step you are only using some finite precision, and hence that may explain what you are observing. However, in most cases you should observe that increasing precision will delay the failure longer. – user21820 Aug 14 '18 at 15:08