It indeed appears to be the case.
In the xor case, that is:
$$y = \bigoplus\limits_{i=0}^{m-1} F(i, x_i)$$
Gaussian elimination (which is an $O(n^3)$ algorithm) can be used to find collisions and preimages (and to check whether any exist). This works no matter what the outputs of the $F$ functions are (increasing the size of the bitstrings only increases $n$). It does assume that $m$ is moderately large (but then, if we're considering a hash of a potentially large input, that is the case).
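To make this concrete, here is a minimal sketch of the preimage attack (in Python), under the assumption that each $x_i$ is a single bit and each $F(i, x_i)$ is an $n$-bit value; the random table standing in for $F$, and all the names and parameters below, are purely illustrative, since the attack works regardless of what $F$ actually outputs:

```python
# A minimal sketch of the Gaussian-elimination preimage attack in the xor case,
# assuming each x_i is a single bit and F(i, x_i) is an n-bit value.
# F here is just a random stand-in table; the attack does not care what F outputs.
import random

n = 32   # output size in bits
m = 64   # number of input positions

random.seed(1)
F = [[random.getrandbits(n) for _ in range(2)] for _ in range(m)]

def hash_xor(x_bits):
    """y = XOR_i F(i, x_i) for a list of m input bits."""
    y = 0
    for i, b in enumerate(x_bits):
        y ^= F[i][b]
    return y

def find_preimage_xor(target):
    """Find x with hash_xor(x) == target by solving a linear system over GF(2).

    Since F(i, x_i) = F(i,0) xor x_i*(F(i,0) xor F(i,1)), the condition becomes
    A.x = t over GF(2), with columns A_i = F(i,0) xor F(i,1) and
    t = target xor (xor of all F(i,0)).
    """
    cols = [F[i][0] ^ F[i][1] for i in range(m)]
    t = target
    for i in range(m):
        t ^= F[i][0]
    # One row per output bit: m coefficient bits, plus bit m for the right-hand side.
    rows = []
    for bit in range(n):
        r = sum(((cols[i] >> bit) & 1) << i for i in range(m))
        r |= ((t >> bit) & 1) << m
        rows.append(r)
    # Gaussian elimination (to reduced row echelon form) over GF(2).
    pivot_cols = {}   # row index -> pivot column
    rank = 0
    for col in range(m):
        pivot = next((j for j in range(rank, n) if (rows[j] >> col) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for j in range(n):
            if j != rank and (rows[j] >> col) & 1:
                rows[j] ^= rows[rank]
        pivot_cols[rank] = col
        rank += 1
    # An inconsistent row (all-zero coefficients, right-hand side 1) means no preimage exists.
    if any((rows[j] >> m) & 1 for j in range(rank, n)):
        return None
    # Read off one solution, setting all free variables to 0.
    x = [0] * m
    for row, col in pivot_cols.items():
        x[col] = (rows[row] >> m) & 1
    return x

target = 0xDEADBEEF
x = find_preimage_xor(target)
if x is not None:
    assert hash_xor(x) == target
```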
On the other hand, in the addition case:
$$y = \sum\limits_{i=0}^{m-1} F(i, x_i)$$
the corresponding problem (finding collisions or preimages) turns out to be NP-hard (!). That is, if you could solve it quickly in a generic way (where "quickly" means "in polynomial time") for any instance of the problem, you could solve any problem within NP quickly. Hence, we do not believe that there is a fast (polynomial-time) algorithm that works on all inputs, and we expect any generic algorithm to take exponential time in the worst case.
That said, the proof of NP-hardness involves very specific $F$ functions (and a very large modulus), and so might not reflect the actual hardness of the cases we are interested in. On the other hand, at first glance, it doesn't appear likely that random inputs (such as a real-world $F$ implementation would approximate) would lend themselves to any faster algorithm than the specific inputs used in the proof. Hence, our guess is that we can't do better than exponential time in the cases we're interested in.
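To illustrate why the linear-algebra trick stops working, here is the same single-bit-per-position decomposition written out for the addition case (a sketch reusing $F$, $n$ and $m$ from the snippet above, with illustrative names): the constant part still splits off, but what remains is a modular subset-sum instance rather than a linear system over GF(2).

```python
# A minimal sketch (reusing F, n, m from the snippet above, and still assuming
# each x_i is a single bit) of why the addition case resists the same approach:
# the constant part splits off exactly as before, but the remaining problem is a
# subset-sum instance modulo 2^n, not a linear system over GF(2).
MASK = (1 << n) - 1

def hash_add(x_bits):
    """y = sum_i F(i, x_i) mod 2^n."""
    return sum(F[i][b] for i, b in enumerate(x_bits)) & MASK

def as_subset_sum(target):
    """Rewrite 'find x with hash_add(x) == target' as a subset-sum instance.

    hash_add(x) = sum_i F(i,0) + sum over {i : x_i = 1} of (F(i,1) - F(i,0)),
    all mod 2^n, so a preimage is exactly a subset of the deltas summing to
    t = target - sum_i F(i,0) (mod 2^n). Gaussian elimination no longer helps;
    generic algorithms for this take time exponential in m.
    """
    deltas = [(F[i][1] - F[i][0]) & MASK for i in range(m)]
    t = (target - sum(F[i][0] for i in range(m))) & MASK
    return deltas, t
```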
Now, Wagner's algorithm is still exponential, but it reduces the exponent significantly. This is a great help in practice (and must certainly be taken into account when sizing the problem to reach a given hardness); however, it is still far slower than the polynomial time we got in the xor case.
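This is not Wagner's full $k$-tree algorithm, but the simplest two-list version of the same idea (an ordinary meet-in-the-middle split) already shows where the exponent reduction comes from: roughly $2^{m/2}$ work and memory instead of the $2^m$ of brute force. A self-contained sketch with deliberately small, purely illustrative parameters:

```python
# A self-contained sketch of the simplest two-list version of this idea
# (a plain meet-in-the-middle split, not Wagner's full k-tree algorithm),
# with deliberately small parameters so it runs quickly: roughly 2^(m/2)
# work and memory instead of the 2^m of brute force, but still exponential.
import itertools
import random

n = 16   # output size in bits (small, for the demonstration)
m = 24   # number of single-bit input positions
MASK = (1 << n) - 1

random.seed(2)
F = [[random.getrandbits(n) for _ in range(2)] for _ in range(m)]

def hash_add(x_bits):
    """y = sum_i F(i, x_i) mod 2^n."""
    return sum(F[i][b] for i, b in enumerate(x_bits)) & MASK

def preimage_mitm(target):
    """Split the positions into two halves and match partial sums mod 2^n."""
    half = m // 2
    # Left half: tabulate every partial sum.
    left = {}
    for bits in itertools.product((0, 1), repeat=half):
        s = sum(F[i][b] for i, b in enumerate(bits)) & MASK
        left.setdefault(s, bits)
    # Right half: for each partial sum, look up the complementary left sum.
    for bits in itertools.product((0, 1), repeat=m - half):
        s = sum(F[half + i][b] for i, b in enumerate(bits)) & MASK
        need = (target - s) & MASK
        if need in left:
            return list(left[need]) + list(bits)
    return None

target = 0x1234
x = preimage_mitm(target)
if x is not None:
    assert hash_add(x) == target
```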