The key property of the KL divergence is that it satisfies a "chain rule", which leads to tensorization, i.e. a bound in one dimension extending to a bound in multiple dimensions.
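Concretely (writing out the standard chain rule just to make the tensorization explicit): for joint distributions $P_{X,Y}$ and $Q_{X,Y}$,
$$D(P_{X,Y}\,\|\,Q_{X,Y}) \;=\; D(P_X\,\|\,Q_X) \;+\; \mathbb{E}_{x\sim P_X}\big[D(P_{Y\mid X=x}\,\|\,Q_{Y\mid X=x})\big],$$
and in particular for product measures the conditional term splits off cleanly, giving
$$D\Big(\textstyle\bigotimes_{i=1}^n P_i \,\Big\|\, \bigotimes_{i=1}^n Q_i\Big) \;=\; \sum_{i=1}^n D(P_i\,\|\,Q_i).$$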
The TV distance, while convenient because it is a metric (and has a nice formula on simple spaces), does not tensorize the way the divergences do. This is because, by Monge-Kantorovich duality (which is what Blackbird used in his answer above), the TV distance is the minimum, over all couplings, of the probability that the coupled random variables differ. Coupling the coordinates independently, we get the trivial bound derived by Blackbird. However, couplings can carry much more "structure", and this structure can become very complicated as the dimension increases (essentially, you have more coordinates to couple and play with), so there is more going on "between the components" than in the components themselves. This is why one should expect only trivial bounds for the TV distance in larger dimensions.
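To spell out the coupling characterization I mean (this is the standard one, i.e. MK duality for the cost $\mathbf{1}\{x\neq y\}$):
$$\mathrm{TV}(P,Q) \;=\; \inf\big\{\Pr[X\neq Y] \;:\; X\sim P,\ Y\sim Q\big\}.$$
Coupling each coordinate independently and taking a union bound then only yields
$$\mathrm{TV}\Big(\textstyle\bigotimes_{i=1}^n P_i,\ \bigotimes_{i=1}^n Q_i\Big) \;\le\; \sum_{i=1}^n \mathrm{TV}(P_i,Q_i),$$
which is (as I understand it) the trivial, linear-in-$n$ bound referred to above.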
However, via MK duality, Pinsker's inequality can be recast as a "transportation cost inequality" (this point of view is behind the Bobkov-Götze theorem, which generalizes Pinsker), and transportation cost inequalities do tensorize. So in that setting, coordinate-wise bounds do translate into a bound on the TV distance between the joint distributions. Marton's theorem is the one I know here: you can find it in Van Handel's notes, Chapter 4, Section 4: https://web.math.princeton.edu/~rvan/APC550.pdf
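For reference, here is my paraphrase of the statements (check the notes for the precise form). Pinsker's inequality $\mathrm{TV}(P,Q)\le\sqrt{\tfrac12 D(P\,\|\,Q)}$, read through the coupling characterization above, is exactly a transportation cost inequality for the cost $\mathbf{1}\{x\neq y\}$. Marton's tensorization then says: if $Q = Q_1\otimes\cdots\otimes Q_n$ is a product measure and $d_H(x,y)=\sum_{i=1}^n \mathbf{1}\{x_i\neq y_i\}$ is the Hamming distance on the product space, then for every $P$,
$$\inf_{X\sim P,\ Y\sim Q} \mathbb{E}\big[d_H(X,Y)\big] \;\le\; \sqrt{\tfrac{n}{2}\, D(P\,\|\,Q)},$$
so the transportation cost on the whole product space is controlled by the KL divergence of the joint, which in turn tensorizes by the chain rule.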
Yes, in general it is very important to choose which distance you work with when you are dealing with inequalities. The fact that TV doesn't tensorize serves to emphasize that.