Captain Obvious
Others have given a good general answer, so I would just like to address this point in particular:
"While it’s true we can’t begin to address every possible game, it’s also true that we certainly would not have to. Many possible sets of moves and game starts could be ruled out as obvious dead ends, certain losses, draws, etc. Aren’t we overestimating how much computing power is needed by considering all possible sets of moves?"
If you follow chess problems, you will quickly notice that many of them hinge on a brilliant sacrifice. And what makes a sacrifice brilliant is that it is not obvious that giving up a piece for position is, in fact, advantageous. Humans tend to think in terms of material, because that is the easiest metric to track. Part of what makes AlphaZero so powerful is its ability to judge positional value far more accurately than humans can. So there are many board positions where you might say, "Oh, nobody would move the bishop there. That is an obvious blunder!" while AlphaZero says, "Actually, that move will lead to the loss of Black's queen in 7 moves with perfect play."
The very beginning and end of a game have many moves that are likely "obvious blunders" (for the beginning, this would be something like moving a piece back and forth to deliberately lose a tempo). But in the middle of the game, hardly any move can be called an "obvious dead end", given how many viable game trees can follow from it. And the middle is exactly where the greatest branching factors occur, and where you need to trim the branching the most. That is to say, the places where you can trim the game tree most confidently are also the places where it matters least. It's a bit like saying: "Can't we chop down this entire forest by cutting the saplings at the edges?"
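A back-of-the-envelope sketch makes this concrete. The numbers below (an average branching factor of about 35 and a game length of about 80 plies) are rough, commonly quoted estimates, and the 90% pruning rate at the edges is purely my own assumption for illustration:

```python
import math

branching = 35   # assumed average branching factor (rough estimate)
plies = 80       # assumed typical game length in plies (rough estimate)

# Naive game-tree size: 35^80, expressed as a power of ten.
full = plies * math.log10(branching)

# Suppose heuristics prune 90% of moves in the first 10 and last 10
# plies, but nothing in the 60 middle plies where branching peaks.
pruned = 20 * math.log10(branching * 0.1) + 60 * math.log10(branching)

print(f"full tree   ~ 10^{full:.1f}")
print(f"pruned tree ~ 10^{pruned:.1f}")
# The reduction is enormous in absolute terms, yet the pruned tree
# remains astronomically far beyond any conceivable computation.
```

Even after cutting 90% of the branches at both ends of the game, the exponent barely moves, because the middle-game branching dominates.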
Graph Drawing
An equally hard problem that is easier to visualize is the Travelling Salesman Problem: pick a collection of points, and draw a Hamiltonian cycle through them of minimum length. If you play any video game with a gathering/collecting mechanic (chopping down trees or collecting rocks, for instance), you are actually solving this problem in real time (or rather, you are drawing such cycles, but almost certainly not the minimal one). Here, too, it is tempting to say: "This problem isn't very hard. I can exclude a bunch of cycles, like ones where I walk back and forth across the map to collect one item when I could simply collect locally instead." And here, too, you will quickly find that eliminating these "no brainer" cycles doesn't get you very far. They encapsulate a locality heuristic (try to connect nearby points) which will get you decently far, but will by no means guarantee an optimal cycle.
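To make the locality heuristic concrete, here is a small sketch (the function names and the tiny random instance are my own) comparing a greedy nearest-neighbour tour against the true optimum found by brute force:

```python
import itertools
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points, order):
    # Total length of the closed cycle visiting points in this order.
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points):
    # Locality heuristic: always walk to the closest unvisited point.
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def brute_force(points):
    # Exact optimum by checking all (n-1)! orders -- only viable for small n.
    best = min(itertools.permutations(range(1, len(points))),
               key=lambda p: tour_length(points, (0,) + p))
    return [0] + list(best)

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(9)]
greedy = tour_length(pts, nearest_neighbour(pts))
optimal = tour_length(pts, brute_force(pts))
print(f"greedy tour: {greedy:.3f}, optimal tour: {optimal:.3f}")
```

On most random instances the greedy tour is decent but measurably longer than the optimum, which is the point: the heuristic narrows the search usefully without ever certifying minimality.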
That being said, for many NP-complete problems, the majority of problem instances are relatively easy to approximate well (and TSP is one of them, thanks to things like locality heuristics). You might assume chess is not one of these problems, because all of its instances seem hard. But that is not true. To really explore the full space of chess games, we would need to move randomly. And indeed, any grandmaster playing against an opponent who chooses random moves should be able to absolutely dominate them without God's algorithm. In this sense, chess is also "easy".
The reason chess seems hard is that the players themselves choose instances which are "pathological". So the moves which can be eliminated with simple heuristics are, by and large, the ones which occur most often in the "random forest" portion of the game space; they occur with vanishing frequency in the "competitive" part of the space. Now, is there a comparable situation with graphs? Indeed, there is.
Easy graphs are ones with lots of clusters joined by a few "bridge" points. You can mostly solve them recursively by finding "local cycles" and connecting them at higher and higher levels. Hard graphs are ones with very little structure: the points are all at distances of the same order, with only smaller perturbations between them. There are numerous local cycles which differ by only a small length, so picking the best one to connect at the higher level becomes quite intractable. You will have many good choices of entry and exit points, and indeed, there is no obvious structure to the entire graph that lets you partition it into locally manageable chunks in the first place. It will not take you long to convince yourself that such a graph is not solvable by hand, even with a small number of points. Finding a cycle is easy. Finding a minimal cycle is mind-numbingly hard.
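As a rough numeric sketch of this contrast (the instance sizes and the 10% slack threshold are arbitrary choices of mine), we can enumerate every tour of a small clustered instance and a small unstructured one, and count how many tours come within 10% of the optimum -- a crude proxy for how many "lookalike" near-minimal cycles a search must distinguish:

```python
import itertools
import math
import random

def tour_length(pts, order):
    # Length of the closed cycle through pts in the given order.
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def near_optimal_count(pts, slack=1.10):
    # Enumerate all (n-1)! tours; count those within `slack` of the optimum.
    lengths = [tour_length(pts, (0,) + p)
               for p in itertools.permutations(range(1, len(pts)))]
    best = min(lengths)
    return best, sum(1 for L in lengths if L <= slack * best)

random.seed(0)
# Structured instance: four tight clusters at the corners of a square.
corners = [(0, 0), (10, 0), (0, 10), (10, 10)]
clustered = [(cx + random.random(), cy + random.random())
             for cx, cy in corners for _ in range(2)]
# Unstructured instance: eight points scattered uniformly.
uniform = [(10 * random.random(), 10 * random.random()) for _ in range(8)]

results = {}
for name, pts in [("clustered", clustered), ("uniform", uniform)]:
    best, count = near_optimal_count(pts)
    results[name] = (best, count)
    print(f"{name}: optimum {best:.2f}, near-optimal tours {count}")
```

The clustered instance tends to have few near-optimal tours (the cluster structure forces the good tours to look alike in a controllable way), while the unstructured one typically offers many competing lookalikes.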
Structure
The conclusion to take away is that simple rules and heuristics are very good at exploiting structure within a problem. If the structure of the state space is simple, then a rule that removes a large chunk of the state space is both possible and effective. But when the state space becomes convoluted, the symmetry disappears, and simple rules can no longer give much of an advantage. The random portion of the chess space has lots of structure that can be easily exploited. But the competitive subset is far more fractal-like: the higher-level structure evaporates quickly, and details dominate. Here, simple heuristics provide close to zero value, because violating a heuristic is often the key to making a strong move.
Another good problem to consider is Sudoku. Easy problems have lots of clues and can be solved quickly using the simplest strategies. The hardest problems resist the simplest strategies because they lack the structure which makes it easy to advance. They reduce to a kind of linear search through the possible solutions (players call this "brute force search" and it is very taboo...it's considered an admission of defeat). This is the common theme of NP-complete problems: the hardest instances have numerous lookalike paths, only one of which leads to the optimal solution. There are no easy shortcuts to examining them one by one. At least, nobody has found these shortcuts. You could say that finding such a shortcut, at least a universal one, is equivalent to proving that P = NP.
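For illustration, the dreaded "brute force search" is just depth-first backtracking: try each candidate digit in turn, and back out on contradiction. A minimal sketch (the grid representation and the standard example puzzle are my own choices), where the grid is a 9x9 list of lists with 0 marking an empty cell:

```python
def valid(grid, r, c, d):
    # Digit d may go at (r, c) if absent from its row, column, and 3x3 box.
    if any(grid[r][j] == d for j in range(9)):
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))

def solve(grid):
    # Depth-first backtracking: fill the first empty cell with each
    # candidate in turn, recurse, and undo on failure.
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0   # backtrack: lookalike path failed
                return False             # dead end, unwind
    return True                          # no empty cells left: solved

puzzle = [
    [5, 3, 0, 0, 7, 0, 0, 0, 0],
    [6, 0, 0, 1, 9, 5, 0, 0, 0],
    [0, 9, 8, 0, 0, 0, 0, 6, 0],
    [8, 0, 0, 0, 6, 0, 0, 0, 3],
    [4, 0, 0, 8, 0, 3, 0, 0, 1],
    [7, 0, 0, 0, 2, 0, 0, 0, 6],
    [0, 6, 0, 0, 0, 0, 2, 8, 0],
    [0, 0, 0, 4, 1, 9, 0, 0, 5],
    [0, 0, 0, 0, 8, 0, 0, 7, 9],
]
solved = solve(puzzle)
print("solved:", solved)
```

On an easy puzzle like this one the search barely backtracks; on the hardest puzzles, the same procedure is forced to wade through the lookalike branches one by one, which is exactly what makes it feel like an admission of defeat.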