I was wondering about tic-tac-toe in infinitely many dimensions: instead of a 2D grid, the game is played on an infinite-dimensional hypercube. I've come to the conclusion that the term 'optimal strategy' may not even make sense there (?).
Why? Imagine I score a win using an optimal strategy (assuming one exists). Then, using a projection operator, I can project part of the game onto a 2-dimensional grid and work backwards, then take a product with another 2D grid, and so on...
Part of what I mean by questioning 'optimal strategy' is that one cannot even make statements like "corners are the optimal positions". Consider a counterexample in the simplest 2D case (ordinary tic-tac-toe, with squares labelled in chess notation):
X: B1 O: A3 X: C3
Now O is forced to play A1 (any other reply loses)...
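The forced-move claim is small enough to check by brute force. Here is a minimal minimax sketch in Python (the cell naming and the solver are my own scaffolding, nothing standard) that enumerates O's replies in the position above and keeps only those that avoid losing under best play:

```python
from functools import lru_cache

# Brute-force solver for ordinary 3x3 tic-tac-toe, to check the claim
# that after X: B1, O: A3, X: C3, the move A1 is O's only non-losing reply.
# Cells use the post's chess notation: columns A-C, rows 1-3.
CELLS = [c + r for r in "123" for c in "ABC"]  # A1, B1, C1, A2, ..., C3
LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows 1, 2, 3
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns A, B, C
    (0, 4, 8), (2, 4, 6),             # both diagonals
]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, to_move):
    """Game value under best play, from X's viewpoint: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if to_move == "X" else "X"
    vals = [value(board[:i] + to_move + board[i + 1:], nxt)
            for i, cell in enumerate(board) if cell == "."]
    return max(vals) if to_move == "X" else min(vals)

# Set up the position from the post; it is O's turn.
board = ["."] * 9
for player, cell in [("X", "B1"), ("O", "A3"), ("X", "C3")]:
    board[CELLS.index(cell)] = player
board = "".join(board)

# A reply is safe for O iff the resulting value is <= 0 (draw or O win).
safe = [CELLS[i] for i, cell in enumerate(board) if cell == "."
        and value(board[:i] + "O" + board[i + 1:], "X") <= 0]
print(safe)  # ['A1']
```

Running this confirms the claim: every O reply other than A1 lets X set up a double threat (e.g. after O: B2, X plays C1, threatening both A1 and C2).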
So one would have to amend the heuristic to something like "block forced moves first, then corners are strategically good"...
My question is:
How does one prove or disprove that an optimal strategy exists? And how does one define the complexity of such a strategy?
(Assuming that's a sensible question)