Reinforcement learning involves two important notions, and I am interested in what advanced mathematics can be used to investigate them:
- State space - the set of states. Apparently, deep structure should exist in this space. E.g., in robotics applications, some states lie within the allowed precision bounds and can be treated as equal; some states should not be reachable; there are boundaries in the state space that demarcate subsets of states, and only those aggregated states (superstates) can be considered distinct. There are different such demarcations at different levels of granularity, and at each level concepts can be assigned to the states, or sets of states, of that level. What mathematics can model such structures within a state space, and is there work in this direction, perhaps even with applications in reinforcement learning? I guess that Markov decision processes and ergodic theory consider such questions, but they do not use the notions of superstates, aggregations, hierarchies of states, or assignments of concepts;
- Policies - which effectively lead to sequences of actions. A similar structure emerges in the sequences of actions: there can be subsequences and supersequences of actions, so some modularity and hierarchy emerges, and concepts can be assigned to these sequences at the appropriate level of granularity. Again, maybe there is some kind of discrete homotopy theory that considers this (pure guess).
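To make the kind of state-space structure I mean concrete, here is a minimal sketch (my own illustrative code, not any standard library's API): continuous robot states are aggregated into "superstates" by a precision-based partition, and coarser precision values give coarser levels of granularity.

```python
# Sketch: aggregating continuous states into superstates at a chosen
# precision. Two states within the same precision cell are treated as equal;
# varying `precision` gives the different granularity levels I describe.

def superstate(state, precision):
    """Map a continuous state (a tuple of floats) to its aggregated cell."""
    return tuple(round(x / precision) for x in state)

def partition(states, precision):
    """Group states into superstates at the given granularity level."""
    cells = {}
    for s in states:
        cells.setdefault(superstate(s, precision), []).append(s)
    return cells

states = [(0.01, 0.02), (0.02, 0.01), (0.9, 0.9)]
fine = partition(states, precision=0.5)    # two distinct superstates
coarse = partition(states, precision=2.0)  # all states aggregated into one
```

The question, then, is which mathematical theory describes such nested partitions and the concepts attached to their cells in a principled way, rather than ad hoc as above.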
So: what kind of mathematics can be applied to introduce hierarchy and modularity into state spaces and into sequences of actions?
Usually, mathematical structures can be "mapped to" / "used for modelling" concepts that reflect the real world. So I am seeking exactly such mathematics for state spaces and action sequences. Topology, maybe? But how?
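For the action-sequence side, here is a minimal sketch of what I mean by modularity and hierarchy (the macro names are hypothetical, chosen only for illustration): named macro-actions play the role of "concepts" assigned to subsequences, and expanding them recovers the primitive action sequence, loosely in the spirit of macro-actions in hierarchical reinforcement learning.

```python
# Sketch: a hierarchy of action sequences. Each macro-action ("concept")
# names a sequence of lower-level actions; flattening a macro expands it
# recursively into the underlying primitive actions.

MACROS = {
    "step_forward": ["lift", "swing", "plant"],
    "walk": ["step_forward", "step_forward"],
}

def flatten(action):
    """Recursively expand a macro-action into its primitive action sequence."""
    if action not in MACROS:
        return [action]          # a primitive action expands to itself
    out = []
    for sub in MACROS[action]:
        out.extend(flatten(sub))
    return out

# flatten("walk") yields the six-step primitive sequence of two forward steps.
```

I am asking what mathematics treats such nestings of sequences, and the assignment of concepts to them, as first-class objects rather than as bookkeeping like this.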