Background
You and I are going to play a game. To start, I play a measurable function $f_1$ and you respond with a real number $y_1$ (possibly infinite). We repeat this some fixed number $N$ of times, obtaining a collection $\{(f_i,y_i)\}_{i=1}^N$. Define
$$\Gamma=\left\{\mu : \mu \text{ is a probability measure and } \int f_i(x)\,d\mu(x)=y_i \ \forall i\right\}.$$
(We'll assume that $\Gamma$ is nonempty, meaning you cannot give inconsistent answers. For example, if I play $f_1(x)=x^2$, then you cannot play $y_1=-1$.)
Next, I play a measure $P$ which must satisfy $P\in \Gamma$. You then play a measure $Q\in \Gamma$. The game is zero-sum: your reward is the cross entropy $H(Q|P)$, while mine is $-H(Q|P)$. In other words, I am rewarded to the extent that I can anticipate your distribution, while you are rewarded to the extent that you can foil my guess.
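To make the payoff concrete, here is a minimal numeric sketch on a finite support (a toy stand-in for the general measure-theoretic setting, not taken from any paper). It uses the decomposition $H(Q|P) = H(Q) + D_{KL}(Q\|P)$, which shows that for a fixed $Q$ my loss is minimized exactly when $P = Q$:

```python
import numpy as np

# Toy instance: support {0,1,2,3} and one constraint function f(x) = x
# with target y = 1.5, so Gamma = {mu : E_mu[X] = 1.5}.
xs = np.array([0.0, 1.0, 2.0, 3.0])

# Two members of Gamma (both have mean 1.5).
P = np.array([0.25, 0.25, 0.25, 0.25])   # my guess
Q = np.array([0.10, 0.45, 0.30, 0.15])   # Nature's choice
assert np.isclose(P @ xs, 1.5) and np.isclose(Q @ xs, 1.5)

def cross_entropy(q, p):
    """H(Q|P) = -sum_x q(x) log p(x), in nats."""
    return -np.sum(q * np.log(p))

def entropy(q):
    """Shannon entropy H(Q) in nats."""
    return -np.sum(q * np.log(q))

# Cross entropy decomposes as H(Q) + KL(Q||P): my loss is Nature's
# entropy plus a penalty for how far my guess P is from Q.
kl = np.sum(Q * np.log(Q / P))
print(cross_entropy(Q, P))        # H(Q|P)
print(entropy(Q) + kl)            # same value via the decomposition
```

Since $D_{KL}(Q\|P)\ge 0$, the cross entropy is always at least $H(Q)$, which is why Nature's side of the game is tied to maximizing entropy.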
What should my strategy be for playing this game?
Motivation
This game can be considered as a general model of a scientist designing experiments (the functions $f_i$) and using the results of those experiments to develop a theory (the measure $P$). The other player is a stand-in for "Nature", which we assume acts against us at every turn.
Further, the game is a generalization of the setting considered by Grunwald and Dawid in this paper. They consider what amounts to a special case of this game in which the set $\Gamma$ is assumed to be specified ahead of time, so each player only chooses a distribution. Very interestingly, they show that in this case the optimal strategy is to play the distribution $\operatorname{arg\,max}_{P\in\Gamma} H(P)$, where $H$ denotes Shannon entropy. This amounts to a passive model of scientific inference: the scientist has access only to observational data in the form of constraints $\int f_i(x)\,d\mu(x)=y_i$, but has no control over what the actual functions $f_i$ are. Thus I am interested in what happens when their setting is extended to an "active" one, in which the scientist can choose which statistics $f_i$ to measure in the first place.
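The maximum-entropy strategy can be computed explicitly in a small discrete case. For moment constraints, the maximizer of $H$ over $\Gamma$ has the Gibbs/exponential-family form $p(x)\propto e^{\lambda f(x)}$, with the multiplier $\lambda$ chosen to satisfy the constraint. A sketch under toy assumptions (finite support $\{0,1,2,3\}$, single constraint $\mathbb{E}[X]=1$; these numbers are illustrative, not from the paper):

```python
import numpy as np

# Toy Gamma: distributions on {0,1,2,3} with E[X] = 1.
xs = np.arange(4, dtype=float)
y = 1.0

def mean_under(lam):
    """Mean of the Gibbs distribution p(x) proportional to exp(lam * x)."""
    w = np.exp(lam * xs)
    return (w / w.sum()) @ xs

# mean_under is increasing in lam, so solve mean_under(lam) = y
# by bisection on a bracketing interval.
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_under(mid) < y:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

w = np.exp(lam * xs)
p_maxent = w / w.sum()          # max-entropy member of Gamma

def H(p):
    """Shannon entropy in nats."""
    return -np.sum(p * np.log(p))

# Sanity check: any other member of Gamma has lower entropy.
q = np.array([0.35, 0.35, 0.25, 0.05])   # also has mean 1.0
print(p_maxent, p_maxent @ xs)            # constraint satisfied
print(H(p_maxent), H(q))                  # first entropy is larger
```

In the "active" extension, the interesting new question is which $f_i$ to play so that the resulting max-entropy distribution is as informative as possible.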