Say I have the following optimization problem:
$$ \begin{align} \textrm{minimize } & \sum_{p\in P}{c_p \lambda_p} \\ \textrm{s.t. } & \lambda_{p_1} + \lambda_{p_2} \leq 1, \forall p_1,p_2 \in P \\ & \lambda_p \in \{0,1\}, \forall p \in P \end{align} $$
where $P$ is very large. The problem is silly and could be reformulated to avoid the quadratic number of constraints, but it is just a simplification of my actual problem, so please ignore that; for example, assume the constraint holds not for all pairs $p_1, p_2 \in P$ but only for a subset of them.
The reduced cost expression would be something like:
$$ \begin{align} \textrm{reduced\_cost}(p)&=c_p-\sum_{p_2 \in P}{\pi_{p,p_2}} \end{align} $$
where $\pi_{p_1,p_2}$ is the dual variable corresponding to the constraint defined for each $p_1, p_2 \in P$.
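To make the notation concrete, here is a tiny toy instance (the costs and the use of Python/SciPy are my illustration, not my actual problem): I solve the LP relaxation of the model above, read the duals $\pi_{p_1,p_2}$ from the solver, and evaluate the reduced-cost expression.

```python
# Toy instance, only to make the notation concrete (not my actual problem):
# P = {0, 1, 2}, costs c_p, one constraint lambda_{p1} + lambda_{p2} <= 1
# per pair. Solve the LP relaxation, read the duals pi_{p1,p2}, and
# evaluate reduced_cost(p) = c_p - sum_{p2} pi_{p,p2}.
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0, -1.0])              # costs c_p
pairs = list(combinations(range(len(c)), 2))  # one row per pair (p1, p2)
A = np.zeros((len(pairs), len(c)))
for row, (i, j) in enumerate(pairs):
    A[row, i] = A[row, j] = 1.0

res = linprog(c, A_ub=A, b_ub=np.ones(len(pairs)),
              bounds=[(0.0, 1.0)] * len(c), method="highs")
pi = res.ineqlin.marginals                    # duals of the pairwise rows

def reduced_cost(p):
    # c_p minus the duals of every pairwise constraint that involves p
    return c[p] - sum(pi[row] for row, pair in enumerate(pairs) if p in pair)
```

On this instance all three reduced costs come out (approximately) zero, which is consistent with every column already being present in this trivial "restricted" master.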
Assuming the above is correct, then the pricing problem involves coming up with a new $p \in P$ such that the above expression is negative.
My question is: how can we know what $\pi_{p,p_2}$ is for a given $p$ if, at the time we solve the pricing problem, the constraint $\lambda_{p} + \lambda_{p_2} \leq 1$ has not yet been added to the restricted master problem? Typically one obtains $\pi_{p,p_2}$ from the (dual) simplex solution of the restricted master problem, but I suppose in cases like this it is more complex, isn't it?
To answer this I was reasoning along these lines: $\pi_{p,p_2}$ measures by how much the restricted master problem's current objective changes if the right-hand side of the corresponding constraint increases from 1 to 2. So this value would be
$$ \pi_{p,p_2}=c_p \left[2 - \lambda_{p_2} - (1 - \lambda_{p_2})\right]=c_p $$
On the other hand, I am not sure it is that simple. For example, if $\lambda_{p_2}=0$ in the restricted master problem, then increasing the right-hand side from 1 to 2 would not improve the objective, since $\lambda_{p}$ can be at most 1. Following this reasoning, we would instead have
$$ \pi_{p,p_2}=c_p \left[\min(1, 2-\lambda_{p_2}) - (1 - \lambda_{p_2})\right] = c_p \lambda_{p_2} $$
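One way I tried to sanity-check this interpretation of $\pi_{p,p_2}$ numerically (on a small toy instance of my own, which is an illustration, not my real problem): bump the right-hand side of one pairwise constraint from 1 to 2, re-solve the LP relaxation, and compare the objective change against the dual the solver reports for that row.

```python
# Numeric probe of the dual-as-RHS-sensitivity interpretation: increase the
# RHS of one pairwise constraint from 1 to 2, re-solve the LP relaxation,
# and compare the objective change with the dual reported for that row.
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0, -1.0])              # toy costs, for illustration
pairs = list(combinations(range(len(c)), 2))
A = np.zeros((len(pairs), len(c)))
for row, (i, j) in enumerate(pairs):
    A[row, i] = A[row, j] = 1.0
bounds = [(0.0, 1.0)] * len(c)

base = linprog(c, A_ub=A, b_ub=np.ones(len(pairs)),
               bounds=bounds, method="highs")

diffs = []
for row in range(len(pairs)):
    b2 = np.ones(len(pairs))
    b2[row] = 2.0                             # relax a single constraint
    pert = linprog(c, A_ub=A, b_ub=b2, bounds=bounds, method="highs")
    diffs.append(pert.fun - base.fun)
    print(pairs[row], diffs[-1], base.ineqlin.marginals[row])
```

On this instance the objective change matches the dual exactly, but in general the dual only gives the rate of change while the current basis stays optimal, so a full unit step of the right-hand side from 1 to 2 need not match it.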