3

I am currently using Column Generation (CG) combined with Dantzig-Wolfe decomposition to solve a MILP. I have several questions to ask.

  1. At the beginning, the objective value of the RMP does not improve even though columns with negative reduced cost (my problem is a minimisation problem) are found and added to the RMP. It only starts to improve after many iterations. I understand this is the well-known degeneracy issue in CG. Following suggestions in the related literature, I tried adding some noise to the RHS of the RMP constraints: for example, an "= 1" constraint is relaxed to "≥ 0.9999 & ≤ 1.0001" (a minimal sketch of this perturbation is shown after this list). With this relaxation the objective value does improve from iteration to iteration; however, when I check the total number of iterations required to optimise the RMP, it hardly changes compared to the unperturbed case.
  2. As the iterations go on, more and more columns are added to the RMP and solving it becomes slower. However, if I add a mechanism that limits the number of columns by removing those whose coefficients in the current RMP solution are smallest, the algorithm often gets stuck cycling. What can I do about this?
  3. Most of the time the optimisation of the RMP ends with a fractional solution, and the best integer solution does not change throughout the CG process (that is, the initial integer solution generated by my heuristic is never improved by the newly found columns). Is there anything I can do, other than branch-and-price and a binary-tree search? I would like to obtain an improved integer solution quickly.
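
For reference, here is a minimal sketch of the perturbation from question 1, written in Python with PuLP on made-up data (the column costs and coefficients below are placeholders, not my real instance), just to show how I replace each "= 1" row by a small interval:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

# Toy RMP: three columns covering two "= 1" rows of a set-partitioning master.
column_costs  = [4.0, 3.0, 5.0]
column_coeffs = [[1, 0], [0, 1], [1, 1]]   # column_coeffs[j][i] = a_ij
eps = 1e-4                                  # width of the perturbation

rmp = LpProblem("RMP", LpMinimize)
lam = [LpVariable(f"lambda_{j}", lowBound=0) for j in range(3)]
rmp += lpSum(column_costs[j] * lam[j] for j in range(3))

for i in range(2):
    expr = lpSum(column_coeffs[j][i] * lam[j] for j in range(3))
    # original row:   expr == 1
    # perturbed rows: 1 - eps <= expr <= 1 + eps
    rmp += expr >= 1 - eps, f"row_{i}_lo"
    rmp += expr <= 1 + eps, f"row_{i}_hi"

rmp.solve(PULP_CBC_CMD(msg=0))

# The dual of the original equality is recovered as the sum of the duals of
# the two inequalities; this is what I pass to the pricing problem.
duals = {i: rmp.constraints[f"row_{i}_lo"].pi + rmp.constraints[f"row_{i}_hi"].pi
         for i in range(2)}
print(duals)
```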
  • For part 3, it can be helpful to solve the RMP with integrality enforced every $f$ DW iterations, where $f$ is some specified frequency. Doing this only once at the end of the root node is called the price-and-branch heuristic. – RobPratt Mar 23 '23 at 23:59

2 Answers

2

If you want to keep your method exact, your options are limited. However, if you don't mind accepting a suboptimal solution, there are a few tricks you can try.

If you want an exact method:

  1. I don't think there is an easy way to fix this.
  • Sometimes, adding valid inequalities or symmetry-breaking constraints helps, but this is very problem-dependent.
  • If possible, you could build an initial solution and add the corresponding columns to warm-start the column generation algorithm. This could help stabilize the dual variables.
  • Remember that in column generation, modeling can have a huge impact and is not necessarily straightforward.
  2. Yes, that is something that can happen. To limit the size of the RMP, you can filter the columns that are added to it. For instance, you could only add columns that are pairwise column-disjoint (two columns are column-disjoint if they do not share any tasks), or only add the n columns with the lowest reduced cost (see the sketch after this list).
  3. There are essentially two options: branch-and-price and branch-and-cut. In branch-and-cut, you add valid inequalities after solving each RMP until you get an integer solution. There is also branch-price-and-cut, which combines both approaches.
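
To illustrate point 2, here is a small standalone Python sketch of the filtering step. The data representation is purely illustrative (a column is a (cost, set of task indices) pair, the duals a dict from task index to dual value); adapt it to your own structures:

```python
def reduced_cost(column, duals):
    """Reduced cost of a column: its cost minus the duals of the tasks it covers."""
    cost, tasks = column
    return cost - sum(duals[t] for t in tasks)

def filter_columns(candidates, duals, n_max=10, tol=1e-9):
    """Keep at most n_max improving, pairwise task-disjoint columns."""
    # 1. keep only columns with negative reduced cost, most negative first
    improving = sorted((c for c in candidates if reduced_cost(c, duals) < -tol),
                       key=lambda c: reduced_cost(c, duals))
    # 2. greedily keep a pairwise-disjoint subset
    selected, covered = [], set()
    for cost, tasks in improving:
        if tasks & covered:          # shares a task with an already kept column
            continue
        selected.append((cost, tasks))
        covered |= tasks
        if len(selected) == n_max:   # cap on columns added per iteration
            break
    return selected

# Toy usage
duals = {0: 3.0, 1: 2.5, 2: 4.0}
candidates = [(4.0, {0, 1}), (3.0, {1, 2}), (2.0, {2})]
print(filter_columns(candidates, duals, n_max=2))   # -> [(3.0, {1, 2})]
```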

If you don't necessarily want an exact method:

  1. You can use a heuristic stopping criterion such as:
  • stop if the objective has not decreased by x% in the last y iterations;
  • stop after y iterations;
  • etc.
  2. No additional tricks, I am afraid...
  3. What you can do is a restricted master heuristic: solve the RMP, then build a MILP restricted to the columns generated so far and solve that MILP with your favorite solver, et voilà! The danger of doing this is that the resulting MILP may be infeasible; whether that happens depends on the application. A rough sketch (together with a simple stopping rule) is shown below.
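
Here is a rough sketch of points 1 and 3 (Python with PuLP, assuming a set-partitioning master with "= 1" rows and placeholder data shapes): a heuristic stopping rule for the CG loop, and the restricted master heuristic, i.e. re-solving the RMP over the generated columns with binary variables.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD, LpStatus, value

def should_stop(objective_history, x_pct=0.1, y=20):
    """Stop CG if the RMP objective has not decreased by x_pct percent over the last y iterations."""
    if len(objective_history) <= y:
        return False
    old, new = objective_history[-y - 1], objective_history[-1]
    return old - new < (x_pct / 100.0) * abs(old)

def restricted_master_heuristic(column_costs, column_coeffs, n_rows):
    """Solve the MILP over the columns generated so far (column_coeffs[j][i] = a_ij)."""
    n_cols = len(column_costs)
    mip = LpProblem("restricted_master", LpMinimize)
    lam = [LpVariable(f"lambda_{j}", cat="Binary") for j in range(n_cols)]
    mip += lpSum(column_costs[j] * lam[j] for j in range(n_cols))
    for i in range(n_rows):
        mip += lpSum(column_coeffs[j][i] * lam[j] for j in range(n_cols)) == 1, f"row_{i}"
    mip.solve(PULP_CBC_CMD(msg=0))
    if LpStatus[mip.status] != "Optimal":
        return None   # the restricted MILP may well be infeasible
    return value(mip.objective), [j for j in range(n_cols) if lam[j].varValue > 0.5]
```

Keep in mind this is only a sketch: how you represent the columns, and whether the "= 1" rows should instead be "≥ 1" to reduce the risk of infeasibility, depends entirely on your application.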
fredq
1

You might want to try the branch-and-price method (combined with cutting planes, it becomes branch-cut-and-price). I'm not sure which, if any, ILP solvers directly support branch-and-price, but there are "frameworks" out there (such as SYMPHONY and BCP) that let you plug in an ILP solver and apply B&P.

prubin