I will present my research at the 2024 European Conference on Operational Research!
I will give an oral presentation of my work at the EURO 2024 conference! The paper and proceedings are NDA protected, so I'll try to make this post fully comprehensive with minimal material. Have a great read!
Oral Spotlight details:
- When: Monday, 8:30-10:00
- Where: Room 98 (building 306)
- Session: Quantum Computing Optimization
🚨 Program link: You can find the full program entry here.
🚨 Note: on the program website, first select “EURO 2024” in the conference selector at the top, then the link will work properly
A simple example: assigning tasks to workers
Imagine you need to assign three tasks to three workers, with the rule that each worker gets exactly one task (this is the constraint). Your goal is to minimize the total cost of the assignments based on how well each worker matches each task (this is the objective).
A brute-force classical approach would try all possible assignments and pick the best one. A quantum computer can explore many assignments simultaneously using quantum superposition, potentially finding good solutions faster. However, we need to ensure the quantum algorithm only considers valid assignments (where each worker gets exactly one task) and finds the lowest-cost valid assignment.
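To make this concrete, here is a minimal brute-force sketch of the toy problem in Python. The 3×3 cost matrix is made up purely for illustration and is not an instance from the paper:

```python
from itertools import permutations

import numpy as np

# cost[w][t] = cost of giving task t to worker w (hypothetical values)
cost = np.array([
    [4, 2, 8],
    [4, 3, 7],
    [3, 1, 6],
])

# Valid assignments are modelled here as permutations:
# each worker gets exactly one distinct task.
best_cost, best_assignment = min(
    (sum(cost[w, t] for w, t in enumerate(perm)), perm)
    for perm in permutations(range(3))
)
print(f"best assignment (worker -> task): {best_assignment}, cost = {best_cost}")
```

Checking all $3! = 6$ permutations is trivial here, but the number of candidate assignments grows factorially with the number of workers, which is what motivates trying heuristics such as QAOA.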
Abstract
We present an optimal assignment problem executed on IBM quantum computers using the Quantum Approximate Optimization Algorithm (QAOA) (Farhi et al., 2014). We compare two strategies for ensuring the quantum algorithm respects the “one task per worker” constraint:
(i) Penalty approach: allow invalid assignments but add a large penalty to their cost, so the algorithm naturally avoids them.
(ii) Constraint-preserving approach: design the quantum operations so they can only produce valid assignments, never invalid ones.
The penalty approach is simpler to implement but requires careful tuning and makes the quantum circuit deeper (more operations). The constraint-preserving approach guarantees valid solutions but can also increase circuit depth in a different way. On small problem instances that fit current quantum hardware, we compare how deep the circuits need to be and how well each method finds good solutions.
How QAOA works
The Quantum Approximate Optimization Algorithm (QAOA) works by preparing a quantum state that encodes potential solutions, then adjusting parameters to make the state favor better solutions. The algorithm alternates between two types of operations:
- Cost operations: these make the quantum state favor solutions with lower cost (better assignments in our example).
- Mixing operations: these explore different solutions by creating quantum superpositions.
The depth‑$p$ QAOA applies $p$ layers of these alternating operations: \(|\psi(\boldsymbol{\beta},\boldsymbol{\gamma})\rangle = \prod_{l=1}^{p} e^{-i \beta_l H_M}\, e^{-i \gamma_l H_C}\, |+\rangle^{\otimes n}, \quad C(\boldsymbol{\beta},\boldsymbol{\gamma})=\langle \psi|H_C|\psi\rangle,\) where $H_C$ encodes the cost (objective plus any penalties) and $H_M$ is the mixer. The parameters $(\boldsymbol{\beta},\boldsymbol{\gamma})$ are optimized to minimize the expected cost $C(\boldsymbol{\beta},\boldsymbol{\gamma})$.
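As a minimal illustration of this formula, here is a small statevector sketch of a depth-1 QAOA layer on two qubits. The diagonal cost values and the angles $(\beta, \gamma)$ are arbitrary illustrative choices, not parameters from the paper:

```python
import numpy as np
from scipy.linalg import expm

n = 2
X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

# Diagonal cost Hamiltonian H_C: an (arbitrary) cost for each bitstring 00, 01, 10, 11
H_C = np.diag([3.0, 1.0, 2.0, 0.0]).astype(complex)

# Simple transverse-field mixer H_M = X_1 + X_2
H_M = np.kron(X, I) + np.kron(I, X)

def qaoa_state(betas, gammas):
    """Apply p alternating layers exp(-i*beta*H_M) exp(-i*gamma*H_C) to |+>^n."""
    psi = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)  # uniform superposition
    for beta, gamma in zip(betas, gammas):
        psi = expm(-1j * gamma * H_C) @ psi   # cost layer
        psi = expm(-1j * beta * H_M) @ psi    # mixing layer
    return psi

psi = qaoa_state(betas=[0.6], gammas=[0.9])   # p = 1, hand-picked angles
expected_cost = np.real(psi.conj() @ H_C @ psi)
print("C(beta, gamma) =", round(expected_cost, 4))
```

In practice the angles are not hand-picked: a classical optimizer adjusts $(\boldsymbol{\beta},\boldsymbol{\gamma})$ in a loop to minimize this expected cost.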
Examples of similar approaches (not ours)
Penalty approach: we modify the cost function to include penalties for constraint violations: \(H_C \;=\; H_{\text{obj}} \;+\; \sum_{k} \lambda_k\, h_k,\) where $h_k$ measure how much a solution violates the constraints and $\lambda_k>0$ are penalty weights. Think of it like adding a large fine for invalid assignments—if $\lambda_k$ is large enough, invalid solutions become so expensive that the algorithm avoids them. However, these extra penalty terms make the quantum circuit deeper (more operations), and choosing the right penalty weights requires careful tuning.
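As a toy sketch of the penalty idea (not code from the paper), here is a penalized cost for a 2-worker / 2-task instance with binary variables $x_{w,t}$; the cost matrix and the penalty weight are made-up values:

```python
import itertools

import numpy as np

cost = np.array([[1.0, 3.0],
                 [2.0, 1.0]])   # hypothetical cost of giving task t to worker w
lam = 10.0                      # penalty weight, chosen large enough to dominate the objective

def penalized_cost(x):
    """Objective plus lam times the squared violation of 'one task per worker'."""
    objective = float(np.sum(cost * x))
    violation = float(np.sum((x.sum(axis=1) - 1) ** 2))  # each row must sum to 1
    return objective + lam * violation

# Enumerate all 2^4 candidate bitstrings: invalid assignments get a large extra cost.
for bits in itertools.product([0, 1], repeat=4):
    x = np.array(bits).reshape(2, 2)
    print(bits, "->", penalized_cost(x))
```

If the weight is too small, an invalid assignment can still look cheaper than a valid one, which is why the tuning mentioned above matters.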
Constraint‑preserving mixers: instead of penalizing invalid solutions, we design the mixer $H_M$ so it can only transform valid solutions into other valid solutions. For assignment problems where each worker must get exactly one task, you can use an XY‑style mixer that only swaps tasks between workers: \(H_M \;=\; \sum_{(i,j)\in\mathcal{E}} \big(X_i X_j + Y_i Y_j\big),\) where $\mathcal{E}$ is the set of qubit pairs between which a task may be swapped. This guarantees we never leave the space of valid assignments, but the mixer operations themselves can be more complex, potentially increasing circuit depth in a different way.
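A quick numerical check (again a sketch, not the paper's code) shows why an XY term is constraint-preserving: $X_i X_j + Y_i Y_j$ only couples $|01\rangle$ and $|10\rangle$, so it never changes how many qubits are in state $|1\rangle$ and therefore keeps one-hot encodings of valid assignments valid:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

XY = np.kron(X, X) + np.kron(Y, Y)      # two-qubit XY term on qubits (i, j)

# Hamming-weight operator on two qubits: counts the number of |1> states
W = np.diag([0, 1, 1, 2]).astype(complex)

print("commutes with Hamming weight:", np.allclose(XY @ W - W @ XY, 0))

# The mixer rotation exp(-i*beta*XY) mixes |01> and |10> but leaves |00> and |11> alone
U = expm(-1j * 0.7 * XY)
print(np.round(U, 3))
```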
Optimizing circuit depth afterwards
Many cost functions can be written as sums of simpler terms that involve only a few variables at a time, for example: \(H_C \;=\; \sum_{i<j} w_{ij}\, Z_i Z_j \;+\; \sum_i b_i\, Z_i.\) When operations on different parts of the problem commute (can be done in any order), we can schedule them in parallel layers, reducing the total circuit depth while achieving the same result. Without this scheduling the number of sequential layers can grow exponentially with problem size, so this step is crucial.
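Here is a minimal greedy-scheduling sketch (not the paper's compilation routine). Since all $Z_i Z_j$ terms commute, two of them can share a layer whenever they act on disjoint qubit pairs; the weighted terms below are an illustrative example:

```python
# (i, j, w_ij) terms of H_C = sum w_ij Z_i Z_j on 4 qubits (hypothetical weights)
terms = [(0, 1, 1.0), (1, 2, 0.5), (2, 3, 2.0), (0, 3, 1.5), (0, 2, 0.7), (1, 3, 0.3)]

layers = []                        # each layer: terms acting on disjoint qubit pairs
for term in terms:
    i, j, _ = term
    for layer in layers:
        if all(i not in (a, b) and j not in (a, b) for a, b, _ in layer):
            layer.append(term)     # fits alongside the terms already in this layer
            break
    else:
        layers.append([term])      # no existing layer is free on qubits i and j

for depth, layer in enumerate(layers, start=1):
    print(f"layer {depth}: {[(i, j) for i, j, _ in layer]}")
# The 6 all-to-all ZZ terms on 4 qubits fit into 3 parallel layers instead of 6 sequential ones.
```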
General takeaways in quantum computing
- Constraint handling: the penalty approach is conceptually simple but makes circuits deeper and requires tuning penalty weights. The constraint-preserving mixer approach naturally avoids invalid solutions but may also increase circuit depth through more complex mixer operations.
- Depth vs solution quality: we compare how deep the quantum circuits need to be versus how well they concentrate probability on good, valid solutions. On current noisy quantum hardware, shorter circuits are better, but we also need the algorithm to find good solutions reliably.
- Practical considerations: for small problems that fit current quantum hardware, constraint-preserving mixers can outperform penalties when it’s difficult to tune the penalty weights correctly. However, each mixer layer may itself be deeper, so the trade-off depends on the specific problem structure.