When decisions shape uncertainty: Illinois researchers advance a new frontier in optimization

5/4/2026

Much of classical optimization research treats uncertainty as external to the decision-making process. Two new studies by Department of Industrial and Enterprise Systems Engineering researchers Grani A. Hanasusanto and Roy Dong develop distributionally robust optimization frameworks for settings in which decisions can shape future outcomes or determine which information becomes available later, even when the relevant distributions are not known precisely.

Associate Professor Grani Hanasusanto

Choosing what to know

In many real-world settings, information doesn't arrive on its own — someone has to retrieve it. A company deciding which R&D projects to fund, an engineer choosing where to place sensors, a hiring committee advancing candidates through rounds of evaluation: in each case, the decision itself determines what gets learned.

One study builds a framework around this idea, formally called "decision-dependent information discovery." Rather than assuming data arrives passively, it lets decision-makers weigh the cost of acquiring information against its potential value, choosing not only what to do but what is worth knowing. The framework does this under distributional robustness: the distribution of the uncertain parameters is itself not fully known, which is often the reality in settings where data can only be collected after an investment is made. The paper's main algorithmic contribution is a K-adaptability approximation scheme, which selects a small set of candidate decisions ahead of time and implements the best one once the acquired information is revealed. Applied to R&D project portfolio optimization and the best box problem, the approach consistently outperforms purely robust methods that ignore distributional information, improving outcomes by up to 51% in some cases. The paper, "Distributionally Robust Optimization with Decision-Dependent Information Discovery," was published in Mathematical Programming in April 2026 and is co-authored by Hanasusanto, Qing Jin, Angelos Georghiou and Phebe Vayanos.
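The flavor of K-adaptability can be conveyed with a toy sketch. Everything below — the decisions, costs, and candidate distributions — is made up for illustration and is not the paper's formulation: a small set of K policies is fixed up front, the best one in the set is implemented after the uncertainty is observed, and the set is chosen to minimize worst-case expected cost over an ambiguity set of distributions.

```python
import itertools

# Toy K-adaptability sketch (illustrative only; not the paper's model).
decisions = ["A", "B", "C"]          # candidate second-stage decisions
scenarios = ["low", "med", "high"]   # realizations of the uncertainty

# cost[d][s]: cost of decision d under scenario s (made-up numbers)
cost = {
    "A": {"low": 1.0, "med": 4.0, "high": 9.0},
    "B": {"low": 3.0, "med": 3.0, "high": 3.0},
    "C": {"low": 8.0, "med": 2.0, "high": 1.0},
}

# Ambiguity set: a few plausible scenario distributions
ambiguity_set = [
    {"low": 0.6, "med": 0.3, "high": 0.1},
    {"low": 0.2, "med": 0.5, "high": 0.3},
    {"low": 0.1, "med": 0.2, "high": 0.7},
]

def worst_case_cost(policy_set):
    """Worst-case expected cost when the cheapest policy in the set is
    implemented after the scenario is observed."""
    return max(
        sum(p[s] * min(cost[d][s] for d in policy_set) for s in scenarios)
        for p in ambiguity_set
    )

# K = 2: pick the pair of candidate decisions with the best worst-case cost
K = 2
best_set = min(itertools.combinations(decisions, K), key=worst_case_cost)
print(best_set, worst_case_cost(best_set))
```

Here committing to the pair {A, C} hedges across all three distributions better than any single robust decision could, which is the intuition behind the approximation scheme.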

Accounting for how decisions reshape the system

Assistant Professor Roy Dong

The second study, co-authored by Hanasusanto and Dong, takes on a related but distinct problem: what happens when your decisions change the very system you're trying to optimize?

This is more common than it might seem. Airlines set prices, and passengers adjust their behavior. Institutional investors make trades, and markets move. Machine learning models issue predictions, and people respond to them. In each case, the feedback loop between decision and environment means the data you used to make a decision may no longer describe the world after you've made it.

What makes this especially difficult is that decision-makers rarely know exactly how the environment will respond — they may have a reference model, but the true response could differ from it in ways that are hard to anticipate. The team's framework addresses this directly, building in robustness to uncertainty in the decision-environment relationship itself. Their approach is built around what they call a Repeated Robust Risk Minimization algorithm, which optimizes against a range of plausible scenarios rather than a single predicted outcome, updating iteratively as the picture shifts. Tested across three applications — strategic classification, revenue management and demand response portfolio optimization — the framework consistently outperforms both non-robust baselines and prior distributionally robust approaches. The paper, "Distributionally Robust Performative Optimization," was presented at NeurIPS 2025, and is co-authored by Dong, Hanasusanto, Zhuangzhuang Jia, and Yijie Wang.
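The iterative idea can be sketched in a few lines. The model below is a hypothetical stand-in, not the paper's algorithm: the deployed decision `theta` shifts the data mean, the shift's sensitivity `eps` is only known to lie in an interval, and each round re-optimizes against the worst case induced by the previous deployment until the decision stabilizes.

```python
# Toy "deploy, then re-optimize robustly" loop (an illustrative sketch;
# the paper's Repeated Robust Risk Minimization algorithm is more general).
# Deploying theta shifts the data mean to mu0 - eps * theta, where the
# sensitivity eps is only known to lie in [eps_lo, eps_hi].
mu0 = 5.0
eps_lo, eps_hi = 0.3, 0.7                 # ambiguity about the response
thetas = [i / 100 for i in range(1001)]   # decision grid on [0, 10]

def robust_risk(theta, theta_deployed):
    """Worst-case squared error of theta against the distribution the
    environment induces in response to the deployed decision."""
    return max(
        (theta - (mu0 - eps * theta_deployed)) ** 2
        for eps in (eps_lo, eps_hi)
    )

theta = 0.0                               # initial deployment
for _ in range(50):                       # repeat: deploy, re-optimize
    theta_next = min(thetas, key=lambda t: robust_risk(t, theta))
    if abs(theta_next - theta) < 1e-6:
        break                             # reached a stable decision
    theta = theta_next

print(round(theta, 2))
```

The loop settles near a fixed point where the decision is optimal against the very distribution it induces, hedged over the uncertain response; in this toy model that point is near 10/3.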

A more realistic picture of uncertainty

"What these models offer is a way to make decisions that remain reliable even when the underlying uncertainty is not fully known and available data are limited," said Hanasusanto. "In credit scoring, for example, obtaining labeled data may require expensive evaluations, expert judgment, or long observation periods. Our results demonstrate that robust approaches consistently outperform non-robust baselines in settings like these."

The broader implication is a shift in how optimization problems get framed — away from the assumption that uncertainty is something fixed to be managed, and toward a more honest account of how decisions and environments shape each other.
