Preface

Work on this book started in October 2017. At the beginning of 2018, IntechOpen invited me to serve as its editor.

Since I had already served as the editor of the book *Applications from Engineering with MATLAB Concepts* in 2016, I accepted the offer. The general aim of this book is to present selected optimization algorithms in the areas of engineering, science and economics. Many contributions were received during the selection process, but some had to be rejected, mostly because they were too theoretical or lacked clear explanations of the algorithms. My intention was to provide a book with a focus on: (1) clear motivation of the studied problems, (2) descriptions of the algorithms accessible to a broad spectrum of scientists, and (3) (computational) examples that a reader can easily reproduce. At the final stage, only seven independent chapters remained. I hope our book, entitled *Optimization Algorithms - Examples*, will serve as a useful reference for students, scientists and engineers.

I am grateful to each author for the technical effort presented in each chapter and for their patience when working on revisions.

My biggest thanks go to Ms. Ivana Glavic, Author Service Manager at IntechOpen, who guided me through the many stages of the editorial process. Together, we did our best to ensure the book's high quality.

> **Dr. Jan Valdman**
> Institute of Mathematics and Biomathematics, University of South Bohemia, České Budějovice, Czech Republic
> Institute of Information Theory and Automation of the ASCR, Prague, Czech Republic

**Chapter 1**


**Distributionally Robust Optimization**


Jian Gao, Yida Xu, Julian Barreiro-Gomez, Massa Ndong, Michalis Smyrnakis and Hamidou Tembine

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.76686

#### Abstract

This chapter presents a class of distributionally robust optimization problems in which a decision-maker has to choose an action in an uncertain environment. The decision-maker has a continuous action space and aims to learn her optimal strategy. The true distribution of the uncertainty is unknown to the decision-maker. This chapter provides alternative ways to select a distribution based on empirical observations of the decision-maker. This leads to a distributionally robust optimization problem. Simple algorithms, whose dynamics are inspired from the gradient flows, are proposed to find local optima. The method is extended to a class of optimization problems with orthogonal constraints and coupled constraints over the simplex set and polytopes. The designed dynamics do not use the projection operator and are able to satisfy both upper- and lower-bound constraints. The convergence rate of the algorithm to generalized evolutionarily stable strategy is derived using a mean regret estimate. Illustrative examples are provided.

DOI: 10.5772/intechopen.76686

Keywords: distribution robustness, gradient flow, Bregman divergence, Wasserstein metric, f-divergence

#### 1. Introduction

Robust optimization can be defined as the process of determining the best or most effective decision, with respect to a quantitative performance measure, under worst-case uncertain functions or parameters. The optimization may occur in terms of best robust design, net cash flows, profits, costs, benefit/cost ratio, quality of experience, satisfaction, end-to-end delay, completion time, etc. Other measurement units may be used, such as units of production or production time, and optimization may occur in terms of maximizing production units or minimizing processing time.
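As a minimal worked instance of this worst-case viewpoint, consider a decision-maker choosing an order quantity when demand is only known to lie in a small uncertainty set. The numbers and cost structure below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Assumed uncertainty set for the demand d: only these scenarios are possible.
demands = np.array([80.0, 100.0, 120.0])

def cost(x, d):
    # Overage cost of 1 per unsold unit, underage cost of 3 per unit of
    # unmet demand (hypothetical values).
    return 1.0 * max(x - d, 0.0) + 3.0 * max(d - x, 0.0)

# Robust choice: minimize over x the worst-case cost over the uncertainty set.
candidates = np.arange(60.0, 141.0)
worst = [max(cost(x, d) for d in demands) for x in candidates]
x_robust = candidates[int(np.argmin(worst))]
print(x_robust, min(worst))   # robust order quantity and its worst-case cost
```

The robust decision hedges between the extreme scenarios, balancing the overage cost against the larger underage cost rather than optimizing for any single demand forecast.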


> © 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.






