Bibliography#

[1]

Farshid Azadian, Alper E. Murat, and Ratna Babu Chinnam. Dynamic routing of time-sensitive air cargo using real-time information. Transportation Research Part E: Logistics and Transportation Review, 48(1):355–372, 2012. URL: https://www.sciencedirect.com/science/article/pii/S1366554511000883.

[2]

Steven F. Baker, David P. Morton, Richard E. Rosenthal, and Laura Melody Williams. Optimizing military airlift. Operations Research, 50(4):582–602, 2002. URL: https://doi.org/10.1287/opre.50.4.582.2864.

[3]

Gerardo Berbeglia, Jean-François Cordeau, and Gilbert Laporte. Dynamic pickup and delivery problems. European Journal of Operational Research, 202(1):8–15, 2010. URL: https://www.sciencedirect.com/science/article/pii/S0377221709002999, doi:10.1016/j.ejor.2009.04.024.

[4]

Dimitris Bertsimas, Allison Chang, Velibor V. Mišić, and Nishanth Mundru. The airlift planning problem. Transportation Science, 53(3):773–795, 2019. URL: https://doi.org/10.1287/trsc.2018.0847.

[5]

Gerald G. Brown, W. Matthew Carlyle, Robert F. Dell, and John W. Brau. Optimizing intratheater military airlift in Iraq and Afghanistan. Military Operations Research, 18(3):35–52, 2013. URL: http://hdl.handle.net/10945/38129.

[6]

Felipe Delgado and Julio Mora. A matheuristic approach to the air-cargo recovery problem under demand disruption. Journal of Air Transport Management, 90:101939, 2021. URL: https://www.sciencedirect.com/science/article/pii/S0969699720305226.

[7]

Bo Feng, Yanzhi Li, and Zuo-Jun Max Shen. Air cargo operations: literature review and comparison with practices. Transportation Research Part C: Emerging Technologies, 56:263–280, 2015. URL: https://www.sciencedirect.com/science/article/pii/S0968090X15001175.

[8]

Waldy Joe and Hoong Chuin Lau. Deep reinforcement learning approach to solve dynamic vehicle routing problem with stochastic customers. Proceedings of the International Conference on Automated Planning and Scheduling, 30(1):394–402, Jun. 2020. URL: https://ojs.aaai.org/index.php/ICAPS/article/view/6685.

[9]

Jingwen Li, Liang Xin, Zhiguang Cao, Andrew Lim, Wen Song, and Jie Zhang. Heterogeneous attentions for solving pickup and delivery problem via deep reinforcement learning. IEEE Transactions on Intelligent Transportation Systems, 23(3):2306–2315, 2022. URL: https://ieeexplore.ieee.org/document/9352489.

[10]

Ann Nowé, Peter Vrancx, and Yann-Michaël De Hauwere. Game Theory and Multi-agent Reinforcement Learning, pages 441–470. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. URL: https://doi.org/10.1007/978-3-642-27645-3_14.

[11]

Liviu Panait and Sean Luke. Cooperative multi-agent learning: the state of the art. Autonomous agents and multi-agent systems, 11(3):387–434, 2005.

[12]

Ken Perlin. An image synthesizer. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '85, pages 287–296, New York, NY, USA, 1985. Association for Computing Machinery. URL: https://doi.org/10.1145/325334.325247.

[13]

S. Prasanna and F. L. Mohanty. NeurIPS RL competitions: Flatland challenge. https://slideslive.com/38940885/neurips-rl-competitions-flatland-challenge. Accessed: Feb 20, 2022.