Reference List

E. Şahin, M. Çakmak, M. R. Doğar, E. Uğur, and G. Üçoluk, “To afford or not to afford: A new formalization of affordances toward affordance-based robot control,” Adaptive Behavior, vol. 15, no. 4, 2007.

A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis, “Object-based affordances detection with convolutional neural networks and dense conditional random fields,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.

M. A. Zamani, S. Magg, C. Weber, S. Wermter, and D. Fu, “Deep reinforcement learning using compositional representations for performing instructions,” Paladyn, Journal of Behavioral Robotics, vol. 9, no. 1, 2018.

R. Bhattacharyya and S. M. Hazarika, “Object affordance driven inverse reinforcement learning through conceptual abstraction and advice,” Paladyn, Journal of Behavioral Robotics, vol. 9, no. 1, 2018.

H. Wu, D. Misra, and G. S. Chirikjian, “Is that a chair? Imagining affordances using simulations of an articulated human body,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2020, pp. 7240–7246.

V. Kumar and E. Todorov, “MuJoCo HAPTIX: A virtual reality system for hand manipulation,” in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015, pp. 657–663.

F. Busch, C. Heathcote, R. Jakob, B. Tegetmeier, and A. Vakili, “VR-based human-robot-affordance transfer,” 2022. [Online]. Available: cgvr.cs.uni-bremen.de/teaching/studentprojects/vrrat/

F. Ostiategui, A. Amundarain, A. Lozano Rodero, and L. Matey, “Gardening work simulation tool in virtual reality for disabled people tutorial,” in Proceedings of Integrated Design and Manufacturing-Virtual Concept (IDMME’10), 2010.

A. Raikwar, N. D’Souza, C. Rogers, et al., “CubeVR: Digital affordances for architecture undergraduate education using virtual reality,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019, pp. 1623–1626.

N. Vaughan and B. Gabrys, “Comparing and combining time series trajectories using dynamic time warping,” Procedia Computer Science, vol. 96, 2016, doi: 10.1016/j.procs.2016.08.106.

S. Munikoti, D. Agarwal, L. Das, M. Halappanavar, and B. Natarajan, “Challenges and opportunities in deep reinforcement learning with graph neural networks: A comprehensive review of algorithms and applications,” 2022. arxiv.org/pdf/2206.07922.pdf

Y.-C. Liao, K. Todi, A. Acharya, A. Keurulainen, A. Howes, and A. Oulasvirta, “Rediscovering affordance: A reinforcement learning perspective,” in CHI Conference on Human Factors in Computing Systems, Apr. 2022.

R. Traoré, H. Caselles-Dupré, T. Lesort, T. Sun, N. Díaz-Rodríguez, and D. Filliat, “Continual reinforcement learning deployed in real-life using policy distillation and sim2real transfer,” 2019.

S. Adams, T. Cody, and P. A. Beling, “A survey of inverse reinforcement learning,” Artificial Intelligence Review, vol. 55, 2022.

R. Abdalla and V. Tao, “Integrated distributed GIS approach for earthquake disaster modeling and visualization,” in Geo-information for Disaster Management, Springer, Berlin, Heidelberg, 2005, pp. 1183–1192.

T. Erez, Y. Tassa, and E. Todorov, “Simulation tools for model-based robotics: Comparison of Bullet, Havok, MuJoCo, ODE and PhysX,” in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 4397–4404, doi: 10.1109/ICRA.2015.7139807.

B. Wirtz, “Havok Game Engine: Learn the Basics and Physics of Creating Real-Looking Characters,” Video Game Design and Development, Oct. 13, 2022. www.gamedesigning.org/engines/havok

“Havok Physics,” Havok. www.havok.com/havok-physics

C. Freeman, E. Frey, A. Raichuk, S. Girgin, I. Mordatch, and O. Bachem, “Brax – A differentiable physics engine for large scale rigid body simulation,” 2021, doi: 10.48550/arXiv.2106.13281.

“Speeding up Reinforcement Learning with a New Physics Simulation Engine,” Google AI Blog, Jul. 15, 2021. ai.googleblog.com/2021/07/speeding-up-reinforcement-learning-with

S. Gillen and K. Byl, “Leveraging reward gradients for reinforcement learning in differentiable physics simulations,” 2022, doi: 10.48550/arXiv.2203.02857.

R. Kooijman, “Evaluation of Open Dynamics Engine software,” Tech United. Retrieved November 7, 2022, from www.techunited.nl/media/files/humanoid/RichardKooijman_INT2010_Evaluation_Open_Dynamics_Engine.pdf

M. Körber et al., “Comparing popular simulation environments in the scope of robotics and reinforcement learning,” Mar. 8, 2021. arxiv.org/abs/2103.04616v1

“NVIDIA PhysX: Libraries and Latest Releases,” NVIDIA Developer. Retrieved November 7, 2022, from developer.nvidia.com/physx-sdk

“NVIDIA PhysX SDK 4.1 Documentation.” gameworksdocs.nvidia.com/PhysX/4.1/documentation/physxguide/Index.html

“Physics,” Unreal Engine 4.27 Documentation. docs.unrealengine.com/4.27/en-US/InteractiveExperiences/Physics

E. Todorov, T. Erez, and Y. Tassa, “MuJoCo: A physics engine for model-based control,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, pp. 5026–5033, doi: 10.1109/IROS.2012.6386109.

K. S. Choi, S. H. Chan, and W. M. Pang, “Virtual suturing simulation based on commodity physics engine for medical learning,” Journal of Medical Systems, vol. 36, pp. 1781–1793, 2012, doi: 10.1007/s10916-010-9638-1.

“Chaos Physics Overview,” Unreal Engine 4.26 Documentation. Retrieved November 8, 2022, from docs.unrealengine.com/4.26/en-US/InteractiveExperiences/Physics/ChaosPhysics/Overview/

“PyBullet Quickstart Guide,” Google Docs. Retrieved November 9, 2022, from docs.google.com/document/d/10sXEhzFRSnvFcl3XxNGhnD4N2SedqwdAvK3dsihxVUA/

“A survey on deep reinforcement learning-based approaches for adaptation and generalization.” arxiv.org/ftp/arxiv/papers/2202/2202.08444.pdf

F. Pardo, A. Tavakoli, V. Levdik, and P. Kormushev, “Time limits in reinforcement learning,” in Proceedings of the 35th International Conference on Machine Learning (ICML), PMLR, vol. 80, 2018. proceedings.mlr.press/v80/pardo18a.html

W. Zhao, J. P. Queralta, and T. Westerlund, “Sim-to-real transfer in deep reinforcement learning for robotics: A survey,” in 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 2020. ieeexplore.ieee.org/abstract/document/9308468

K. Khetarpal, Z. Ahmed, G. Comanici, D. Abel, and D. Precup, “What can I do here? A theory of affordances in reinforcement learning,” in Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.

A. Nair, B. McGrew, M. Andrychowicz, W. Zaremba, and P. Abbeel, “Overcoming exploration in reinforcement learning with demonstrations,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018. ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8463162

P. Zech, S. Haller, S. Rezapour Lakani, B. Ridge, E. Ugur, and J. Piater, “Computational models of affordance in robotics: A taxonomy and systematic classification,” Adaptive Behavior, vol. 25, no. 5, 2017. journals.sagepub.com/doi/pdf/10.1177/1059712317726357

J. Buckman, D. Hafner, G. Tucker, E. Brevdo, and H. Lee, “Sample-efficient reinforcement learning with stochastic ensemble value expansion,” in Advances in Neural Information Processing Systems (NeurIPS), 2018. proceedings.neurips.cc/paper/2018/file/f02208a057804ee16ac72ff4d3cec53b-Paper.pdf

X. Liu, Y. Chen, and X. Liu, “Reinforcement learning for object affordance detection,” 2015.

X. Liu, Y. Chen, and X. Liu, “Reinforcement learning for object affordance detection: A critical review,” 2017.

J. Kober and J. Peters, “Challenges and opportunities in deep reinforcement learning for robotics,” 2013.

X. Liu, Y. Chen, and X. Liu, “A survey on deep reinforcement learning: Classic and contemporary approaches,” 2019.

K. Hofmann, M. Toussaint, and J. A. Bagnell, “Hierarchical reinforcement learning for affordance-based robots.”

A. A. A. Elgammal and M. S. Kamel, “Challenges and limitations in reinforcement learning for robotics control,” 2018.

W. Liu, W. Yuan, J. Tan, and S. Wu, “Affordance-based reinforcement learning for kitchen robots,” 2017.

M. Meila, D. Precup, and Y. S. Yang, “Reinforcement learning in robotics: A survey,” 2015.

S. Saito, K. Yamamoto, and S. Uchida, “Reinforcement learning for robotic affordance detection,” IEEE Robotics and Automation Letters, 2020.

T. Anand, N. Jain, and M. Singh, “Reinforcement learning for kitchen affordance recognition,” International Journal of Advanced Robotics Systems, 2020.