E. Şahin, M. Çakmak, M. R. Doğar, E. Uğur, and G. Üçoluk, "To afford or not to afford: A new formalization of affordances toward affordance-based robot control," Adaptive Behavior, vol. 15, no. 4, 2007.
A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis, “Object-based affordances
detection with convolutional neural networks and dense conditional random fields,” in 2017
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.
M. A. Zamani, S. Magg, C. Weber, S. Wermter, and D. Fu, “Deep reinforcement learning using
compositional representations for performing instructions,” Paladyn, Journal of Behavioral
Robotics, vol. 9, no. 1, 2018.
R. Bhattacharyya and S. M. Hazarika, “Object affordance driven inverse reinforcement
learning through conceptual abstraction and advice,” Paladyn, Journal of Behavioral
Robotics, vol. 9, no. 1, 2018.
H. Wu, D. Misra, and G. S. Chirikjian, "Is that a chair? Imagining affordances using simulations of an articulated human body," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 7240–7246.
V. Kumar and E. Todorov, "MuJoCo HAPTIX: A virtual reality system for hand manipulation," in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015, pp. 657–663.
F. Ostiategui, A. Amundarain, A. Lozano Rodero, and L. Matey, "Gardening work simulation tool in virtual reality for disabled people tutorial," in Proceedings of Integrated Design and Manufacturing-Virtual Concept (IDMME'10), 2010.
A. Raikwar, N. D’Souza, C. Rogers, et al., “Cubevr: Digital affordances for architecture
undergraduate education using
virtual reality,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR),
2019, pp. 1623–1626.
N. Vaughan and B. Gabrys, "Comparing and combining time series trajectories using dynamic time warping," Procedia Computer Science, vol. 96, 2016. doi: 10.1016/j.procs.2016.08.106
S. Munikoti, D. Agarwal, L. Das, M. Halappanavar, and B. Natarajan, "Challenges and opportunities in deep reinforcement learning with graph neural networks: A comprehensive review of algorithms and applications," 2022. arXiv: 2206.07922
Y.-C. Liao, K. Todi, A. Acharya, A. Keurulainen, A. Howes, and A. Oulasvirta,
“Rediscovering affordance: A reinforcement learning perspective,” in CHI Conference on Human
Factors in Computing Systems, Apr. 2022.
R. Traoré, H. Caselles-Dupré, T. Lesort, T. Sun, N. Díaz-Rodríguez, and D. Filliat, "Continual reinforcement learning deployed in real-life using policy distillation and sim2real transfer," 2019.
S. Adams, T. Cody, and P. A. Beling, "A survey of inverse reinforcement learning," Artificial Intelligence Review, vol. 55, 2022.
R. Abdalla and V. Tao, "Integrated distributed GIS approach for earthquake disaster modeling and visualization," in Geo-information for Disaster Management, Springer, Berlin, Heidelberg, 2005, pp. 1183–1192.
T. Erez, Y. Tassa, and E. Todorov, "Simulation tools for model-based robotics: Comparison of Bullet, Havok, MuJoCo, ODE and PhysX," in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 4397–4404. doi: 10.1109/ICRA.2015.7139807
B. Wirtz, “Havok Game Engine: Learn the Basics and Physics of Creating Real-Looking
Characters.” Video Game Design and Development, 13 Oct. 2022,
www.gamedesigning.org/engines/havok
C. Freeman, E. Frey, A. Raichuk, S. Girgin, I. Mordatch, and O. Bachem, "Brax - A differentiable physics engine for large scale rigid body simulation," 2021. doi: 10.48550/arXiv.2106.13281
S. Gillen and K. Byl, "Leveraging reward gradients for reinforcement learning in differentiable physics simulations," 2022. doi: 10.48550/arXiv.2203.02857
M. Körber et al., "Comparing popular simulation environments in the scope of robotics and reinforcement learning," 2021. arXiv: 2103.04616v1, https://arxiv.org/abs/2103.04616v1
E. Todorov, T. Erez, and Y. Tassa, "MuJoCo: A physics engine for model-based control," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, pp. 5026–5033. doi: 10.1109/IROS.2012.6386109
K. S. Choi, S. H. Chan, and W. M. Pang, "Virtual suturing simulation based on commodity physics engine for medical learning," Journal of Medical Systems, vol. 36, pp. 1781–1793, 2012. doi: 10.1007/s10916-010-9638-1
K. Khetarpal, Z. Ahmed, G. Comanici, D. Abel, and D. Precup, "What can I do here? A theory of affordances in reinforcement learning," in Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
F. Pardo, A. Tavakoli, V. Levdik, and P. Kormushev, "Time limits in reinforcement learning," in Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
Reinforcement Learning for Object Affordance Detection" by X. Liu, Y. Chen, and X. Liu
(2015)
Reinforcement Learning for Object Affordance Detection: A Critical Review by X. Liu, Y.
Chen, and X. Liu (2017)
Challenges and Opportunities in Deep Reinforcement Learning for Robotics" by J. Kober
and J. Peters (2013)
A Survey on Deep Reinforcement Learning: Classic and Contemporary Approaches" by X. Liu,
Y. Chen, and X. Liu (2019)
Hierarchical Reinforcement Learning for Affordance-Based Robots" by Katja Hofmann, Marc
Toussaint, and J. Andrew Bagnell
"Challenges and limitations in reinforcement learning for robotics control" by Ahmed A.
A. Elgammal and Mohamed S. Kamel (2018)
"Affordance-Based Reinforcement Learning for Kitchen Robots" by Wei Liu, Weihang Yuan,
Jie Tan, and Shiguo Wu (2017)
"Reinforcement Learning in Robotics: A Survey" by Marina Meila, Doina Precup, and Y.
Shawn Yang (2015)
"Reinforcement Learning for Robotic Affordance Detection" by Shunsuke Saito, Kazuhito
Yamamoto, Seiichi Uchida (IEEE Robotics and Automation Letters, 2020)
"Reinforcement Learning for Kitchen Affordance Recognition" by T. Anand, N. Jain, M.
Singh (International Journal of Advanced Robotics Systems, 2020)