Title: GRADE: Generating Realistic Animated Dynamic Environments for Robotics Research
Authors: Elia Bonetto, Chenghao Xu, and Aamir Ahmad
Abstract: Simulation engines such as Gazebo, Unity, and Webots are widely adopted in robotics. However, they lack either full simulation control, ROS integration, realistic physics, or photorealism. Recently, synthetic data generation with realistic rendering has advanced tasks such as target tracking and human pose estimation. However, because these efforts focus on vision applications, they usually lack information such as sensor measurements (e.g., IMU, LiDAR, joint states) or time continuity. Conversely, simulations for most robotics applications are run in (semi-)static environments, with specific sensor settings and low visual fidelity. In this work, we address these issues with a fully customizable framework for generating realistic animated dynamic environments (GRADE) for robotics research. The generated data can be post-processed, e.g. to add noise, and easily extended with new data using the tools that we provide. To demonstrate GRADE, we use it to produce an indoor dynamic-environment dataset and compare several SLAM algorithms on various sequences. In doing so, we show how current research over-relies on well-known benchmarks and fails to generalize. Furthermore, our tests with YOLO and Mask R-CNN show that our data can improve training performance. Finally, we demonstrate GRADE's flexibility through indoor active SLAM, diverse environment sources, and a multi-robot scenario, employing different control, asset-placement, and simulation techniques. The code, results, implementation details, and generated data are provided as open source for the benefit of the community. The main project page is [ Link ]