A smart wheelchair improves the quality of life for older adults by supporting their mobility independence. Some critical maneuvering tasks, like table docking and doorway passage, can be challenging for older adults in wheelchairs, especially those with additional impairment of cognition, perception or fine motor skills. Supporting such functions in a shared manner with robot control seems to be an ideal solution. Considering this, we propose to augment smart wheelchair perception with the capability to identify potential docking locations in indoor scenes.
ApproachFinder-CV is a computer vision pipeline that detects safe docking poses and estimates their desirability weight based on hand-selected geometric relationships and visibility. Although robust, this pipeline is computationally intensive. We leverage this vision pipeline to generate ground truth labels used to train an end-to-end differentiable neural net that is 15x faster.
ApproachFinder-NN is a point-based method that draws motivation from Hough voting and uses deep point cloud features to vote for potential docking locations. Both approaches rely on just geometric information, making them invariant to image distortions. A large-scale indoor object detection dataset, SUN RGB-D, is used to design, train and evaluate the two pipelines.
Potential docking locations are encoded as a 3D temporal desirability cost map that can be integrated into any real-time path planner. As a proof of concept, we use a model predictive controller that consumes this 3D costmap through efficiently designed task-driven cost functions that incorporate human intent. This wheelchair navigation controller outputs a nominal path that is safe, goal-oriented and jerk-free.
We use an MPPI (model predictive path integral) controller that works by interleaving optimization and execution. This is a sampling-based, derivative-free approach that was originally applied to aggressive autonomous driving. At each time step, the controller takes the estimate of the optimal control sequence from the previous time step and uses importance sampling to generate thousands of new control-input sequences. These sequences are propagated forward in time using the system dynamics, and each trajectory is evaluated according to a set of cost functions. The estimate of the optimal control sequence is then updated with a cost-weighted average over the sampled trajectories.
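The cost-weighted update described above can be sketched in a few lines of numpy. This is a minimal illustration on a toy double-integrator system, not the AutoRally implementation; the dynamics, cost, horizon and sampling parameters here are all hypothetical:

```python
import numpy as np

np.random.seed(0)

def mppi_update(u_nominal, dynamics, cost_fn, n_samples=256, noise_std=0.5, temperature=1.0):
    """One MPPI step: sample perturbed control sequences, roll them out
    through the dynamics, and return the cost-weighted average sequence."""
    horizon, u_dim = u_nominal.shape
    # Importance sampling around the previous optimal control sequence.
    candidates = u_nominal[None] + np.random.randn(n_samples, horizon, u_dim) * noise_std
    costs = np.empty(n_samples)
    for k in range(n_samples):
        x, total = np.zeros(2), 0.0          # toy state: [position, velocity]
        for t in range(horizon):
            x = dynamics(x, candidates[k, t])
            total += cost_fn(x, candidates[k, t])
        costs[k] = total
    # Cost-weighted average (a softmin over trajectory costs).
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    return np.tensordot(weights, candidates, axes=1)

# Toy double-integrator dynamics and a cost that drives position to 1.0.
dt = 0.1
dynamics = lambda x, u: np.array([x[0] + x[1] * dt, x[1] + u[0] * dt])
cost_fn = lambda x, u: (x[0] - 1.0) ** 2 + 0.01 * u[0] ** 2

u = np.zeros((20, 1))                        # horizon of 20 steps
for _ in range(20):                          # interleave optimization steps
    u = mppi_update(u, dynamics, cost_fn)
```

Each call refines the nominal sequence, which is why the controller can re-plan at every time step from the previous estimate.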
Our work uses the AutoRally platform with handcrafted cost functions that are meticulously designed to navigate a room while avoiding obstacles and following the user's intent. Specifically, we fuse the desirability cost map, the obstacle map and the user's joystick commands to produce safe, jerk-free paths.
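One simple way to picture this fusion is as a weighted sum over a per-heading cost volume. The sketch below is illustrative only; the weights, grid sizes and heading binning are hypothetical, not the values used in our launch files:

```python
import numpy as np

def fuse_costs(obstacle_map, desirability_map, headings, joystick_heading,
               w_obs=10.0, w_des=2.0, w_user=1.0):
    """Combine an obstacle grid, a desirability grid, and the user's
    joystick intent into a single per-cell, per-heading cost volume.
    obstacle_map, desirability_map: (H, W) arrays in [0, 1].
    headings: (B,) array of heading-bin centers in radians."""
    # Penalize deviation of each heading bin from the commanded heading,
    # wrapped into [0, pi].
    heading_err = np.abs(np.angle(np.exp(1j * (headings - joystick_heading))))
    cost = (w_obs * obstacle_map[None]                 # obstacles dominate
            + w_des * (1.0 - desirability_map[None])   # prefer desirable cells
            + w_user * heading_err[:, None, None])     # follow user intent
    return cost  # shape (B, H, W)

obstacle = np.zeros((4, 4)); obstacle[1, 2] = 1.0          # one blocked cell
desirability = np.zeros((4, 4)); desirability[3, 3] = 1.0  # a docking spot
bins = np.linspace(-np.pi, np.pi, 8, endpoint=False)       # 8 heading bins
cost = fuse_costs(obstacle, desirability, bins, joystick_heading=0.0)
```

A sampled trajectory is then scored by looking up its cells and headings in this volume, so obstacle avoidance, docking desirability and user intent all shape the same optimization.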
- Installation instructions can be found here. Please follow all the steps to compile this code.
- Clone or Fork this Repository in a catkin_workspace
- Install AutoRally ROS Dependencies:
Within the catkin workspace folder, run this command to install the packages this project depends on:
rosdep install --from-path src --ignore-src -y
- Compilation:
- Check Eigen version:
pkg-config --modversion eigen3
If version < 3.3.5, upgrade Eigen by following "Method 2" within the included INSTALL file.
- Run
catkin_make
from the workspace folder. Due to the additional requirement of ROS's distributed launch system, you must run
source src/autorally/autorally_util/setupEnvLocal.sh
- Launch the simulation environment: For in-depth details about the simulation environment, please refer here.
- Run the following command from the ApproachFinder-CV package:
roslaunch my_planning simulation.launch
- Launch the nodes that communicate between the MPPI controller and the simulation environment.
- This involves: a user joystick talker, a robot ground-state publisher, a robot command publisher and a localisation node with real-time obstacle marking and clearing. For details about these nodes, please refer to
shared_control.launch
roslaunch my_planning shared_control.launch
- Launch the MPPI controller node
roslaunch autorally_control standalone_path_integral_bf.launch
- Inputs:
- Use the R2 shoulder button to enable user joystick commands and the left axis stick to provide user intent.
- Use R1 to enable the robot. This passes the MPPI controller output to the robot's differential drive motors.
If everything loads successfully, then you should be able to see:
- A Gazebo office environment with a robot spawned at (4,4)
- In RViz you can see the VoteNet-detected objects, docking locations and desirability costmaps w.r.t. each heading bin.
- An occupancy grid map in which the robot is localised and which updates its obstacle belief when the environment changes.
- You can also see the nominal path generated by the controller. Once you enable the robot, it will start moving according to the given inputs.
Note: This demo requires 32GB of RAM to run successfully.
Supply a floor plan to the planner: Autorally expects an initial obstacle map, which it uses to initialise the controller parameters. A value of 0 defines the center of the track, and a value of 1 indicates off the track. The cost map is a simple text file with the following format:
- first entry: minimum x value
- second entry: maximum x value
- third entry: minimum y value
- fourth entry: maximum y value
- fifth entry: resolution of the grid.
- all remaining entries: the grid values in row-major order.
The total width of the grid is (maximum x - minimum x)*resolution, and similarly for the height. Some sample maps are kept in path_integral/params/maps.
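A map file in this format can be written and read back with a short script. The sketch below is illustrative (the filename, grid contents and resolution are hypothetical), and it assumes resolution is given in cells per meter, as implied by the width formula above:

```python
import numpy as np

def write_costmap_txt(path, grid, x_min, x_max, y_min, y_max, resolution):
    """Write a grid in the text format described above: five header
    entries, then the grid values (0 = free, 1 = off-track) in
    row-major order, one entry per line."""
    with open(path, "w") as f:
        for v in (x_min, x_max, y_min, y_max, resolution):
            f.write(f"{v}\n")
        for v in np.asarray(grid).ravel():   # row-major order
            f.write(f"{v}\n")

def read_costmap_txt(path):
    """Read the header and reshape the remaining entries into a grid."""
    vals = np.loadtxt(path)
    x_min, x_max, y_min, y_max, res = vals[:5]
    width = int(round((x_max - x_min) * res))
    height = int(round((y_max - y_min) * res))
    grid = vals[5:].reshape(height, width)
    return (x_min, x_max, y_min, y_max, res), grid

# Hypothetical 4 m x 2 m map at 2 cells per meter (an 8 x 4 grid).
grid = np.zeros((4, 8)); grid[0, :] = 1.0    # a wall along one edge
write_costmap_txt("example_map.txt", grid, 0.0, 4.0, 0.0, 2.0, 2.0)
```

Round-tripping a file this way is a quick sanity check that your header entries and grid dimensions are consistent before handing the map to the controller.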
The first task is to create such an obstacle map for your environment and supply it to the controller.
Following steps outline this process:
- For this work, we have used laser-based SLAM to build the 2D occupancy grid map. For this, we use slam_gmapping.
- Launch the slam_gmapping and build a static map using this node.
roslaunch my_planning slam_gmapping
(you might have to play around with launch file params).
- Once the map is built, store it as a .txt file for further processing.
roslaunch my_planning convert_costmap_node
(please change the filename according to your convenience).
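For intuition, the core of this step is mapping ROS occupancy grid values to the 0/1 convention described above. The following is not the actual convert_costmap_node implementation; the occupancy threshold and the handling of unknown cells are assumptions:

```python
import numpy as np

def occupancy_to_track(grid, unknown_as_obstacle=True):
    """Map ROS OccupancyGrid cell values to the controller's convention:
    0 = free (center of the track), 1 = obstacle (off the track).
    ROS convention: 0 = free, 100 = occupied, -1 = unknown."""
    grid = np.asarray(grid)
    out = (grid >= 50).astype(float)          # occupied cells become 1
    if unknown_as_obstacle:
        out[grid == -1] = 1.0                 # treat unexplored space as unsafe
    return out

# A tiny hypothetical occupancy grid with free, occupied and unknown cells.
ros_grid = np.array([[0, 0, 100],
                     [-1, 0, 100]])
track = occupancy_to_track(ros_grid)
```

Treating unknown space as an obstacle is the conservative choice for a wheelchair; relax it only if your SLAM map is known to be complete.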
- Convert this into autorally readable format using this command:
python track_converter.py -i <map_file.txt> -d map_image.jpg -o map.npz
- Finally, copy the generated map.npz file to the '/autorally_control/src/path_integral/params/maps/' directory.
- Replace the map_path param in standalone_path_integral_bf.launch accordingly.
The remaining steps are the same as those followed in the Demo Simulation.