Who wants to build a robot? Robots can be expensive, time consuming and challenging to get right.
For those of us who want to focus on the software systems without having to build a robot, there are simulators available that can help with development. One of these simulators is Gazebo. It has an interface to ROS, which makes development of control software easy. In this post, we'll show you how you can get started with Gazebo and ROS to experiment with robotic software.
ROS is middleware designed to control robotic systems. Before ROS, roboticists often had to reinvent the wheel and write a whole software system from scratch. With ROS it is easier to reuse components from other robotic systems, including ones that other people wrote. This gives you access to general functionality such as SLAM and inverse kinematics.
The Open Source Robotics Foundation (OSRF) maintains the base infrastructure of ROS and leaves the development and testing of additional functionality to third-party developers. This way packages can be individually tested, and more time is left to the robotics engineer to experiment. However, ROS takes some time to learn; especially making sense of all the dependencies and interactions can be complicated. This page will get you started with ROS and Gazebo on your PC, and gives some guidance on how to go about debugging when you are developing a ROS package.
First things first, let's install ROS. Here we will be using ROS Kinetic Kame (released May 23rd, 2016; at the time of writing we installed version 1.12.7). Make sure you have an operating system that is supported by ROS (for Kinetic, as we will assume here, this is Ubuntu 15.10 or 16.04). To install ROS, do the following:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
sudo rosdep init
rosdep update
source /opt/ros/kinetic/setup.bash
sudo apt-get install python-rosinstall python-rosinstall-generator python-wstool build-essential
In this, rosdep will ensure all package dependencies are installed. In general you can use rosdep to install ROS packages and all of their dependencies. To make sure you can develop your own packages as well, you need to set up a catkin workspace:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/
catkin_make
source devel/setup.bash
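Two practical notes. To avoid having to source the workspace in every new terminal, you can append it to your ~/.bashrc. And once you start cloning packages into the src folder, rosdep can install all of their system dependencies in one go (run from the workspace root):

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
rosdep install --from-paths src --ignore-src -r -y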
Catkin is the build system of ROS; it combines CMake with Python to create a build system that makes cross-compiling easier. Catkin (and catkin_make) should have been installed automatically when you installed ROS. To see if things are working, run:
roscore
This will display the ROS version and all active ROS-related services. Now you can start using ROS.
ROS consists of many packages and modules. These modules can interact with packages via message passing. This makes it relatively easy to connect your module to another module, but it makes it hard to see what is happening on the inside. Debugging can get complicated pretty quickly, and to assist with that there are a couple of useful commands. For a full overview, please refer to the official ROS documentation or to the cheat sheets available here.
Navigation
rospack find [package_name]: print the path to a package
rospack depends [package_name]: list the dependencies of a package
rospack list: list all packages on the system
roscd [package_name]: change directory into a package
rosls [package_name]: list the contents of a package directory
ROS interacts via message passing. Having insight into those messages will help you to debug behavior. The command rostopic is very useful to get insight into the different topics that are currently active.
rostopic bw: display the bandwidth used by a topic
rostopic echo: print the messages on a topic to the screen
rostopic find: find topics by message type
rostopic hz: display the publishing rate of a topic
rostopic info: print information about a topic
rostopic list: list the active topics
rostopic pub: publish data to a topic
rostopic type: print the message type of a topic
For more information about the messages you can use:
rosmsg show: display the fields of a message type
rosmsg list: list all message types
rosmsg package: list the message types in a package
rosmsg packages: list the packages that contain messages
With all these commands you can use tab completion to find packages, topics or messages that you would like to use. It is also possible to pipe topic commands to msg commands (e.g. "rostopic type rosout | rosmsg show" will show you the format of the messages published on the rosout topic).
rostopic type rosout | rosmsg show
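For example, a quick look at what is currently running might go like this (the exact topics depend on what you have started):

rostopic list                  # which topics are active?
rostopic info /rosout          # who publishes and subscribes to this topic?
rostopic echo -n 1 /rosout     # print a single message from the topic
rosmsg show rosgraph_msgs/Log  # the definition of that topic's message type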
Now that we can work with ROS, let's have a look at Gazebo. Gazebo comes with several models of its own, but you can develop, download and install additional ones as well.
To simulate a robotic arm we need to make sure we have a Gazebo model. We will make use of the Katana robotic arm. You can install the basic functionality of the robotic arm by doing the following:
sudo apt-get install ros-kinetic-katana-driver ros-kinetic-katana-arm-gazebo ros-kinetic-katana-gazebo-plugins
However, if you plan on modifying the files (which we will do in the next section), it is better to check out the repo here and build it with catkin_make. Either way, you should now have the package katana_arm_gazebo. Now run:
export KATANA_TYPE="katana_450_6m90a"
roslaunch katana_arm_gazebo katana_arm.launch
This should launch Gazebo. You should now see the following:
Gazebo allows you to visualize many things, such as inertia, forces and sensor information. You can also apply forces to joints: drag out the sidebar on the right-hand side to open the forces view, and in the left-hand pane select "katana" under "model". You should now have this view:
On the right hand side of the window you can now fill in forces and velocities per joint. When you do that you should see the arm moving.
Let’s take a closer look at extending the simulation
The model of the arm itself is described in a URDF file, or a xacro file. This is an XML file containing the dimensions of all the links, connected together by joints. If we want to extend the model with anything, we need to think about how to attach it. As an example, we extend the robotic arm with a laser range finder module. Append the following to katana_description/urdf/gazebo.urdf.xacro:
<gazebo reference="hokuyo_link">
  <sensor type="ray" name="head_hokuyo_sensor">
    <pose>0 0 0 0 0 0</pose>
    <visualize>true</visualize>
    <update_rate>40</update_rate>
    <ray>
      <scan>
        <horizontal>
          <samples>720</samples>
          <resolution>1</resolution>
          <min_angle>-1.570796</min_angle>
          <max_angle>1.570796</max_angle>
        </horizontal>
      </scan>
      <range>
        <min>0.10</min>
        <max>30.0</max>
        <resolution>0.01</resolution>
      </range>
      <noise>
        <type>gaussian</type>
        <!-- Noise parameters based on published spec for Hokuyo laser
             achieving "+-30mm" accuracy at range < 10m. A mean of 0.0m and
             stddev of 0.01m will put 99.7% of samples within 0.03m of the
             true reading. -->
        <mean>0.0</mean>
        <stddev>0.01</stddev>
      </noise>
    </ray>
    <plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_laser.so">
      <topicName>/rrbot/laser/scan</topicName>
      <frameName>hokuyo_link</frameName>
    </plugin>
  </sensor>
</gazebo>
And extend katana_description/urdf/katana_450_6m90.urdf.xacro with:
<joint name="hokuyo_joint" type="fixed">
  <axis xyz="0 1 0" />
  <origin xyz="0 0 0" rpy="0 0 0"/>
  <parent link="katana_base_link"/>
  <child link="hokuyo_link"/>
</joint>

<!-- Hokuyo Laser -->
<link name="hokuyo_link">
  <collision>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <geometry>
      <box size="0.1 0.1 0.1"/>
    </geometry>
  </collision>
</link>
What this does is add an additional joint and link that connect to the katana base link (the base). It will visualize the laser range finder with blue lines, so that you can see what it will measure.
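Note that the hokuyo_link above only defines a collision geometry. If you also want to see the sensor box itself in the simulation, you can optionally add a visual element inside the hokuyo_link definition, for example:

<visual>
  <origin xyz="0 0 0" rpy="0 0 0"/>
  <geometry>
    <box size="0.1 0.1 0.1"/>
  </geometry>
</visual>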
Now that the laser scanner has been added, we can find out how it works.
"rostopic echo /rrbot/laser/scan" will now give back the output of this sensor; this will look like the following:
rostopic echo /rrbot/laser/scan
It reports back all the distances it has measured. This gives a good insight into what to expect when designing and writing the code in your package that should handle these messages.
Now that we know our way around ROS, Gazebo as well as how to extend the models, we can write and develop control software.
Now that we have an arm that we can simulate, we need to write a ROS module to control it. For this we need to create a ROS package.
catkin_create_pkg <package_name> [depend1] [depend2] [depend3]
cd ~/catkin_ws
catkin_make
. ~/catkin_ws/devel/setup.bash
Let's create a test package with dependencies on the rospy and roscpp packages. These packages allow you to interface with ROS via C++ and Python. std_msgs allows you to send standard messages to other ROS modules. You can add as many dependencies as you need for your module. For this we do:
catkin_create_pkg test_package std_msgs rospy roscpp
This creates a test_package folder containing, among other things, a CMakeLists.txt and a package.xml. Now open "CMakeLists.txt"; you will see the following:
find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
)
Here is where the dependencies are listed. Further down the file you see where other resources are included:
## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
# add_executable(${PROJECT_NAME}_node src/test_package_node.cpp)

## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")
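For example, to actually build a node from a source file src/test_package_node.cpp (the file name matches the commented-out example above), you would uncomment and adapt the add_executable line and link the target against the catkin libraries:

add_executable(${PROJECT_NAME}_node src/test_package_node.cpp)
target_link_libraries(${PROJECT_NAME}_node ${catkin_LIBRARIES})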
You will also find a package.xml file, which is the manifest of the package. It will contain information about the author, license, as well as the package dependencies.
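To give an idea of what such a node looks like, here is a minimal sketch of src/test_package_node.cpp (the topic name "chatter" is just an example):

#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "test_package_node");  // register this node with the ROS master
  ros::NodeHandle nh;

  // advertise a std_msgs/String topic and publish on it at 1 Hz
  ros::Publisher pub = nh.advertise<std_msgs::String>("chatter", 10);
  ros::Rate rate(1);
  while (ros::ok())
  {
    std_msgs::String msg;
    msg.data = "hello from test_package";
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}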
A CMakeLists.txt and a package.xml are the minimum that makes up a package; from there you can start adding code. To make sure you can launch the functionality of the package, you will need to add a launch file. The content of the launch file will look something like this (example from a different package):
<?xml version="1.0"?>
<launch>
  <node pkg="teleop_gazebo" name="follow_set_joint" type="follow_set_joint" />
</launch>
pkg: the package that contains the node
name: the name under which the node will be registered with ROS
type: the name of the node's executable within the package
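Once the launch file is in place (conventionally in a launch directory of your package), you can start it with:

roslaunch <package_name> <launch_file>.launch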
Additional information can be found here and here.
So, how does one interact with the simulation model we just set up? For this we need to look at what the messages that are sent to the module actually look like. You can use the rostopic and rosmsg commands that we described before to do this.
Interaction typically entails registering a callback function at a node handler, which is explained next.
We need two callback routines: one is for the laser scanner and one for the joints.
To register a callback routine we need to subscribe its function with ROS. We do that via the node handle. This subscription consists of the topic to listen to, a queue size, and the callback function to call when a new message arrives.
For the laser scanner and the arm joints these look like the following:
joint_state_sub_ = nh_.subscribe("/joint_states", 1, &CallBackFunctionForArm, this);
scanSub = nh_.subscribe<sensor_msgs::LaserScan>("/rrbot/laser/scan", 10, &CallBackFunctionForLaser, this);
Every time a message is updated the callback routines will be called.
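If the callbacks are member functions of a controller class (here a hypothetical ArmController), the subscribe calls above would use the form &ArmController::CallBackFunctionForArm. A minimal sketch of the two callbacks is to simply store the latest message for later use; current_joint_state_ and got_joint_state_ reappear in the trajectory code further below, while latest_scan_ is an illustrative name:

void ArmController::CallBackFunctionForArm(const sensor_msgs::JointState::ConstPtr& msg)
{
  current_joint_state_ = msg->position;  // remember the latest joint positions
  got_joint_state_ = true;
}

void ArmController::CallBackFunctionForLaser(const sensor_msgs::LaserScan::ConstPtr& msg)
{
  latest_scan_ = *msg;  // remember the latest laser scan for the object detection step
}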
Communication is done via messages. It is important to understand what those messages contain before we can understand what we can do with them.
Laser (type sensor_msgs/LaserScan, published on /rrbot/laser/scan):
std_msgs/Header header
float32 angle_min        # start angle of the scan [rad]
float32 angle_max        # end angle of the scan [rad]
float32 angle_increment  # angular distance between measurements [rad]
float32 time_increment   # time between measurements [s]
float32 scan_time        # time between scans [s]
float32 range_min        # minimum valid range value [m]
float32 range_max        # maximum valid range value [m]
float32[] ranges         # the measured distances [m]
float32[] intensities    # intensity data (device specific)
Joints (type sensor_msgs/JointState, published on /joint_states):
string[] name       # the names of the joints
float64[] position  # position of each joint (angle or displacement)
float64[] velocity  # velocity of each joint
float64[] effort    # effort applied to each joint
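Note that name, position, velocity and effort are parallel arrays: the value at index i in position belongs to the joint named at index i in name. A small sketch of looking up one joint (the joint name shown is an assumption; check the output of rostopic echo /joint_states for the real names):

// find the current angle of one particular joint by name
for (size_t i = 0; i < msg->name.size(); ++i)
{
  if (msg->name[i] == "katana_motor1_pan_joint")  // assumed joint name
  {
    double pan_angle = msg->position[i];
    ROS_INFO("pan joint is at %f rad", pan_angle);
  }
}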
There are multiple ways to make the translation from the sensor data to positions, and from positions to joint angles. We take a straightforward approach, which may not be optimal, but should be relatively easy to understand and implement.
From the set of distance measurements we can find the objects around the arm. Each object is represented by multiple measurements. Any measurement that returns a value between the minimum value and the maximum value has found an object. We assume that when multiple successive measurements satisfy this, they belong to the same object. This means that at least one sample is measured outside those bounds between objects (i.e. the objects will not be overlapping).
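A sketch of this segmentation step, under the assumptions above (variable names are illustrative; min_object and max_object are the indices of the first and last sample of one object, and feed into the calculation below):

// group consecutive valid range measurements into objects
std::vector<std::pair<size_t, size_t> > objects;  // (min_object, max_object) index pairs
bool in_object = false;
size_t min_object = 0;
for (size_t i = 0; i < scan->ranges.size(); ++i)
{
  bool hit = scan->ranges[i] > scan->range_min && scan->ranges[i] < scan->range_max;
  if (hit && !in_object)        // first sample of a new object
  {
    in_object = true;
    min_object = i;
  }
  else if (!hit && in_object)   // first sample past the object: close it
  {
    in_object = false;
    objects.push_back(std::make_pair(min_object, i - 1));
  }
}
if (in_object)                  // the last object extends to the final sample
  objects.push_back(std::make_pair(min_object, scan->ranges.size() - 1));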
Since the objects are round, we can take the middle of each object as the point to aim for. The distance to this point can be calculated as the average of the rightmost observed distance and the leftmost observed distance.
double location = (max_object + min_object)/2;                                    // index of the middle sample of the object
double distance = (scan->ranges[max_object] + scan->ranges[min_object])*1000/2;   // average distance, converted to mm
location = location * scan->angle_increment + scan->angle_min;                    // convert the sample index to an angle
Taking the sine of the angle of the sample, and multiplying it with the distance should then result in the x coordinate. The same procedure with the cosine would give us the y coordinate. Any value between 0 and the height of the object would be acceptable for the z coordinate.
pos.x = sin(location) * distance;
pos.y = cos(location) * distance;
pos.z = 2;
This process can be repeated until all objects are found.
Recall from geometry lessons that for any triangle:
c² = a² + b² - 2·a·b·cos(δ)

Where "c" is the length of the side opposite of angle δ. Rearranged to compute an angle from three known side lengths, this gives δ = acos((a² + b² - c²) / (2·a·b)), which is the form used for the lift joints below.
The general idea is to split the arm up in triangles like so:
Now when we know the dimensions, solving it should be easy. The dimensions of the arm are as follows:
// arm dimensions (in mm)
const size_t height = 202;
const size_t link_a = 190;
const size_t link_b = 139;
const size_t link_c = 147;
const size_t link_d = 151;
const size_t link_bc = link_b + sqrt(link_c*link_c + link_d*link_d);
Where link BC is the imaginary dashed line that follows the direction of link B to the gripping point.
To calculate the distance to the object we will first calculate the top view distance to the object, and then the real distance from the top of the base of the arm to the grab point of the object.
//distance from the joint to the target:
float distance_top = sqrt(goal[0]*goal[0] + goal[1]*goal[1]) + 30;
float distance = sqrt(distance_top*distance_top + (height - goal[2])*(height - goal[2]));
Because of the triangle system we use, the arm will define the height of the base as 0 point. This means that we need to correct for that. We refer to this correction as “dip”.
double dip = acos(distance_top/distance); //dip is what the distance should dip to from the base of the arm
Of course this "dip" can be a lift as well, if the point the end effector needs to go to is higher. We use the convention that for higher points, dip will be negative.
if (goal[2] > height)
  dip = - dip;
Now we calculate the rotation the arm needs to do to align with the object. This can be in four different quadrants, and the calculation of the angle depends on which quadrant we are in.
//calculate the angle in which the first joint should turn
if (goal[1] > 0)
  pos[0] = atan(goal[0]/goal[1]);
else if (goal[0] < 0)
  pos[0] = atan(goal[0]/goal[1]) - PI;
else
  pos[0] = atan(goal[0]/goal[1]) + PI;
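As a side note, the standard atan2 function computes the same angle, handles all four quadrants automatically, and avoids the division by zero when goal[1] is 0:

pos[0] = atan2(goal[0], goal[1]);  // same angle as the case distinction above, without the division by zero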
The next joint is the first lift joint, which is responsible for the dip and for the angle alpha in the triangle.
double alpha = acos((link_a * link_a + distance * distance - link_bc * link_bc) / (2 * link_a * distance));
pos[1] = alpha - dip;
The second lift joint needs to make an angle of pi minus the angle beta in the triangle we are making.
double beta = acos((link_a * link_a + link_bc * link_bc - distance * distance)/(2 * link_a * link_bc));
pos[2] = - (PI - beta);
Because the end effector is at a 90 degree angle to the rest of the arm, we need to correct for this.
pos[3] = atan(link_d/link_c);
We are not using the final rotation joint, so we make sure it is in the position we expect.
pos[4] = 0;
Now that all the joint angles are calculated, the message can be sent using the following code:
const size_t NUM_JOINTS = 5;
trajectory_msgs::JointTrajectory trajectory;

for (ros::Rate r = ros::Rate(10); !got_joint_state_; r.sleep())
{
  ROS_DEBUG("waiting for joint state...");
  if (!ros::ok())
    exit(-1);
}

// First, the joint names, which apply to all waypoints
trajectory.joint_names = joint_names_;
trajectory.points.resize(2);

// trajectory point:
int ind = 0;
trajectory.points[ind].time_from_start = ros::Duration(5 * ind);
trajectory.points[ind].positions.resize(NUM_JOINTS);
trajectory.points[ind].positions = current_joint_state_;
ind++;
trajectory.points[ind].time_from_start = ros::Duration(5 * ind);
trajectory.points[ind].positions = pos;

control_msgs::FollowJointTrajectoryGoal goal;
goal.trajectory = trajectory;
// When to start the trajectory: 1s from now
goal.trajectory.header.stamp = ros::Time::now() + ros::Duration(1.0);
traj_client_.sendGoal(goal);
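The traj_client_ used above is an actionlib client for the arm controller's FollowJointTrajectory action. A sketch of how it could be set up follows; the action name is an assumption based on the katana packages, so check with "rostopic list" which action topics your controller actually offers:

#include <actionlib/client/simple_action_client.h>
#include <control_msgs/FollowJointTrajectoryAction.h>

typedef actionlib::SimpleActionClient<control_msgs::FollowJointTrajectoryAction> TrajClient;

// ... in the node's setup code, after ros::init():
TrajClient traj_client_("katana_arm_controller/follow_joint_trajectory", true);
traj_client_.waitForServer();  // block until the arm controller's action server is available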
If we move in straight lines to the next object we will knock things over, so we need to put intermediate steps in between. Planning a full and smooth trajectory would be too difficult for our current purposes; instead we want to pick waypoints that steer well clear of any obstacle. This means that when we approach an object, we approach it from a distance over a straight line between the base of the arm and the object. When picking it up, we lift it higher than the top of the stack. After placing the object on the stack, we back away in a straight line towards the base of the arm, and then move on to the next object.
With all these components you can develop your code to make the arm localize and pick up an object. Unfortunately the physics engine is not optimized for grasping. When you try it, you will see that the gripper will either push the can over, or lift it up briefly, after which the can drops down. There are a couple of ways to work around these types of issues. One is to extend the models with appropriate friction models that help prevent the object from slipping. This is very tricky to get right, and beyond the scope of this tutorial. You are more likely to want an alternative solution that just works. The (albeit slightly hacky) way to deal with it is to attach the object to the arm as soon as two opposing forces are applied to the object from opposite sides.
Luckily someone (Jennifer Buehler) has written a ROS module for this; you can get it here (please do check the license against the policy of your organization before downloading): https://github.com/JenniferBuehler/gazebo-pkgs.git
Clone the repository into your catkin workspace, and then build your workspace again with catkin_make.
Extend katana_description/urdf/gazebo.urdf.xacro with:
<gazebo>
  <plugin name="gazebo_grasp_fix" filename="libgazebo_grasp_fix.so">
    <arm>
      <arm_name>katana</arm_name>
      <gripper_link>katana_r_finger_link</gripper_link>
      <gripper_link>katana_l_finger_link</gripper_link>
      <palm_link>katana_motor5_wrist_roll_link</palm_link>
    </arm>
    <forces_angle_tolerance>100</forces_angle_tolerance>
    <update_rate>4</update_rate>
    <grip_count_threshold>4</grip_count_threshold>
    <max_grip_count>8</max_grip_count>
    <release_tolerance>0.005</release_tolerance>
    <disable_collisions_on_attach>false</disable_collisions_on_attach>
    <contact_topic>__default_topic__</contact_topic>
  </plugin>
</gazebo>
This does the following:
arm_name: the name of the arm, used to group the links below
gripper_link: the gripper links that need to apply opposing forces on the object
palm_link: the link to which the grasped object will be attached
forces_angle_tolerance: the maximum angle (in degrees) between two contact forces for them to still count as opposing
update_rate: the rate at which the plugin checks the gripper contacts
grip_count_threshold: the number of consecutive checks in which opposing forces must be detected before the object is attached
release_tolerance: the distance the gripper links may move away from the object before it is released again
Robots and automated systems are increasingly present in industry and in our lives. In the next 5 years, the robotics market is expected to grow to over 200 billion dollars worldwide. Still, there are complex physical and computational problems that need to be solved.
Robots are complex to design and more tools are needed to make this process easier. The physical world requires real-time responses to sensor data. Where actuators are involved, you need to have a good understanding of the domain in which they are working to know how they behave. The challenges with experiments in the real world are that the equipment is expensive, and that there is a real risk of damaging the environment in which it is operating. Prototyping in simulation first seems like a logical step.
As we have seen in the previous blogs, simulation does give a nice behavioral view of the robot. However, like any model, it abstracts away from real-life details. This may cause the model to behave unexpectedly:
Or even in contradiction with the laws of physics:
This is in no way a criticism of Gazebo, just an illustration of why simulation is a tool that needs to be used with the appropriate amount of consideration. Every simulator is an abstraction of reality and therefore leaves out some of the details.
The simulation gap is a well-studied phenomenon, and what it illustrates is that hardware studies are still vital to get a real world understanding of what happens. This is again not a shortcoming of the simulator, and should come as no surprise to anyone that works with simulators. Both noise modelling in the simulation, as well as a close one-to-one mapping to a real-world platform, are vital for predictable real world behavior.
Even though the physics simulator is not perfect, having a physics view of the world is critical for developing software for robotic systems. Hardware and software can be co-designed, and benefit from findings during design time on either end.
A question that remains for future investigation is: Will the simulation become more accurate if the compute platform is more accurately modelled? Currently all software development is done on a desktop which runs the simulator. This is in no way a realistic representation of the platform. Imagine that we link the physics simulator with an emulator or a functional simulator and synchronize the time. How much would that help to close the simulation gap? While this will not fully close the gap, it may enable more real time experiments with prototype compute solutions. This in turn will help the design process, to make a fully integrated platform.
Having this will also allow developers to work on software updates for existing platforms and enable them to do extensive testing. In a way this argument resonates well with efforts that have happened in the industry before: smartphone software development would not have been where it is now without emulators. A similar solution is needed for devices operating in the physical world. For code cannot be conned, just like physics cannot be fooled.