<h1 id="an-example-to-use-message_filters-in-ros"><strong>An example to use message_filters in ROS</strong></h1>
<p>As quoted in the ROS Wiki: “<strong>message_filters</strong>: <em>a set of message filters which take in messages and may output those messages at a later time</em>, <em>based on the conditions that filter needs met</em>.” <a href="http://wiki.ros.org/message_filters">ROS Wiki message_filters</a></p>
<p>I assume that you are somewhat familiar with ROS (Robot Operating System) and that you know how to create a ROS package, build it, and run its nodes.</p>
<h2 id="system-details-are">System details are:</h2>
<ul>
<li>Ubuntu 16.04</li>
<li>ROS Kinetic</li>
<li>Code in C++</li>
</ul>
<p>This blog post explains a simple example of using message_filters in ROS, specifically the <strong>Policy-Based Synchronizer</strong>, which synchronizes two nodes based on the <strong>Approximate Time Policy</strong>. The test workspace has a package named <code class="language-plaintext highlighter-rouge">learn_msg_filter</code> with three cpp files.</p>
<ul>
<li>firstNode.cpp - publisher</li>
<li>secondNode.cpp - publisher</li>
<li>combinedNode.cpp - message_filter subscriber</li>
</ul>
<p>For those who want to go straight to the code, visit my <a href="https://github.com/aravindk2604/test_ws">git repo</a>.</p>
<p>The intention here is to experiment with custom messages in ROS and to use <code class="language-plaintext highlighter-rouge">std_msgs/Header</code> together with <code class="language-plaintext highlighter-rouge">message_filters</code> to sync two ROS nodes that run at different frequencies (5 Hz and 20 Hz).</p>
<p>The steps to follow are:</p>
<ul>
<li>Create a ROS workspace, a package (eg. <em>learn_msg_filter</em>)</li>
<li>Create a custom msg, for instance <em>NewString.msg</em></li>
<li>Write two separate nodes that publish data</li>
<li>Write one node that subscribes to the two publishers, using message_filters</li>
</ul>
<h2 id="create-a-custom-msg">Create a custom msg</h2>
<p>After you create a ROS workspace and a package, you start by creating your own msg. If you are not aware of how to create a custom message in ROS then you can learn from here - <a href="http://wiki.ros.org/ROS/Tutorials/CreatingMsgAndSrv">How to create msg in ROS</a>.</p>
<p>The contents of <em>NewString.msg</em> are:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>std_msgs/Header header
string st
</code></pre></div></div>
<blockquote>
<p>The <code class="language-plaintext highlighter-rouge">std_msgs/Header</code> is the datatype that contains the timestamp and is quite important for <em>message_filters</em>.</p>
</blockquote>
<p>The contents of <code class="language-plaintext highlighter-rouge">std_msgs/Header</code> are:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>uint32 seq
time stamp
string frame_id
</code></pre></div></div>
<p>The <em>message_filters</em> check the <strong>stamp</strong> field, a timestamp that consists of seconds and nanoseconds. If the stamps of two incoming messages are equal, or approximately equal according to the Approximate Time Policy, the registered callback is executed, which implies that the nodes in question are in sync. This will become clearer when I explain the code.</p>
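<p>To make the “approximately equal” idea concrete outside of ROS, here is a pure-Python sketch. The function name and tolerance value are my own, and the real ApproximateTime policy uses a more sophisticated adaptive algorithm; this only illustrates the core idea of pairing each stamp from a 5 Hz stream with the closest stamp from a 20 Hz stream.</p>

```python
# Illustration only: pair each message stamp from a slow (5 Hz) stream
# with the closest-in-time stamp from a fast (20 Hz) stream.  The callback
# fires only for pairs whose stamps are within the tolerance.

def pair_by_closest_stamp(slow, fast, tolerance):
    """slow, fast: sorted lists of timestamps in seconds."""
    pairs = []
    for t in slow:
        best = min(fast, key=lambda u: abs(u - t))  # nearest fast stamp
        if abs(best - t) <= tolerance:              # "approximately equal"
            pairs.append((t, best))
    return pairs

slow = [0.0, 0.2, 0.4]                # 5 Hz stamps
fast = [i * 0.05 for i in range(12)]  # 20 Hz stamps
print(pair_by_closest_stamp(slow, fast, tolerance=0.03))
```

<p>With these streams every slow stamp finds an exact partner; shrinking the tolerance or jittering the fast stamps would drop pairs, which mirrors how unsynchronized nodes simply never trigger the callback.</p>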
<p>Assuming you created your own message, the <em>catkin_make</em> command will have generated a header file at <em>~/your_workspace/devel/include/learn_msg_filter/NewString.h</em>. This is the file to include in the nodes that you create.</p>
<h2 id="write-two-publisher-nodes">Write two publisher nodes</h2>
<ul>
<li>The first node publishes the string <em>“hello ”</em> at 5 Hz and the second node publishes the string <em>“world!”</em> at 20 Hz. These strings are part of the <em>NewString.msg</em> that I created.</li>
</ul>
<p>The <em>header</em> part of this custom msg has three fields, as mentioned above.</p>
<ul>
<li>The <em>uint32 seq</em> is auto-generated by ROS and is a continuously increasing number.</li>
<li>The <em>stamp</em> is assigned a value using <strong>ros::Time::now()</strong>, which fills in the <em>seconds</em> and <em>nanoseconds</em>.</li>
<li>The <em>frame_id</em> was arbitrarily named <strong>/myworld</strong> for the first node and <strong>/robot</strong> for the second node.</li>
</ul>
<p><strong>message_filters</strong> is used extensively with image and point cloud data, and the official ROS Wiki describes a similar example. Thus the <em>frame_id</em> in the <em>header</em> datatype often gets names like <em>/world_frame</em>, <em>/robot_frame</em>, <em>/camera_frame</em>, and so on.</p>
<h2 id="first-node">First node</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#include "ros/ros.h"
#include <sstream>
#include "learn_msg_filter/NewString.h"

int main(int argc, char** argv) {
    ros::init(argc, argv, "firstNode");
    ros::NodeHandle nh;
    ROS_INFO_STREAM("First node started.");

    // Publish the custom message on the "chatter" topic, queue size 5
    ros::Publisher pub = nh.advertise<learn_msg_filter::NewString>("chatter", 5);
    ros::Rate loop_rate(5);  // 5 Hz

    while (ros::ok()) {
        learn_msg_filter::NewString msg;
        msg.header.stamp = ros::Time::now();  // timestamp used by message_filters
        msg.header.frame_id = "/myworld";

        std::stringstream ss;
        ss << "hello ";
        msg.st = ss.str();

        pub.publish(msg);
        ros::spinOnce();
        loop_rate.sleep();
    }
    return 0;
}
</code></pre></div></div>
<blockquote>
<p>The important thing to note is that I have included the header file of the custom message <strong>learn_msg_filter/NewString.h</strong> so that I can use it to declare a custom variable <strong>learn_msg_filter::NewString msg</strong>.</p>
</blockquote>
<h2 id="second-node">Second node</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#include "ros/ros.h"
#include <sstream>
#include "learn_msg_filter/NewString.h"

int main(int argc, char** argv) {
    ros::init(argc, argv, "secondNode");
    ros::NodeHandle nh;
    ROS_INFO_STREAM("Second Node started");

    // Publish the custom message on the "anotherChatter" topic, queue size 5
    ros::Publisher pub = nh.advertise<learn_msg_filter::NewString>("anotherChatter", 5);
    ros::Rate loop_rate(20);  // 20 Hz

    while (ros::ok()) {
        learn_msg_filter::NewString msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "/robot";

        std::stringstream ss;
        ss << "world!";
        msg.st = ss.str();

        pub.publish(msg);
        ros::spinOnce();
        loop_rate.sleep();
    }
    return 0;
}
</code></pre></div></div>
<h2 id="combined-node">Combined node</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#include "ros/ros.h"
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include "learn_msg_filter/NewString.h"
#include <std_msgs/String.h>

using namespace message_filters;

// Called only when the two subscribed messages have approximately equal stamps
void callback(const learn_msg_filter::NewString::ConstPtr& f1,
              const learn_msg_filter::NewString::ConstPtr& s1) {
    std_msgs::String out_String;
    out_String.data = f1->st + s1->st;  // "hello " + "world!"
    ROS_INFO_STREAM(out_String);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "combinedNode");
    ros::NodeHandle nh;

    // message_filters subscribers for the two publisher topics
    message_filters::Subscriber<learn_msg_filter::NewString> f_sub(nh, "chatter", 1);
    message_filters::Subscriber<learn_msg_filter::NewString> s_sub(nh, "anotherChatter", 1);

    // Approximate Time Policy with a queue size of 10
    typedef sync_policies::ApproximateTime<learn_msg_filter::NewString, learn_msg_filter::NewString> MySyncPolicy;
    Synchronizer<MySyncPolicy> sync(MySyncPolicy(10), f_sub, s_sub);
    sync.registerCallback(boost::bind(&callback, _1, _2));

    ROS_INFO_STREAM("checking ...");
    ros::spin();
    return 0;
}
</code></pre></div></div>
<p>Please note that the <em>Subscriber</em> here is a class from <strong>message_filters</strong>, templated on the custom message type <em>learn_msg_filter::NewString</em>. This code subscribes to the two publishers, and the <em>ApproximateTime</em> policy is used to synchronize them. The <em>ExactTime</em> policy doesn’t work here because it requires the timestamps of the messages to match exactly. Since we deliberately publish the data at different rates, 5 Hz and 20 Hz, the <em>ApproximateTime</em> policy was used as a demonstration.</p>
<p>The <strong>callback</strong> function simply concatenates the string data from the two subscribed topics and outputs the result as confirmation that the two publisher nodes are synced.</p>
<h2 id="run-the-code">Run the Code</h2>
<h3 id="terminal-1">Terminal 1</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>roscore
</code></pre></div></div>
<h3 id="terminal-2">Terminal 2</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone git@github.com:aravindk2604/test_ws.git
cd ~/test_ws/
catkin_make
source devel/setup.bash
roslaunch learn_msg_filter combined.launch
</code></pre></div></div>
<p>Output</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SUMMARY
PARAMETERS
* /rosdistro: kinetic
* /rosversion: 1.12.13
NODES
/
firstNode (learn_msg_filter/firstNode)
secondNode (learn_msg_filter/secondNode)
ROS_MASTER_URI=http://192.xxx.x.xxx:11311
process[firstNode-1]: started with pid [7957]
process[secondNode-2]: started with pid [7958]
</code></pre></div></div>
<h3 id="terminal-3">Terminal 3</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd ~/test_ws/
source devel/setup.bash
rosrun learn_msg_filter combinedNode
</code></pre></div></div>
<p>Output</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[ INFO] [1531778649.755397007]: checking ...
[ INFO] [1531778650.233821710]: data: hello world!
[ INFO] [1531778650.243030068]: data: hello world!
[ INFO] [1531778650.443058176]: data: hello world!
[ INFO] [1531778650.643062540]: data: hello world!
[ INFO] [1531778650.842686741]: data: hello world!
[ INFO] [1531778651.043033154]: data: hello world!
[ INFO] [1531778651.243016201]: data: hello world!
[ INFO] [1531778651.443006341]: data: hello world!
</code></pre></div></div>
<p>Here is the output from the two publisher nodes:</p>
<h3 id="first-node-output">First node output</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>header:
seq: 1800
stamp:
secs: 1531779001
nsecs: 433235260
frame_id: "/myworld"
st: "hello "
---
header:
seq: 1801
stamp:
secs: 1531779001
nsecs: 633267250
frame_id: "/myworld"
st: "hello "
---
header:
seq: 1802
stamp:
secs: 1531779001
nsecs: 833271380
frame_id: "/myworld"
st: "hello "
---
header:
seq: 1803
stamp:
secs: 1531779002
nsecs: 33272517
frame_id: "/myworld"
st: "hello "
</code></pre></div></div>
<h3 id="second-node-output">Second node output</h3>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>header:
seq: 9341
stamp:
secs: 1531779108
nsecs: 492513964
frame_id: "/robot"
st: "world!"
---
header:
seq: 9342
stamp:
secs: 1531779108
nsecs: 542521933
frame_id: "/robot"
st: "world!"
---
header:
seq: 9343
stamp:
secs: 1531779108
nsecs: 592509847
frame_id: "/robot"
st: "world!"
---
header:
seq: 9344
stamp:
secs: 1531779108
nsecs: 642521959
frame_id: "/robot"
st: "world!"
---
header:
seq: 9345
stamp:
secs: 1531779108
nsecs: 692493104
frame_id: "/robot"
st: "world!"
---
header:
seq: 9346
stamp:
secs: 1531779108
nsecs: 742521642
frame_id: "/robot"
st: "world!"
---
header:
seq: 9347
stamp:
secs: 1531779108
nsecs: 792529680
frame_id: "/robot"
st: "world!"
---
header:
seq: 9348
stamp:
secs: 1531779108
nsecs: 842525970
frame_id: "/robot"
st: "world!"
</code></pre></div></div>
<blockquote>
<p>The outputs from the two nodes were recorded at different times, so their timestamps differ greatly. Logging the outputs of both nodes simultaneously and then comparing them from the log file would show the near-matching timestamps between the two nodes.</p>
<p>Also note that the second node clearly produces output at a higher rate because it runs at 20 Hz, whereas the first node runs at 5 Hz.</p>
</blockquote>
<p>Suggestions to improve this example are most welcome. This was an attempt to understand and use message_filters with a datatype different from the commonly used <em>Image</em> and <em>CameraInfo</em> topics under <em>sensor_msgs</em>, as given in the official ROS Wiki page.</p>
<h1 id="finding-lane-lines-on-the-road"><strong>Finding Lane Lines on the Road</strong></h1>
<p>Self-driving vehicle technology is inevitable, and many players are racing to reach the different levels of autonomy. To be part of this technological disruption, there are a few fields one can focus on to excel: computer vision, sensor fusion, localization, planning, and controls.</p>
<hr />
<h3 id="reflection">Reflection</h3>
<p>This first project of Udacity’s SDCND is to recognize lane lines on a freeway in a video and overlay them with a continuous line (red or any preferred color) using computer vision techniques, applied with the Open Source Computer Vision (OpenCV) library. The project was written in Python.</p>
<p>This project gave me good exposure to the basics of computer vision and, more importantly, to applying them in a real project. Below, I explain in detail the steps I took to complete it.</p>
<h3 id="the-image-processing-pipeline">The Image processing pipeline</h3>
<p>The project provides a template that guides one toward this goal step by step. Within this template I built an image processing pipeline:</p>
<ul>
<li>Import a sample image and pre-process it</li>
<li>Pre-processing includes - convert to grayscale, apply gaussian smoothing</li>
<li>Apply Canny transform as an edge detection technique</li>
<li>Identify region of interest in the edge detected image</li>
<li>Apply Probabilistic Hough Transform</li>
<li>Overlay continuous lines on the original image</li>
</ul>
<p>Why pre-process the image?<br />
These images can be assumed to come from a camera mounted on the vehicle’s dashboard, which is susceptible to physical perturbations and other kinds of image noise. Pre-processing improves the fidelity of the lane-finding algorithm’s output.</p>
<p>The sample image after applying grayscale filtering is shown below.</p>
<p><img src="https://raw.githubusercontent.com/aravindk2604/aravindk2604.github.io/master/assets/images/blog_imgs/grayscale.png" alt="" /></p>
<p>Grayscale filtering helps enhance the gradient changes in the image; in our case the lane lines are white markings on the dark road surface.</p>
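<p>Grayscale conversion itself is just a weighted sum of the color channels. Here is a minimal sketch of the arithmetic, using the standard luma weights that OpenCV’s <em>COLOR_RGB2GRAY</em> applies; the pixel values are illustrative:</p>

```python
# Grayscale = weighted sum of R, G, B using the standard luma weights
# (the same formula cv2.cvtColor applies with COLOR_RGB2GRAY).

def to_gray(pixel):
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

white = (255, 255, 255)   # a white lane marking stays bright
road = (60, 60, 65)       # dark asphalt stays dark

print(round(to_gray(white)))  # -> 255
print(round(to_gray(road)))   # -> 61
```

<p>Because the weights sum to 1, the bright/dark contrast between the markings and the asphalt survives the conversion, which is exactly what the later edge detection relies on.</p>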
<p>Next, Gaussian smoothing is applied to reduce image noise. I tried different kernel sizes and learned that this kernel size also affects the parameters used later in the pipeline.</p>
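<p>The kernel behind Gaussian smoothing can be sketched in a few lines. This builds a normalized 1-D Gaussian (the 2-D kernel convolved over the image is its outer product); the size and sigma values here are illustrative:</p>

```python
import math

def gaussian_kernel(size, sigma):
    """Normalized 1-D Gaussian kernel; size should be odd."""
    half = size // 2
    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
               for x in range(-half, half + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # weights sum to 1

k = gaussian_kernel(5, sigma=1.0)
print([round(w, 3) for w in k])  # symmetric, peaked at the centre
```

<p>A larger size or sigma spreads the weight further from the centre, which is why changing the kernel size changes how much detail survives into the Canny step.</p>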
<p>To detect edges in the image, the Canny transform is applied; which pixels count as edges is determined by two parameters, the low and high threshold values. Here is the output after applying the Canny transform to the Gaussian-smoothed image.</p>
<p><img src="https://raw.githubusercontent.com/aravindk2604/aravindk2604.github.io/master/assets/images/blog_imgs/canny_transform.png" alt="Canny Transform" /></p>
<p>Now edges are detected across the whole image, but we are concerned only with the lane lines. So a region-of-interest mask (typically a polygon) is applied to keep only the area containing them, and the probabilistic Hough transform then extracts the line segments.</p>
<p><img src="https://raw.githubusercontent.com/aravindk2604/aravindk2604.github.io/master/assets/images/blog_imgs/hough_transform.png" alt="Hough Transform" /></p>
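<p>The region-of-interest masking step can be sketched without OpenCV: keep a pixel only if it falls inside a convex polygon. This toy version uses a cross-product sign test; in the real pipeline <em>cv2.fillPoly</em> and <em>cv2.bitwise_and</em> do this for the whole frame, and the polygon coordinates below are made up:</p>

```python
# Sketch of region-of-interest masking: a pixel is kept only if it lies
# inside a convex polygon, tested via the sign of the cross product of
# each edge with the vector to the point.

def inside_convex(poly, px, py):
    """poly: convex polygon vertices in order; True if (px, py) is inside."""
    sign = 0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # point is on the other side of this edge
    return True

# Hypothetical trapezoidal ROI for a 960x540 frame (y grows downward).
roi = [(100, 540), (450, 320), (510, 320), (900, 540)]
print(inside_convex(roi, 480, 400))  # -> True  (near the lane centre)
print(inside_convex(roi, 50, 100))   # -> False (sky / off-road)
```

<p>A trapezoid works well here because the lane converges toward the horizon, so the mask can be narrow at the top and wide at the bottom of the frame.</p>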
<h3 id="the-draw_lines-function">The draw_lines() function</h3>
<p>The draw_lines() function originally draws lines based on the Canny edge-detected image, but a few techniques are needed to actually extrapolate the lines. I modified it and named the result draw_lines_extrapolated().</p>
<h4 id="decide-based-on-slope-value">Decide based on Slope value</h4>
<p>The idea is to categorize the detected edges into two groups, the right and left lines. The slopes of these two lines differ in sign and are a good deciding factor: in image coordinates the slope is negative for the left lane line and positive for the right.
The slope and intercept of each line detected by the Hough transform are calculated using the <code class="language-plaintext highlighter-rouge">np.polyfit()</code> function.</p>
<h4 id="calculate-x1-and-x2-points">Calculate x1 and x2 points</h4>
<p>The x1 and x2 points for both the left and right lane lines are calculated from the straight-line equation <em>y = mx + c</em>. They are stored in lists, and a mean is later calculated using <code class="language-plaintext highlighter-rouge">np.nanmean()</code>. The y1 and y2 points are simply the two limits of the polygon mask used earlier in the image processing pipeline. Finally, there are four points, <em>(x1, y1)</em> and <em>(x2, y2)</em>, for each lane line, eight points in total to draw the two extrapolated lines for every image.</p>
<h4 id="draw-the-lines">Draw the lines</h4>
<p>I used the <code class="language-plaintext highlighter-rouge">cv2.line()</code> function to draw the final extrapolated right and left lines. This worked perfectly for every image I tested, but it had some glitches when I tested it on the video.</p>
<p>The final image that confirms the image processing pipeline.</p>
<p><img src="https://raw.githubusercontent.com/aravindk2604/aravindk2604.github.io/master/assets/images/blog_imgs/final_lane_marking.png" alt="Final Lane Markings" /></p>
<h4 id="change-parameters-for-the-video-processing">Change parameters for the video processing</h4>
<p>The probabilistic Hough transform parameters that work for a single image differ from those needed for a video.</p>
<p>For 1 image: <code class="language-plaintext highlighter-rouge">houghTransform(roi_img, rho=2, theta=(np.pi/180), threshold=18, min_line_len=50, max_line_gap=4)</code></p>
<p>For the video, the parameters above did not work: the extrapolated lines overlaid on the video were quite shaky and inconsistent, even after taking a mean of all the x points used to draw each line. I tried various values and arrived at the following.</p>
<p>For the video: <code class="language-plaintext highlighter-rouge">houghTransform(roi_img, rho=2, theta=(np.pi/180), threshold=55, min_line_len=40, max_line_gap=100)</code></p>
<p>The <strong>threshold, min_line_len and max_line_gap</strong> parameters are now different. I obtained these values after a lot of trial and error, and they resulted in a stable, consistent overlay of the extrapolated lane lines.</p>
<h3 id="2-identify-potential-shortcomings-with-your-current-pipeline">2. Identify potential shortcomings with your current pipeline</h3>
<ul>
<li>White and yellow lines are detected in the first two videos but not in the challenge video, where tree shadows obscure the clarity of the lines.</li>
<li>Another potential shortcoming is that the parameters for detecting and drawing lane lines on an image are not the same as those for a video. This tuning can only be done by trial and error, which is certainly not efficient when it has to work on a real vehicle.</li>
</ul>
<h3 id="3-suggest-possible-improvements-to-your-pipeline">3. Suggest possible improvements to your pipeline</h3>
<p>So, the current pipeline works for the white-line and yellow-line videos but not on the challenge video. This is due to the varying lighting conditions (tree shadows) seen by the camera in the challenge video.</p>
<p>A possible improvement is to check different color spaces such as HSV (Hue, Saturation, Value) and HSL (Hue, Saturation, Lightness) and use them to filter the yellow and white lines instead of grayscaling.</p>