This week’s work was mainly centered on setting up the AtlasCar and on gathering data, in the form of rosbags, for testing.

Although launching the different lasers was relatively easy, installing the cameras and setting up the GPS and IMU combo required more work and troubleshooting, the latter mainly because of the transition from ROS Kinetic to ROS Melodic.

To record the rosbags, two places were chosen according to their different features: one with straight road curbs and lines (Parque de Exposições) and one with inverted curbs and a highway (Salinas).

Figures 1, 2 and 3 show examples of the performance of different edge detection filters in an inverted road curb situation. The algorithms correctly identify the curbs and road limits.

In the highway situation, as predicted, the algorithm performs poorly due to the lack of laser reading information at high speeds.

Although this week was mainly dedicated to writing, a few improvements were made to the grid thresholds, along with some corrections to the gradient, Kirsch and Prewitt filters, to obtain the best grids possible.

In addition to this, some tests were performed on the quantitative evaluation using different cell types, in an attempt to produce a well-defined ROC curve. The ground truth evaluation distance of each frame was also decreased from 40 m to 20 m, as the limits become less defined further away from the car.

Besides writing, this week was focused on improving the developed quantitative evaluation tool.

First, the ground truth grid was corrected to the right orientation. Then the calculated limits had to be cleaned so that only the limits closest to the car were kept. Since the main goal is to obtain the navigable limits of the road, everything else is unnecessary information and was eliminated. This makes it possible to overlay the ground truth grid with the new cleaned grid and perform a preliminary qualitative evaluation, as seen in vid. 1 (0:40 min).

In general, the algorithm seems to obtain good results: the calculated limits mostly correspond to the real ones.

To perform a quantitative evaluation, the previously mentioned statistical measures were calculated. As discussed, these measures depend on the true and false positives and negatives. In this case, the unit considered was a grid square.

The results, for each frame within the ground truth extension, were exported to a CSV file that can be loaded into a spreadsheet for further calculations.
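The counting and export steps can be sketched as follows (an illustrative Python sketch, not the actual implementation; it assumes the detected and ground truth grids are binary 2-D lists of equal size):

```python
import csv
import io


def confusion_counts(detected, truth):
    """Count cell-wise TP/FP/TN/FN between two binary grids of equal shape."""
    tp = fp = tn = fn = 0
    for det_row, gt_row in zip(detected, truth):
        for d, g in zip(det_row, gt_row):
            if d and g:
                tp += 1
            elif d and not g:
                fp += 1
            elif g:
                fn += 1
            else:
                tn += 1
    return tp, fp, tn, fn


def export_csv(frames, out):
    """Write one row of counts per frame; `frames` is a list of (detected, truth)."""
    writer = csv.writer(out)
    writer.writerow(["frame", "TP", "FP", "TN", "FN"])
    for i, (det, gt) in enumerate(frames):
        writer.writerow([i, *confusion_counts(det, gt)])
```

The resulting file has one line per frame, which maps directly onto the spreadsheet workflow described above.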

Fig. 1 shows a ROC curve obtained from the Laplace filter grid.

This week’s work resulted in a fully developed occupancy grid with the ground truth limits (vid. 1). These limits were imported from a .kml file created in Google Earth and make it possible to calculate the efficiency of the detected limits and to compare detection methods against each other.

To evaluate this efficiency, statistical measures are to be calculated, as described in previous posts. These measures will be calculated frame by frame and saved to a .csv file. This file can be imported into Excel to plot graphs and evaluate the influence of the several factors to be studied.

As the cells are not very precise, given that the cell resolution now used is 40 cm, a tolerance of one cell on each side will be given.
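One simple way to apply such a tolerance is to dilate the ground truth limits by one cell before comparing grids. A Python sketch of that idea (the binary-grid representation is an assumption made for illustration, not taken from the actual code):

```python
def dilate_once(grid):
    """Grow every occupied cell by one cell in its 8-neighbourhood,
    giving a one-cell tolerance when matching against detected limits."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out
```

A detected limit then counts as a true positive if it falls anywhere inside the dilated ground truth band.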

This week’s work was still centered on the development of a tool to perform a quantitative evaluation of the developed method. This proved to be a bigger challenge than anticipated; still, the most plausible solution seems to be the conversion of a line drawn in Google Earth into an occupancy grid. The data on this grid, corresponding to the road limits ground truth, will then be compared with the data on the detected limits grids, and the statistical measures can be evaluated.

At this point, the program is able to generate .kml files with the path traveled, read .kml files produced by Google Earth, extract the coordinates, and calculate their distance to the car coordinates.

However, this fails to orient the frames correctly: the world frame, in which latitude and longitude are defined, is not aligned with the moving_axis frame, leaving the ground truth limits misaligned and in the wrong orientation.

To correct this problem, a possible solution is to insert the points into a new point cloud (with z = 0) and use pcl_ros::transformPointCloud to transform the points between frames (wgs84 to moving_axis). This would return a new point cloud with the necessary corrections to align the frames, allowing the points to be correctly placed in the occupancy grid.
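Before any frame transform, the latitude/longitude pairs have to be expressed in meters relative to the car. A rough Python sketch of that projection (an equirectangular approximation; the yaw sign convention here is an assumption, and the actual code would rely on pcl_ros::transformPointCloud and the TF tree instead):

```python
import math


def wgs84_to_local(lat, lon, lat0, lon0, yaw0):
    """Project a WGS-84 point to metres relative to a reference pose
    (lat0, lon0, yaw0), with yaw0 in radians. Equirectangular approximation,
    accurate enough over a few hundred metres."""
    R = 6371000.0  # mean Earth radius in metres
    east = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * R
    # Rotate the east/north offset into the car-aligned frame.
    x = math.cos(yaw0) * east + math.sin(yaw0) * north
    y = -math.sin(yaw0) * east + math.cos(yaw0) * north
    return x, y
```

With yaw handled by TF in practice, only the degrees-to-meters scaling above is really essential.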

This week, the presented challenge was to develop a tool/application to help identify the road limits’ ground truth and calculate the statistical measures.

The initial idea was to develop a GTK application with three windows: a street map to select the road limits manually, an image of the limits detected by the chosen algorithm, and the image provided by the car camera. This idea is now on standby due to difficulties migrating the street map to an OpenCV or GTK environment.

The option then became to draw the limits in the Google Earth application and save them in a .kml file. This file would be loaded and read by the code and drawn in an occupancy grid, which could easily be compared with the other grids containing the calculated limits. The main challenge in this approach is the conversion of the latitude and longitude coordinates in the generated .kml file to meters.
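Reading the coordinates out of the .kml file is straightforward, since KML is plain XML. A minimal Python sketch (it assumes simple LineString-style placemarks; note that KML stores coordinates in lon,lat[,alt] order):

```python
import xml.etree.ElementTree as ET

# KML 2.2 default namespace, needed to match tags with ElementTree.
KML_NS = "{http://www.opengis.net/kml/2.2}"


def read_kml_coords(kml_text):
    """Extract (lat, lon) pairs from every <coordinates> tag in a KML document."""
    root = ET.fromstring(kml_text)
    points = []
    for node in root.iter(KML_NS + "coordinates"):
        for triple in node.text.split():
            lon, lat = triple.split(",")[:2]  # KML order is lon,lat[,alt]
            points.append((float(lat), float(lon)))
    return points
```

The returned pairs can then be projected to meters and rasterised into the ground truth grid.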

This week, a .kml file was also created with the data from the car's GPS antenna. This makes it possible to visualize the traveled path in Google Earth and later draw the road limits. Mapviz was also used to visualize the created grids and images in the correct place according to the GPS data.
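Generating such a file amounts to serialising the sequence of GPS fixes as a KML LineString. A minimal Python sketch (the document layout follows the KML 2.2 schema; the function and placemark name are illustrative, not from the actual code):

```python
def gps_path_to_kml(fixes, name="AtlasCar path"):
    """Serialise a list of (lat, lon) GPS fixes as a KML LineString document."""
    # KML expects lon,lat[,alt]; altitude is fixed to 0 here.
    coords = " ".join(f"{lon},{lat},0" for lat, lon in fixes)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f'<Placemark><name>{name}</name>'
        f'<LineString><coordinates>{coords}</coordinates></LineString>'
        '</Placemark></Document></kml>'
    )
```

Opening the resulting file in Google Earth draws the traveled path, ready to trace the road limits over it.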

This week started with the conversion of the density grid to an OpenCV image. This was a big addition because it opened a lot of new doors in edge detection filters, like Canny and LoG.

OpenCV images are easily manipulated and can be converted back to occupancy grids using thresholds to eliminate low-density zones that do not correspond to road limits.
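The round trip can be sketched as follows (illustrative Python; the real code works on OpenCV matrices, and the 0/100 values below assume the ROS nav_msgs/OccupancyGrid convention):

```python
def density_to_image(grid, max_density):
    """Scale a density grid to 8-bit intensities, the analogue of an
    OpenCV grayscale image."""
    return [[min(255, int(255 * d / max_density)) for d in row] for row in grid]


def image_to_occupancy(img, thresh):
    """Binarise a (possibly filtered) image back into an occupancy grid,
    dropping low-density zones below the threshold."""
    return [[100 if px >= thresh else 0 for px in row] for row in img]
```

Any OpenCV filter (Canny, LoG, Sobel, ...) can be slotted in between the two conversions.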

Video 1 shows the results of multiple edge detection filters, as well as a study of the influence of cell resolution on the density grid. After multiple tests, the grid resolution was changed from 0.2 m/cell to 0.4 m/cell. Video 2 shows the results obtained after this revision.

With the need to evaluate the quality of the identified limits, a literature review was conducted. The most common way found to evaluate the performance of a road limits detection algorithm is to calculate the four statistical quantifiers presented in figure 1, where the obtained results are matched against a ground truth of about 500-100 frames. For each algorithm, these indicators should be calculated in different situations, for example, on straight and curved roads, and in the presence of positive and negative road curbs.

Since there are no databases for this kind of work, road segments are to be selected and their road limits manually identified, creating a database for each described situation, to later calculate the indicators from the number of TP (true positives), FP (false positives), TN (true negatives) and FN (false negatives).
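The indicators themselves are simple ratios over the confusion counts. A generic Python sketch (the four quantifiers of figure 1 may differ from the ones chosen here, which are merely common picks in the detection literature):

```python
def quantifiers(tp, fp, tn, fn):
    """Four common detection-quality measures from a confusion matrix.
    Stand-ins for the quantifiers of figure 1, which may differ."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,  # true-positive rate
        "fpr": fp / (fp + tn) if fp + tn else 0.0,  # false-positive rate
    }
```

Recall and FPR are also the two axes of the ROC curves used in later posts.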

Although fully functional, the work developed in the previous weeks needed improvement, as the program used too many computational resources and became increasingly slow with the addition of new features.

With that in mind, the existing code was refactored to become faster and more efficient. The cube list markers were replaced with occupancy grid maps, which make it possible to define the probability of each grid square being an obstacle.

The grids are essentially defined by their height, width, and resolution, and can be placed in a specific frame, in this case the moving_axis frame, as previously decided. The occupancy maps provide better and faster visualization, making it much easier to see areas with higher densities or gradients, as visible in video 1.

Also to reduce the computational resources used, instead of counting how many points fall inside each square, which required N x M x np iterations (np being the number of points in the point cloud), the program now calculates, for each point, which square it belongs to. This requires only np iterations, a very significant decrease.
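The change can be sketched as follows (illustrative Python; the grid origin and the row/column convention are assumptions made for the sketch):

```python
def bin_points(points, origin, resolution, rows, cols):
    """Assign each 2-D point directly to its grid cell:
    O(np) instead of the O(rows * cols * np) scan over every cell."""
    counts = [[0] * cols for _ in range(rows)]
    ox, oy = origin
    for x, y in points:
        c = int((x - ox) / resolution)
        r = int((y - oy) / resolution)
        if 0 <= r < rows and 0 <= c < cols:
            counts[r][c] += 1
    return counts
```

Each point costs one division per axis and one increment, regardless of grid size.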

Video 1 shows that the gradient direction seems to have little significance in road limits detection, unlike the gradient magnitude and its components, where darker lines are clearly visible. It is also worth noting that the algorithm was developed to mark both positive and negative gradients as obstacles, meeting the initial goals.

Despite the good results obtained with the gradient filters, testing other solutions is important to select the best one. With that in mind, a new grid was created, calculating edges with a Sobel filter (fig. 1) applied to the gradient matrix. The results are visible in videos 2 and 3, which show the algorithm running with different parameters.

The Sobel filter is a good edge detection filter and, from the analysis of the video, it produces good results on the point cloud data, although the lines are more defined for cars and buildings than for road limits.
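For reference, a Sobel pass over a 2-D matrix boils down to two 3x3 kernel passes. A dependency-free Python sketch (valid-mode only; the actual implementation presumably relies on OpenCV):

```python
# Classic Sobel kernels for horizontal (X) and vertical (Y) intensity changes.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]


def convolve3x3(grid, kernel):
    """Slide a 3x3 kernel over a 2-D list (valid mode, cross-correlation)."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * (cols - 2) for _ in range(rows - 2)]
    for r in range(rows - 2):
        for c in range(cols - 2):
            out[r][c] = sum(kernel[i][j] * grid[r + i][c + j]
                            for i in range(3) for j in range(3))
    return out
```

Combining the absolute responses of both kernels gives the edge strength used to mark limit cells.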

Edit: Prewitt and Kirsch filters were also tested for point cloud density edge detection. Both filters show good results detecting road limits, but still identify the road itself as an obstacle in some cases, depending on road conditions, car velocity, and inclination. Video 4 shows the results obtained with a rosbag recorded around the University of Aveiro.

In the first part of the week, a method was developed to evaluate the point cloud density inside a cuboid of predefined size. This allowed creating a grid of cuboids, each with its own corresponding density. The purpose was to divide the space of interest into portions small enough to later evaluate density changes.

The first necessary step was to transform the point cloud to the moving_axis frame of the car, and then use the PCL CropBox tool to crop the point cloud to the defined cuboid volume and count the number of points in the output cloud.
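The counting step is conceptually just an axis-aligned box filter. An illustrative Python stand-in for the CropBox call (points as (x, y, z) tuples, an assumption for the sketch; the real code uses pcl::CropBox on PCL clouds):

```python
def cropbox_count(points, box_min, box_max):
    """Count the points inside an axis-aligned cuboid, the density measure
    computed per grid cuboid. box_min/box_max are (x, y, z) corners."""
    return sum(
        all(lo <= p[i] <= hi for i, (lo, hi) in enumerate(zip(box_min, box_max)))
        for p in points
    )
```

Running this once per cuboid gives the density grid; the faster per-point binning described in a later post avoids repeating the scan.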

Vid. 1 shows a dynamic use of this method. For a better visual outcome, each square of the grid has a color matching a range of densities, which already gives some understanding of where the low- and high-density zones are.

After the development of the density grid, similar gradient grids were implemented. For each cuboid, the vertical and horizontal gradients were calculated (Gx and Gy, in vid. 2), as well as the gradient magnitude and direction.

Vid. 2 shows a representation of the four gradient grids with the color bars created (fig. 1).
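The four grids can be derived from the density grid with finite differences. A Python sketch (the central-difference scheme is an assumption chosen for illustration; the actual differencing used may differ):

```python
import math


def gradient_grids(density):
    """Compute Gx, Gy, gradient magnitude and direction for every interior
    cell of a 2-D density grid, using central differences."""
    rows, cols = len(density), len(density[0])
    gx = [[0.0] * cols for _ in range(rows)]
    gy = [[0.0] * cols for _ in range(rows)]
    mag = [[0.0] * cols for _ in range(rows)]
    ang = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx[r][c] = (density[r][c + 1] - density[r][c - 1]) / 2.0
            gy[r][c] = (density[r + 1][c] - density[r - 1][c]) / 2.0
            mag[r][c] = math.hypot(gx[r][c], gy[r][c])
            ang[r][c] = math.atan2(gy[r][c], gx[r][c])
    return gx, gy, mag, ang
```

Each of the four returned grids corresponds to one of the panels shown in vid. 2.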

One of the main challenges presented in this dissertation is how to discover something that doesn’t exist. It was relatively easy to find where the biggest concentration of points was (corresponding to positive road curbs) but, as observed in fig. 1, when talking about negative road curbs there will be a “shadow” in the point cloud, that is, an “empty space”. So how will we identify the absence of points?

After discussing the best solution to this problem, the decision was to calculate the mean road plane, divide the space above it into small cuboids, find the point cloud density in each one, and compute the “negative” point cloud, that is, remove points in high-density zones and insert points in low-density zones, resulting in a cloud with points only in low (or zero) density zones.

Concatenating this point cloud with the one given by the road_reconstruction_node (the one that identifies positive obstacles) yields a point cloud whose points represent obstacles, that is, known areas where the vehicle cannot go.
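The “negative” cloud generation can be sketched as follows (illustrative Python; placing one point at each cell centre and the density threshold are assumptions, not details from the actual code):

```python
def negative_cloud(counts, origin, resolution, threshold=1):
    """Emit one 2-D point at the centre of every low-density cell,
    producing the 'negative' point cloud for empty (shadow) zones."""
    ox, oy = origin
    pts = []
    for r, row in enumerate(counts):
        for c, n in enumerate(row):
            if n < threshold:
                pts.append((ox + (c + 0.5) * resolution,
                            oy + (r + 0.5) * resolution))
    return pts
```

Concatenating these synthetic points with the positive-obstacle cloud gives the combined obstacle cloud described above.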

With that in mind, the first step was to calculate the mean road plane. For that, a RANSAC technique was applied to the raw cloud, resulting in a cloud with the inlier points and the respective plane coefficients. That cloud is represented by the black points in vid. 1. After that, a marker with a planar shape was created and fitted to the previously calculated plane coefficients, as shown in the following video. This made it possible to define the search area for the spatial partition feature.
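A bare-bones version of the RANSAC plane fit can be sketched in Python (the real code presumably uses PCL's sample-consensus tooling; the iteration count and inlier tolerance below are arbitrary choices for the sketch):

```python
import random


def plane_from_points(p1, p2, p3):
    """Plane coefficients (a, b, c, d), with unit normal, through three points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(x * x for x in n) ** 0.5
    if norm == 0:  # degenerate (collinear) sample
        return None
    a, b, c = (x / norm for x in n)
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d


def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Return (coefficients, inliers) of the best plane found by RANSAC."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        a, b, c, d = model
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers
```

The returned coefficients play the role of the plane coefficients used to place the planar marker.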

It was also discussed that the total accumulated point cloud of a certain path may provide interesting information, not only within the scope of this work but also for 3D road reconstruction from point clouds. So a feature was created that generates a point cloud to which new points, corresponding to the laser readings obtained as the car moves, are constantly added. This cloud is periodically saved in .pcd format and can be visualized in pcl_viewer.
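Saving a cloud in the ASCII flavour of the format is simple enough to sketch by hand (the real code would use PCL's PCD writer; the header below follows the PCD v0.7 layout):

```python
def save_pcd(points, path):
    """Write an ASCII PCD v0.7 file (fields x y z) readable by pcl_viewer."""
    header = (
        "# .PCD v0.7 - Point Cloud Data file format\n"
        "VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\nCOUNT 1 1 1\n"
        f"WIDTH {len(points)}\nHEIGHT 1\nVIEWPOINT 0 0 0 1 0 0 0\n"
        f"POINTS {len(points)}\nDATA ascii\n"
    )
    with open(path, "w") as f:
        f.write(header)
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```

Calling this periodically with the accumulated cloud reproduces the saving behaviour described above.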

Fig. 2 shows the comparison between the resulting point cloud and the corresponding real circuit, demonstrating the reliability of the obtained results.