
Software

We have developed all of our software from the ground up, focusing on three major aspects: modular infrastructure, computer vision robustness and real-time processing. The software is built on the Raspbian (Debian 7 Wheezy port) GNU/Linux operating system, and all custom software is written in C/C++.

ONBOARD PROCESSING

The software on the PERAL2013 runs on eight Raspberry Pi Model B Rev 2 single-board computers linked together as a distributed multiprocessing environment. Each Raspberry Pi has a Broadcom BCM2835 system on a chip (SoC), which includes a 700 MHz ARM1176JZF-S processor, a VideoCore IV GPU and 512 MB of RAM.

JDEROBOT

JDErobot is an open-source (GPL and LGPL) software development suite for robotics, home automation and computer vision applications, used in teaching and research. It is written in C and provides a component-based programming environment in which an application is made up of a collection of concurrent, asynchronous threads called schemas, each dynamically loaded into the application. The framework uses these schemas as the building blocks of robot applications: they are combined in dynamic hierarchies to unfold behaviours. We have developed drivers to support the USB servo controllers, the CMOS image sensor boards, the IMU and the depth (pressure) sensor.
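
JDErobot's real component API is richer than we can show here; as a rough sketch (with hypothetical names, not the actual JDErobot interfaces), a schema can be pictured as a periodic worker thread:

    // Rough sketch of a schema-style periodic thread (hypothetical names;
    // the real JDErobot API differs). Each schema iterates at its own rate.
    #include <atomic>
    #include <chrono>
    #include <thread>

    class Schema {
    public:
        explicit Schema(int periodMs) : periodMs_(periodMs), running_(false) {}
        virtual ~Schema() { stop(); }

        void start() {
            running_ = true;
            worker_ = std::thread([this] {
                while (running_) {
                    iteration();  // one perception/control step
                    std::this_thread::sleep_for(
                        std::chrono::milliseconds(periodMs_));
                }
            });
        }

        void stop() {
            running_ = false;
            if (worker_.joinable()) worker_.join();
        }

    protected:
        virtual void iteration() = 0;  // implemented by each concrete schema

    private:
        int periodMs_;
        std::atomic<bool> running_;
        std::thread worker_;
    };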

CONTROL SYSTEM

The control system of the Isaac Peral y Caballero combines a simple and effective finite state machine on top of classic Proportional-Integral-Derivative (PID) controllers. It has a multi-layered architecture in which the top layers use feedback provided by the lower layers, so the AUV can make decisions based on its environment.
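
As an illustration of the lowest layer, a minimal discrete PID step (a textbook sketch, not our exact onboard code or gains) can be written as:

    // Minimal discrete PID controller (illustrative sketch, not the exact
    // onboard implementation). One instance per controlled axis, e.g.
    // depth or heading.
    struct PID {
        double kp = 0.0, ki = 0.0, kd = 0.0;  // gains, tuned per axis
        double integral = 0.0;
        double prevError = 0.0;

        // error = setpoint - measurement; dt = elapsed time in seconds
        double step(double error, double dt) {
            integral += error * dt;
            double derivative = (error - prevError) / dt;
            prevError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };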

COMPUTER VISION

To achieve a robust image processing algorithm, it must not depend on colour and must be lightweight enough to run in real time. We therefore decided to avoid Hough transforms in favour of LDC, which stands for Line Detection algorithm using Contours.

In LDC we pre-process the input image with normalization, Gaussian smoothing, Laplace edge detection and thresholding. These steps produce a binary image representing the borders in the image. We then extract the consecutive boundary pixels of each component, which form the contours of the image. Contours are then divided into short segments, which are classified by orientation into 9 discrete categories. Finally, line segments are detected by finding consecutive sequences of segments with similar slope.
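
Assuming OpenCV for the basic image operations (an assumption; the onboard code differs in details and parameters), the pre-processing stage looks roughly like this:

    // Sketch of the LDC pre-processing stage with OpenCV (illustrative
    // parameters; the onboard implementation may differ in detail).
    #include <opencv2/opencv.hpp>

    cv::Mat preprocess(const cv::Mat& frame) {
        cv::Mat gray, norm, smooth, edges, binary;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::normalize(gray, norm, 0, 255, cv::NORM_MINMAX);    // normalization
        cv::GaussianBlur(norm, smooth, cv::Size(5, 5), 1.5);   // Gaussian smoothing
        cv::Laplacian(smooth, edges, CV_16S, 3);               // Laplace edge detection
        cv::convertScaleAbs(edges, edges);
        cv::threshold(edges, binary, 40, 255, cv::THRESH_BINARY); // thresholding
        return binary;                                         // binary border image
    }

    // Contours are then extracted from the binary image, e.g.:
    // std::vector<std::vector<cv::Point>> contours;
    // cv::findContours(binary, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);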

We further improved LDC by eliminating duplicated lines. The information provided by the LDC algorithm is then combined with Kalman filters, which allow an efficient estimation of the position of the detected lines even under occlusion. Combining these results with simple colour filters and feature detectors gives us a high level of certainty in the detection of objects.
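
As a sketch of the tracking step, OpenCV's cv::KalmanFilter can track a line parameterised as (rho, theta) under a constant-velocity model; the state layout below is an illustrative assumption, not necessarily the onboard design:

    // Sketch: tracking one detected line, parameterised as (rho, theta),
    // with an OpenCV Kalman filter and a constant-velocity model.
    // The state layout is illustrative, not the exact onboard design.
    #include <opencv2/video/tracking.hpp>

    cv::KalmanFilter makeLineTracker() {
        // state: [rho, theta, d_rho, d_theta], measurement: [rho, theta]
        cv::KalmanFilter kf(4, 2);
        kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
            1, 0, 1, 0,
            0, 1, 0, 1,
            0, 0, 1, 0,
            0, 0, 0, 1);
        cv::setIdentity(kf.measurementMatrix);
        cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-4));
        cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-2));
        return kf;
    }

    // Per frame: kf.predict() yields the expected line even when LDC
    // temporarily loses it (occlusion); when a matching detection exists,
    // kf.correct(measurement) refines the estimate.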


For bin recognition we implemented the BRISK keypoint detector and descriptor extractor. It is two times faster than SURF and offers better results under scale and rotation variation. The algorithm is finally combined with a
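
A sketch of the BRISK stage, assuming OpenCV 3 or later (illustrative usage, not our exact pipeline):

    // Sketch: BRISK keypoints + descriptors matched against a bin template
    // (illustrative OpenCV usage, not the exact onboard pipeline).
    #include <opencv2/features2d.hpp>
    #include <vector>

    std::vector<cv::DMatch> matchBin(const cv::Mat& templateImg,
                                     const cv::Mat& frame) {
        cv::Ptr<cv::BRISK> brisk = cv::BRISK::create();

        std::vector<cv::KeyPoint> kpT, kpF;
        cv::Mat descT, descF;
        brisk->detectAndCompute(templateImg, cv::noArray(), kpT, descT);
        brisk->detectAndCompute(frame, cv::noArray(), kpF, descF);

        // BRISK descriptors are binary, so match with Hamming distance.
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(descT, descF, matches);
        return matches;
    }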

MISSION PLANNER

The AUV has a component, developed with JDErobot, that allows us to plan each task down to the smallest detail. This component, called visualHFSM, lets us visually delineate finite state machines, so we can describe the behaviour of the vehicle with precision. VisualHFSM generates XML code that can be easily edited, so missions can be archived and later retrieved and/or modified.
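
The generated machine is ultimately executed as a plain finite state machine; a minimal hand-written sketch of such an executor (state names and hooks are hypothetical, and visualHFSM's real XML and generated code are not shown):

    // Minimal sketch of executing a mission finite state machine
    // (hypothetical states and hooks; visualHFSM produces the real code).
    enum class MissionState { SubmergeToDepth, FollowLine, DropMarker, Surface };

    // Hypothetical hooks into perception/control (stubbed for the sketch):
    static bool depthReached()  { return false; }
    static bool binDetected()   { return false; }
    static bool markerDropped() { return false; }

    void missionStep(MissionState& state) {
        switch (state) {
        case MissionState::SubmergeToDepth:
            // depth PID active; advance once the target depth is reached
            if (depthReached()) state = MissionState::FollowLine;
            break;
        case MissionState::FollowLine:
            // heading PID steered by LDC line estimates
            if (binDetected()) state = MissionState::DropMarker;
            break;
        case MissionState::DropMarker:
            if (markerDropped()) state = MissionState::Surface;
            break;
        case MissionState::Surface:
            break;  // terminal state
        }
    }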

SIMULATION AND GUI

Before testing our algorithms on the AUV, we use the Gazebo simulator to study the behaviour of our submarine in a virtual world. Gazebo is a multi-robot simulator for outdoor environments, capable of simulating a population of robots, sensors and objects in a three-dimensional world. It generates both realistic sensor feedback and physically plausible interactions between objects. Although Gazebo was not originally designed for underwater environments, it provides valuable sensor information and allows us to build a proof of concept for every algorithm we implement.

Finally, the submarine has a semi-automatic mode in which it is controlled by a driver over an Ethernet link, turning it into an ROV (Remotely Operated Vehicle). Through a GUI, the driver can control the submarine and visualize the status of the systems, the images from the cameras and all the other sensors, making it possible to navigate in a controlled manner by means of a keyboard.
