FlowChart

architecture-1

1. Master Slave Concept

  • We use this technique to improve the performance of the Raspberry Pi.

Abstract

  • The master is a laptop.
  • The slave is the robot.
    The operations and modules that run on each side are listed below.

Master node functionalities

| Master Function | Description |
| --- | --- |
| ORB SLAM Module | Simultaneous localisation and mapping using a single camera |
| Object Detection Module | Detects pretrained objects and takes the necessary actions in an industrial environment |
| Odometry Module | When autonomous mode is not in use, the robot's position and control are handled by this module |
| Trajectory and Map Saving Module | The saved map is used to find the shortest path with the A* algorithm and to avoid obstacles |
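The shortest-path search performed by the Trajectory and Map Saving Module can be sketched as A* on an occupancy grid. This is a minimal illustration, not the project's actual planner; the grid layout, start/goal coordinates, and Manhattan heuristic are assumptions.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns the list of cells from start to goal, or None if no path exists.
    Uses the Manhattan distance as an admissible heuristic.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path so far)
    seen = set()
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# A 3x3 grid with one obstacle in the centre:
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 2))
```

The heuristic never overestimates the true cost on a 4-connected grid, so the first time the goal is popped the path is optimal.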


Slave node functionalities

| Slave Function | Description |
| --- | --- |
| Manage Module | Allocates memory for each module and runs them all at a sampling frequency of 20 Hz |
| Autonomous and Manual Modes | Switches between modes by calling the local web controller based on global variable flag values |
| Web Controller Module | Using the Tornado web API, anyone can control the robot from a browser |
| Actuator Module | Sends PWM signals to the PCA board using the I2C protocol |
| ROS Module | The Robot Operating System (ROS) enables efficient transfer of sensor messages between master and slave |
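The Manage Module's fixed-rate loop can be sketched as below. This is an illustrative scheduler, not the project's code; the module callbacks and the iteration cap are assumptions.

```python
import time

SAMPLING_HZ = 20
PERIOD = 1.0 / SAMPLING_HZ  # 0.05 s between cycles

def run_modules(modules, iterations):
    """Call every registered module once per cycle at the sampling rate.

    `modules` is a list of zero-argument callables (one per slave module);
    sleeping for the remainder of the period keeps the loop near 20 Hz.
    """
    for _ in range(iterations):
        cycle_start = time.monotonic()
        for module in modules:
            module()
        elapsed = time.monotonic() - cycle_start
        if elapsed < PERIOD:
            time.sleep(PERIOD - elapsed)

calls = []
run_modules([lambda: calls.append("actuator"),
             lambda: calls.append("ros")], iterations=3)
```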

2. Object Detection Module

  • We used the MobileNet SSD transfer learning approach for object detection. Pretrained objects can be recognised, and the label is sent to the slave after processing.

Reason

Single Shot Detection (SSD) takes one single shot to detect multiple objects within an image. It comprises two parts:

  • Extract feature maps
  • Apply convolution filters to detect objects
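The detection stage's raw output then needs post-processing before use. Below is a minimal sketch of SSD-style detection decoding in plain Python; the output layout (label, confidence, normalised box corners) and the 0.5 threshold are assumptions for illustration, not the exact MobileNet SSD tensor format.

```python
def decode_detections(raw, img_w, img_h, conf_threshold=0.5):
    """Filter SSD-style detections and scale boxes to pixel coordinates.

    `raw` holds (label, confidence, x1, y1, x2, y2) tuples with the box
    corners normalised to [0, 1], as SSD heads typically emit.
    """
    results = []
    for label, conf, x1, y1, x2, y2 in raw:
        if conf < conf_threshold:
            continue  # drop low-confidence boxes
        results.append((label, conf,
                        int(x1 * img_w), int(y1 * img_h),
                        int(x2 * img_w), int(y2 * img_h)))
    return results

raw = [("person", 0.92, 0.1, 0.2, 0.4, 0.9),
       ("chair", 0.30, 0.5, 0.5, 0.7, 0.8)]
dets = decode_detections(raw, img_w=300, img_h=300)
```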

  • MobileNet is preferred over ResNet, VGG, or AlexNet because those have a huge network size that escalates the number of computations, whereas MobileNet has a simple architecture comprising a 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution.
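The saving from depthwise separable convolution can be checked with a quick parameter count (the 32-in/64-out channel sizes are illustrative, not from the actual network):

```python
def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution mixes all input channels for every filter.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # One k x k filter per input channel, then a 1 x 1 pointwise convolution.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 32, 64)        # 3*3*32*64 = 18432
sep = depthwise_separable_params(3, 32, 64)  # 3*3*32 + 32*64 = 2336
```

For this layer the separable form uses roughly an eighth of the parameters, which is where MobileNet's speed advantage comes from.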

  • After running all the nodes with `rosrun <package> <pythonfile>` or `roslaunch`, run the command below to visualise the node graph as shown below.

    rqt_graph

architecture-2

  • All the labels are published by the Objectdetection ROS node.
  • Using the image_pipeline package, we can remap the topic through which images are received from the slave.
  • For fast training of the objects to be recognised, we used Haar cascade detection.
  • Positive and negative images collected with the camera are used to train the model in Cascade GUI Trainer, which makes the training process straightforward.
  • The XML file generated after stage training can be used to detect those particular objects in real time.

3. SLAM Module and VISO (Visual Odometry)

architecture-3

  • Use this link to learn more about the odometry package.
  • Our aim was to use a single camera for localisation, and VISO gives accurate x, y, z coordinates of the robot.
  • With the help of ORB-SLAM2 and a monocular camera, map construction is possible as shown above, which helps in avoiding obstacles.
  • The Pangolin viewer is used to visualise the SLAM output.

4. Autonomous Driving Module

Below is the Block Diagram

block-1

Expand each section for more details

RC car

A radio-controlled (RC) car consists of an electronic speed controller (ESC), a PCA9685 throttle and steering driver, and a steering servo. The PCA9685 driver is interfaced with the Raspberry Pi using the I2C serial communication protocol. Based on the algorithm setting, the corresponding outputs (steering and throttle) from the Raspberry Pi are sent to the PCA9685, which controls the steering angle of the front wheels and the throttle of the RC car.
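The conversion from a steering command to a PCA9685 pulse can be sketched as follows. This is a hedged sketch: the 1000–2000 µs servo pulse range, the 60 Hz refresh rate, and the ±30° steering range are illustrative assumptions, not values measured from this car.

```python
PWM_FREQ_HZ = 60          # PCA9685 refresh rate (assumed)
RESOLUTION = 4096         # the PCA9685 uses a 12-bit counter per period
PERIOD_US = 1_000_000 / PWM_FREQ_HZ  # ~16667 us per PWM period

def angle_to_ticks(angle_deg, min_us=1000, max_us=2000, max_angle=30):
    """Map a steering angle in [-max_angle, +max_angle] degrees to
    PCA9685 on-time ticks for a standard hobby servo pulse."""
    angle_deg = max(-max_angle, min(max_angle, angle_deg))  # clamp to range
    # Linear map: -max_angle -> min_us, +max_angle -> max_us.
    pulse_us = min_us + (angle_deg + max_angle) / (2 * max_angle) * (max_us - min_us)
    return round(pulse_us / PERIOD_US * RESOLUTION)

centre = angle_to_ticks(0)    # 1500 us pulse -> wheels straight
left = angle_to_ticks(-30)    # 1000 us pulse -> full left lock
```

The resulting tick count is what would be written to the PCA9685 channel over I2C each control cycle.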

Raspberry Pi 3B

A microprocessor that outputs digital pulses or signals from its GPIO (general-purpose input/output) pins based on the algorithm setting. The Raspberry Pi camera is also interfaced with this processor for image processing. The GUI (graphical user interface) or basic terminal of the Raspbian OS installed on the Raspberry Pi can be viewed on a computer over an SSH (Secure Shell) connection.

Camera

A Raspberry Pi wide-angle camera is used to capture lane images, which are processed with the OpenCV library on the Raspberry Pi processor.

Algorithm setting

This setting switches between the behavioural cloning and reinforcement learning algorithms. In manual driving mode we also collect images of the track or lane while driving, to train the deep learning model that will eventually predict the steering and throttle outputs when the RC car is in autonomous mode.
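The data collection step amounts to pairing each camera frame with the driver's commands at that instant. The record fields and file naming below are illustrative assumptions, not the project's actual log format.

```python
import json

def make_record(index, steering, throttle):
    """Pair an image filename with the steering/throttle the driver applied,
    so the deep learning model can later learn to predict them from the frame."""
    return {
        "cam/image": f"frame_{index:05d}.jpg",
        "user/angle": steering,    # assumed normalised to [-1, 1]
        "user/throttle": throttle, # assumed normalised to [0, 1]
    }

record = make_record(7, steering=-0.25, throttle=0.6)
line = json.dumps(record)  # one JSON line per collected sample
```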

Google colab for Training

The data collected in manual mode is transferred to Google Colab for training the model with the chosen hyperparameters. The collected data is called a TUB, a collection of images grouped together into batches for batch training. A local machine does not train the model accurately and the training process is slow, so we use the Google cloud platform, which has powerful GPUs and CPUs. The Jupyter notebook in Google Colab also offers many built-in libraries and auto-completion features, which make coding in Python easy. `rsync` is a Linux tool used to transfer the h5 models, i.e. the trained models, from the local machine to the Raspberry Pi 3B.
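The batching of TUB samples mentioned above can be sketched as below. The batch size of 4 is an arbitrary example, and real training pipelines usually also shuffle the samples first.

```python
def make_batches(samples, batch_size):
    """Split the list of collected samples into consecutive batches;
    the last batch may be shorter than batch_size."""
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

# Hypothetical TUB of 10 collected frames split into batches of 4:
tub = [f"frame_{i:05d}.jpg" for i in range(10)]
batches = make_batches(tub, batch_size=4)  # 4 + 4 + 2 samples
```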