
Published on Nov 30, 2023

Abstract

The goal of this project is to track a small flying robot (10 g) while it flies freely in a 6 x 7 m experimentation room called the "holodeck". The room is equipped with eight projectors hanging from the ceiling that can simulate a virtual reality on its walls. It also contains a network camera with a fisheye lens that provides a hemispherical view of the entire room. Such visual tracking is highly desirable for trajectory reconstruction and analysis, in order to measure the robot's behaviour in this environment.

Comparing actuator commands with the obtained trajectory will also help in identifying the parameters of a flight dynamics model, allowing realistic simulation. Images are recorded at a frame rate of 15 fps. Image differencing is then applied to two consecutive images; the resulting blobs correspond to the positions of the robot and of its shadow.
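As a rough illustration of this differencing step, the sketch below uses OpenCV to subtract consecutive frames and extract blob centroids. The video file name, threshold, and minimum blob area are hypothetical values chosen for the example, not parameters from the project.

```python
# Minimal sketch of frame differencing and blob extraction (assumed values).
import cv2

def detect_blobs(prev_gray, curr_gray, thresh=25, min_area=10):
    """Return centroids of moving blobs found by differencing two frames."""
    diff = cv2.absdiff(curr_gray, prev_gray)            # pixel-wise difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < min_area:               # ignore noise blobs
            continue
        m = cv2.moments(c)
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

cap = cv2.VideoCapture("holodeck.avi")                  # hypothetical recording
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(detect_blobs(prev, gray))                     # robot and shadow blobs
    prev = gray
```

With two consecutive images the robot and its shadow each produce one blob, so the list returned per frame would typically contain two centroids.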

The spherical azimuth (phi) and zenith (theta) angles of the blobs in the image can be calculated. These two angles are then transformed into the Cartesian coordinate system of the room. An earlier approach, suggested by Julien Reuse, used a single camera and exploited the robot's shadow on the walls to estimate the 3D position of the flying robot in the experimental room.
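A minimal sketch of how those angles could be obtained from a blob's pixel coordinates is shown below. It assumes an equidistant (r = f·theta) fisheye projection; the principal point and the pixels-per-radian factor are hypothetical calibration values, not those of the holodeck camera.

```python
# Sketch: pixel coordinates -> azimuth/zenith angles -> camera-frame ray.
import numpy as np

CX, CY = 640.0, 640.0      # assumed principal point (pixels)
F = 407.0                  # assumed pixels per radian of zenith angle

def pixel_to_angles(u, v):
    """Map an image point to spherical (phi, theta) in the camera frame."""
    dx, dy = u - CX, v - CY
    phi = np.arctan2(dy, dx)            # azimuth around the optical axis
    theta = np.hypot(dx, dy) / F        # zenith angle from the optical axis
    return phi, theta

def angles_to_direction(phi, theta):
    """Unit vector of the viewing ray in the camera coordinate system."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

phi, theta = pixel_to_angles(812.0, 455.0)   # example blob position
ray_cam = angles_to_direction(phi, theta)
```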

To better understand the trajectory of the flying robot, we must localize it in the three dimensions of the experimental room. There are many ways to track an object moving in a given environment using sensors: sensors can be mounted on the moving object itself, or external sensors can be used to localize it.

As the flying robot we are trying to localize is very light (~5.2 grams) [1], onboard localization devices or sensors of any kind are ruled out. Consequently, it needs to be tracked by external systems such as acoustic trackers or vision systems.

The cameras we plan to use have a hemispherical field of view, so the scene projected onto the image plane is omnidirectional [illustration 2]. The most difficult part of this localization attempt is the coordinate transformation between the image taken with the camera and the real Cartesian coordinates of the flying robot in the experimental room.
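One form this transformation could take is sketched below: a camera-frame viewing ray is expressed in the room frame and intersected with a known plane (for instance the floor or a wall, where the shadow lies). The camera pose and the plane are assumptions for illustration, not calibrated values of the holodeck.

```python
# Sketch: camera-frame ray -> room coordinates -> intersection with a plane.
import numpy as np

R = np.diag([1.0, -1.0, -1.0])          # assumed: camera looks straight down
C = np.array([3.0, 3.5, 2.8])           # assumed camera position in the room (m)

def ray_in_room(ray_cam):
    """Express a camera-frame viewing ray in the room coordinate system."""
    return R @ ray_cam

def intersect_plane(origin, direction, n, d):
    """Point where the ray origin + t*direction meets the plane n.x = d."""
    t = (d - n @ origin) / (n @ direction)
    return origin + t * direction

# Example: a shadow blob's ray intersected with the floor plane z = 0.
shadow_dir = ray_in_room(np.array([0.2, 0.1, 0.97]))
shadow_point = intersect_plane(C, shadow_dir, np.array([0.0, 0.0, 1.0]), 0.0)
```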

Each camera has a network interface and provides real-time visual intelligence. It has a built-in video motion detector and allows digital pan/tilt/zoom of either the live view or the recorded images.

When an image is recorded with a camera, a three-dimensional scene is projected onto a two-dimensional plane (the film or a light-sensitive sensor array). We therefore lose a degree of freedom, and the task is to retrieve the position of the object in XYZ space. The interpretation of 3D scenes from 2D images is not trivial. However, there are different possibilities, such as stereo imaging or triangulation methods, through which vision can become a powerful tool for capturing the environment [20]. Some of these methods are already well known and can be summarized briefly.
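As one illustration of such a triangulation method, the sketch below estimates a 3D point as the midpoint of the shortest segment between two viewing rays (for instance from two cameras, or from the robot's ray and the ray reconstructed through its shadow). The ray origins and directions are made up for the example.

```python
# Sketch: triangulation as the closest point between two viewing rays.
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays o + t*d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = d1 @ d2
    w = o1 - o2
    denom = 1.0 - b * b                  # assumes the rays are not parallel
    t1 = (b * (d2 @ w) - d1 @ w) / denom
    t2 = (d2 @ w - b * (d1 @ w)) / denom
    p1 = o1 + t1 * d1                    # closest point on the first ray
    p2 = o2 + t2 * d2                    # closest point on the second ray
    return 0.5 * (p1 + p2)

# Example with two made-up rays that converge near (2, 3, 1).
p = triangulate(np.array([3.0, 3.5, 2.8]), np.array([-1.0, -0.5, -1.8]),
                np.array([0.0, 3.0, 0.0]),  np.array([2.0, 0.0, 1.0]))
print(p)
```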

Project Done by Kaushik