Lioulemes Alexandros

Department of Computer Science and Engineering - University of Texas at Arlington


The goal of this study is to propose a motion analysis system that improves the depth camera's readings through a kinematic model describing the motion of the human arm. In our current experimental set-up, we use the Kinect v2 to capture a participant performing rehabilitation exercises with the Barrett WAM robotic manipulator.
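The kinematic model itself is not spelled out here, so the sketch below only illustrates the general idea: a minimal example, assuming a single fixed upper-arm length, of projecting a noisy Kinect v2 elbow reading back onto that constraint. The function name, coordinates, and link length are hypothetical.

```python
import numpy as np

def enforce_link_length(shoulder, elbow, link_len):
    """Project a noisy elbow estimate onto a sphere of fixed
    upper-arm length centered at the shoulder (a hypothetical
    simplification of the arm's kinematic constraints)."""
    v = elbow - shoulder
    d = np.linalg.norm(v)
    if d < 1e-6:                      # degenerate reading: leave as-is
        return elbow
    return shoulder + v * (link_len / d)

# Example: correct one noisy Kinect v2 reading (coordinates in meters)
shoulder = np.array([0.10, 0.40, 2.00])
elbow = np.array([0.12, 0.11, 2.05])  # noisy depth estimate
corrected = enforce_link_length(shoulder, elbow, link_len=0.30)
print(corrected)
```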

In this project, called MAGNI-3D, we combine robot-assisted rehabilitation with a 3D video game that motivates the user through a graphical user interface. The system is operated by therapists and allows them to interact in real time with patients. We propose a game to be played by a patient who has suffered an injury to the arm (e.g., stroke, spinal injury, or a physical injury to the shoulder itself). In this work, we evaluate the user's exercises against the regimen prescribed by the therapist, using the Barrett Whole Arm Manipulator (WAM) robotic arm to capture the user's upper-limb range of motion. The results from participants and the accompanying surveys confirm that our system can be used in a clinical environment to improve communication and interaction between therapist and patient in robot-aided therapy.

The University of Texas at Arlington, and especially the HERACLEIA Human-Centered Computing Laboratory, present a new approach to physical human-robot interaction. In the MAGNI system, we use the advanced capabilities of the Barrett arm to help the patient complete a rehabilitation session through the Robot-Game Interaction program by providing entertainment incentives. MAGNI is a self-managed, game-based human-robot tele-rehabilitation system that can be operated without continuous supervision by the therapist. For each patient, specific types of motion are associated with their rehabilitation regimen. Collected data include the user's motion over time, game performance and its changes, reaction times or delays, and the overall score.

In this study we explore a simple scenario where the robot provides a service to users while acquiring close-up footage. In the RoboCoffee system, a user orders coffee via a mobile application, and by employing facial and clothing features the robot returns the mug of coffee to the user who previously ordered it.
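The exact facial and clothing features are not specified here; the following is a minimal sketch, assuming OpenCV's Haar-cascade face detector and an HSV color histogram of the region below the face as a clothing signature. The function names and the similarity threshold are hypothetical.

```python
import cv2

# Hypothetical sketch: detect a face with OpenCV's Haar cascade, then
# describe the clothing below it with an HSV color histogram that can
# be matched against the signature stored when the order was placed.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def clothing_signature(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    torso = frame[y + h: y + 3 * h, x: x + w]   # region below the face
    if torso.size == 0:
        return None
    hsv = cv2.cvtColor(torso, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def same_user(sig_a, sig_b, threshold=0.7):
    # Correlation near 1.0 means similar clothing colors
    return cv2.compareHist(sig_a, sig_b, cv2.HISTCMP_CORREL) > threshold
```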

In the Motion Capture (MoCap) Laboratory we use a Vicon system of 16 infrared cameras to obtain real-time position information for the AR.Drone and a wand. Reflective markers are installed on both objects to track their positions with millimeter accuracy, and we visualize and record those positions.

The AR.Drone navigates autonomously in corridors and avoids pedestrians and walls in order to reach the target position. The only sensor in our system is the front camera. For navigation, our system relies on a vanishing-point algorithm, the Hough transform for wall detection and avoidance, and HOG descriptors with an SVM classifier for pedestrian detection.
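As a rough illustration of the two vision components named above, the sketch below estimates a vanishing point by intersecting Hough lines in a least-squares sense and detects pedestrians with OpenCV's pretrained HOG + linear SVM people detector. The thresholds and parameters are assumptions, not the project's tuned values.

```python
import cv2
import numpy as np

def estimate_vanishing_point(frame):
    """Rough vanishing-point estimate: detect edges, fit lines with
    the Hough transform, and intersect them in a least-squares sense."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None or len(lines) < 2:
        return None
    # Each Hough line satisfies x*cos(theta) + y*sin(theta) = rho.
    A = np.array([[np.cos(t), np.sin(t)] for r, t in lines[:, 0]])
    b = np.array([r for r, t in lines[:, 0]])
    vp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return vp  # (x, y) pixel coordinates of the dominant intersection

# Pretrained HOG + linear SVM people detector for pedestrian avoidance
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame):
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8),
                                    padding=(8, 8), scale=1.05)
    return rects  # (x, y, w, h) boxes around detected people
```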

In this study a robot detects and tracks the faces of people standing in front of it. For face detection we use the Viola-Jones method, and for head tracking we use OpenCV's Mean-Shift implementation. To overcome occlusions caused by people's movements, we use the coordinates of the Kinect skeleton model to identify each person's ID.
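A minimal sketch of the detect-then-track pipeline, assuming OpenCV's Haar-cascade Viola-Jones detector seeds a hue-histogram Mean-Shift tracker; the helper names and parameters are hypothetical.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def init_track(frame):
    """Viola-Jones detection to seed the Mean-Shift track window."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None, None
    x, y, w, h = faces[0]
    roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return (x, y, w, h), hist

def track(frame, window, hist):
    """One Mean-Shift step: back-project the hue histogram and
    shift the window toward the densest matching region."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(back, window, term)
    return window
```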

The goal of the implemented procedure is to guide the robot to a target position, represented by an image, starting from an arbitrary position that keeps the target scene in its field of view. We compute and decompose the homography between the current and target images to extract the rotation and translation vectors. The P3-DX platform then uses these vectors to reach the target position.
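A minimal sketch of the homography step, assuming matched feature points between the current and target images and a known camera intrinsic matrix K; cv2.decomposeHomographyMat returns up to four candidate solutions, from which the physically consistent one must be selected.

```python
import cv2

def motion_to_target(cur_pts, tgt_pts, K):
    """Estimate the homography between matched feature points in the
    current and target images, then decompose it into candidate
    rotations and translations (K is the camera intrinsic matrix)."""
    H, mask = cv2.findHomography(cur_pts, tgt_pts, cv2.RANSAC, 5.0)
    n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Up to four solutions come back; in practice the physically
    # consistent one is chosen using visibility constraints.
    return Rs, ts
```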

The PeopleBot platform tracks a tennis ball using the Mean-Shift algorithm, an efficient approach for tracking objects whose appearance is described by color histograms. By computing the object's offset from the center of the image plane, the platform tracks and follows the selected object.
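A minimal sketch of one tracking-and-steering step, assuming a hue histogram of the ball has already been computed; the proportional gain is a hypothetical tuning value.

```python
import cv2

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def follow_ball(frame, window, hist, gain=0.005):
    """Track the ball with Mean-Shift and convert its horizontal
    offset from the image center into a steering command."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(back, window, term)
    x, y, w, h = window
    offset = (x + w / 2) - frame.shape[1] / 2   # pixels from center
    turn_rate = -gain * offset                  # steer to recenter
    return window, turn_rate
```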

StereoBot tracks an object using relative depth derived from stereo vision. The visual controller uses the relative depth information to move the platform forward and backward.
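The controller details are not given here; the sketch below, assuming rectified grayscale stereo pairs and OpenCV's block-matching stereo, turns the median disparity inside the tracked region into a forward/backward command. The target disparity and gain are hypothetical.

```python
import cv2
import numpy as np

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def depth_command(left_gray, right_gray, roi, target_disp=24.0, gain=0.02):
    """Compute a disparity map, read the median disparity inside the
    tracked object's ROI, and command forward/backward motion to hold
    it at a target value (larger disparity = object is closer)."""
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    x, y, w, h = roi
    d = np.median(disp[y:y+h, x:x+w])
    if d <= 0:                       # no valid disparity in the ROI
        return 0.0
    return gain * (d - target_disp)  # positive = forward, negative = backward
```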