This repository contains solutions to all the programming assignments and quizzes in the Perception course of the Coursera Robotics Specialization, offered by the University of Pennsylvania (instructor: Prof. Kostas Daniilidis).
How can robots perceive the world and their own movements so that they can accomplish navigation and manipulation tasks? In this course, we study how images and videos acquired by cameras mounted on robots are transformed into representations such as features and optical flow. These 2D representations then allow us to extract 3D information about where the camera is and in which direction the robot is moving. You will come to understand how grasping objects is facilitated by computing the 3D pose of objects, and how navigation can be accomplished by visual odometry and landmark-based localization.

Mathematical prerequisites: students taking this course are expected to have some familiarity with linear algebra, single-variable calculus, and differential equations. Some experience programming with MATLAB or Octave is recommended (MATLAB is used throughout the course), and MATLAB requires a 64-bit computer. To run the programs on your own machine, you need MATLAB installed along with the appropriate toolboxes. The data used in this course is not included, but any similar data should work.
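To give a flavor of the pipeline described above, here is a minimal, illustrative MATLAB sketch (not one of the assignment solutions) that detects and matches image features and estimates optical flow between two frames. It assumes the Computer Vision Toolbox is available, and the input files `frame1.png` and `frame2.png` are hypothetical stand-ins for two consecutive frames from your own data.

```matlab
% Minimal sketch: feature matching and optical flow between two frames.
% Assumes the Computer Vision Toolbox; frame1.png/frame2.png are placeholders.

I1 = im2gray(imread('frame1.png'));   % first frame, converted to grayscale
I2 = im2gray(imread('frame2.png'));   % second frame

% Detect Harris corners and match descriptors between the two frames.
pts1 = detectHarrisFeatures(I1);
pts2 = detectHarrisFeatures(I2);
[f1, v1] = extractFeatures(I1, pts1);
[f2, v2] = extractFeatures(I2, pts2);
pairs = matchFeatures(f1, f2);
matched1 = v1(pairs(:, 1));
matched2 = v2(pairs(:, 2));
figure; showMatchedFeatures(I1, I2, matched1, matched2);

% Dense optical flow with the Lucas-Kanade method.
flowModel = opticalFlowLK('NoiseThreshold', 0.009);
estimateFlow(flowModel, I1);           % initialize with the first frame
flow = estimateFlow(flowModel, I2);    % flow from frame 1 to frame 2
figure; imshow(I2); hold on;
plot(flow, 'DecimationFactor', [10 10], 'ScaleFactor', 10);
hold off;
```

Matched features and flow fields like these are the 2D correspondences from which the course's later topics, such as 3D pose estimation and visual odometry, are computed.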
This project is licensed under the MIT License - see the LICENSE.md file for details.