Planning & Learning in Robotics: Course Projects
These three projects, completed as part of the Planning and Learning in Robotics course, address motion planning and control problems across a range of environments. The first project computes optimal policies in a Door & Key environment using the value iteration algorithm and evaluates the resulting policies on several maps. The second tackles 3D motion planning by implementing and comparing search-based and sampling-based planners (A*, RRT, and RRT*) to find collision-free paths in different environments. The third addresses 2D trajectory tracking as an infinite-horizon stochastic optimal control problem, implementing Certainty Equivalent Control (CEC) and Generalized Policy Iteration (GPI); in the experiments, CEC outperformed GPI in both tracking accuracy and computational efficiency. Together, the projects demonstrate practical techniques for real-world robotics planning problems.
Project 1: Dynamic Programming
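The value iteration approach used in this project can be sketched as follows. This is a minimal illustration on a toy 4-state chain MDP, not the actual Door & Key environment; the transition table `P`, cost matrix `cost`, and discount factor are placeholder assumptions standing in for the project's model.

```python
import numpy as np

# Toy deterministic chain MDP: states 0..3, state 3 is the
# absorbing goal. Actions: 0 = left, 1 = right.
n_states, n_actions, gamma = 4, 2, 0.95
P = np.zeros((n_states, n_actions), dtype=int)   # next-state table
for s in range(n_states):
    P[s, 0] = max(s - 1, 0)
    P[s, 1] = min(s + 1, n_states - 1)
cost = np.ones((n_states, n_actions))            # unit step cost
cost[3, :] = 0.0                                 # staying at goal is free

# Value iteration: repeated Bellman backups until convergence
V = np.zeros(n_states)
for _ in range(500):
    Q = cost + gamma * V[P]                      # Q-values, shape (S, A)
    V_new = Q.min(axis=1)
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new
policy = Q.argmin(axis=1)                        # greedy optimal policy
```

On this toy chain the optimal policy moves right in every state, and the value of each state is the discounted cost-to-go to the goal.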
Project 2: Motion Planning
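A search-based planner of the kind compared in this project can be sketched with A* on a small 2D grid. The project's 3D environments and collision checker are not reproduced here; obstacles are simply a set of blocked cells, and the Manhattan-distance heuristic is an assumption appropriate for 4-connected grids.

```python
import heapq

def astar(start, goal, obstacles, width, height):
    """A* on a 4-connected grid; returns a collision-free path or None."""
    def h(p):                                    # admissible Manhattan heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), start)]              # priority queue ordered by f = g + h
    g = {start: 0}
    parent = {}
    closed = set()
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:                          # reconstruct path goal -> start
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue                         # outside the grid
            if nxt in obstacles:
                continue                         # in collision
            tentative = g[cur] + 1
            if tentative < g.get(nxt, float("inf")):
                g[nxt] = tentative
                parent[nxt] = cur
                heapq.heappush(open_heap, (tentative + h(nxt), nxt))
    return None                                  # no collision-free path exists

# Example: route around a wall at x=2 with a gap at y=4
wall = {(2, y) for y in range(4)}
path = astar((0, 0), (4, 0), wall, 5, 5)
```

The sampling-based planners (RRT, RRT*) trade this grid discretization for random sampling of the continuous configuration space, which scales better to 3D.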
Project 3: Infinite-Horizon Stochastic Optimal Control
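The certainty-equivalence idea behind CEC can be illustrated in miniature: design a controller for the noise-free model, then apply it to the noisy system. The sketch below uses a linear double integrator with finite-horizon LQR gains as a stand-in for the project's nonlinear robot dynamics and receding-horizon optimization; the dynamics, weights, and noise level are all illustrative assumptions.

```python
import numpy as np

# Noise-free (deterministic) model used for controller design
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position error, velocity]
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                # penalize tracking error
R = np.array([[0.1]])                   # penalize control effort
T = 50                                  # rollout horizon

# Backward Riccati recursion on the deterministic model (the
# "certainty equivalent" design step: noise is ignored here)
P = Q.copy()
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                         # time-ordered gains K_0 .. K_{T-1}

# Forward rollout: apply the CEC gains to the *noisy* system
rng = np.random.default_rng(0)
x = np.array([2.0, 0.0])                # start 2 m from the reference
for K in gains:
    u = -K @ x
    x = A @ x + B @ u + rng.normal(0.0, 0.01, size=2)  # motion noise
```

Because the design step solves a deterministic problem, CEC avoids the state-space discretization that GPI requires, which is one reason it can be faster and more accurate in practice.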