Last modified: 23 Jul 2024 10:43
The aim of the course is to give an overview of the different approaches to specifying the motions of industrial and mobile robots, up to autonomous robots that learn from their experiences. The course introduces students to the fundamentals of machine learning that are relevant for robotics research and practice.
Study Type | Postgraduate | Level | 5 |
---|---|---|---|
Term | Second Term | Credit Points | 15 credits (7.5 ECTS credits) |
Campus | Aberdeen | Sustained Study | No |
Co-ordinators | | | |
The course spans from manual teach-pendant programming of industrial manipulators to reinforcement learning for autonomous robots. It covers the path-planning problem and its (un)informed graph-search solutions, as well as the machine learning problem and the concept of overfitting. The course begins with a classification of programming approaches for industrial robots. Later, the characteristics of maximum likelihood and maximum a posteriori estimation are compared, as are those of (Gaussian) naive Bayes and logistic regression classifiers. An overview of the units of multi-layer neural networks and of backpropagation is given. Finally, the differences and similarities between value-based, policy-based and actor-critic reinforcement learning techniques are conveyed.
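As a taste of the dynamic-programming material, here is a minimal sketch of value iteration on a toy Markov Decision Process. The two-state transition model `P`, rewards `R` and discount factor below are invented for illustration and are not taken from the course materials.

```python
# Illustrative sketch only: value iteration on an invented two-state MDP.
import numpy as np

n_states, n_actions = 2, 2
# P[s, a, s'] = transition probability, R[s, a] = immediate reward (assumed toy values)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s,a,s') V(s')
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:  # converged
        break
    V = V_new

policy = Q.argmax(axis=1)  # policy that is greedy with respect to the values
print("State values:", V)
print("Policy:", policy)
```

Value iteration repeats the Bellman optimality backup until the value function stops changing; the policy that is greedy with respect to the converged values is optimal for this toy model.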
Course Content
Information on contact teaching time is available from the course guide.
Assessment Type | Summative | Weighting | 70% |
---|---|---|---|
Assessment Weeks | 40,41 | Feedback Weeks | 42,43,44 |
Feedback | On-Campus Invigilated Open Book Exam. Feedback by appointment with the course coordinator. | | |
Knowledge Level | Thinking Skill | Outcome |
---|---|---|
Conceptual | Analyse | Select (and apply) appropriate robot programming methods and analyse code in a robot programming language |
Conceptual | Evaluate | Discuss the concepts of naive Bayes, Gaussian naive Bayes and logistic regression classifiers |
Procedural | Analyse | Derive maximum likelihood and maximum a posteriori estimators as well as apply (Gaussian) naive Bayes
Procedural | Analyse | Differentiate between the components of (un)informed graph-search techniques and Markov Decision Processes as well as the concepts of Utility and Policy |
Procedural | Apply | Apply dynamic programming techniques on Markov Decision Problems including the computation of value functions and optimal policies |
Procedural | Apply | Use and train decision trees including the basic operations for data (pre)processing |
Procedural | Apply | Apply and implement graph-search techniques for path planning including the computation of roadmaps |
Procedural | Apply | Implement functions with multi-layer neural networks and derive gradients for their training
Procedural | Evaluate | Assess reinforcement learning solutions to toy text, classic control and robotics problems |
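As a pointer to what the path-planning outcome above involves, the following is a minimal sketch of informed graph search (A*) on a small occupancy grid. The grid, the start and goal cells, and the Manhattan-distance heuristic are invented examples, not course materials.

```python
# Illustrative sketch only: A* search on an invented 4-connected grid.
import heapq

grid = [  # 0 = free cell, 1 = obstacle
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
start, goal = (0, 0), (2, 3)

def heuristic(cell):
    # Manhattan distance: admissible for 4-connected grids
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def a_star():
    # frontier entries: (f = g + h, g = cost so far, cell, path to cell)
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier, (g + 1 + heuristic(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists

print(a_star())  # e.g. [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]
```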
Assessment Type | Summative | Weighting | 15% |
---|---|---|---|
Assessment Weeks | 28,31,34,35 | Feedback Weeks | |
Feedback | Lab notebooks are returned with comments after the lab sessions. General feedback is provided to the whole class during and after each session. One-to-one discussion with students where requested or appropriate. | | |
Knowledge Level | Thinking Skill | Outcome |
---|---|---|
Procedural | Analyse | Derive maximum likelihood and maximum a posteriori estimators as well as apply (Gaussian) naive Bayes
Procedural | Apply | Use and train decision trees including the basic operations for data (pre)processing |
Procedural | Apply | Apply dynamic programming techniques on Markov Decision Problems including the computation of value functions and optimal policies |
Procedural | Apply | Apply and implement graph-search techniques for path planning including the computation of roadmaps |
Procedural | Evaluate | Assess reinforcement learning solutions to toy text, classic control and robotics problems |
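The decision-tree outcome above can be illustrated with a short sketch: a shallow tree fitted after a basic preprocessing step (a train/test split). The use of scikit-learn and the Iris data set here is an assumption made purely for illustration, not a statement about the course's tooling.

```python
# Illustrative sketch only: fitting a small decision tree with basic
# data (pre)processing, using scikit-learn and Iris as stand-ins.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# preprocessing: hold out a test split for an honest evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # shallow tree
tree.fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
```

Limiting the tree depth is one simple guard against the overfitting discussed in the course content.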
Assessment Type | Summative | Weighting | 15% |
---|---|---|---|
Assessment Weeks | 33 | Feedback Weeks | 34,35,36 |
Feedback | Course assignments will be marked in a timely fashion and detailed feedback will be given. | | |
Knowledge Level | Thinking Skill | Outcome |
---|---|---|
Procedural | Analyse | Derive maximum likelihood and maximum a posteriori estimators as well as apply (Gaussian) naive Bayes
Procedural | Apply | Implement functions with multi-layer neural networks and derive gradients for their training
Procedural | Apply | Apply and implement graph-search techniques for path planning including the computation of roadmaps |
Procedural | Apply | Use and train decision trees including the basic operations for data (pre)processing |
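For the neural-network outcome above, here is a minimal sketch of a one-hidden-layer network trained with manually derived gradients (backpropagation). The architecture, XOR data and hyperparameters are invented for illustration.

```python
# Illustrative sketch only: backpropagation for an invented two-layer network.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs (XOR)
y = np.array([[0.], [1.], [1.], [0.]])                  # targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden-layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output-layer parameters
lr = 0.5

for _ in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    # backward pass: gradients of the mean squared error
    d_out = (out - y) * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * (1 - h**2)        # tanh' = 1 - tanh^2
    # gradient-descent updates
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```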
Formative Assessment
There are no assessments for this course.
Resit Assessments
Assessment Type | Summative | Weighting | 100% |
---|---|---|---|
Assessment Weeks | 48,49 | Feedback Weeks | 50,51,52 |
Feedback | By appointment with the course coordinator. | | |
Course Learning Outcomes
Knowledge Level | Thinking Skill | Outcome |
---|---|---|
Procedural | Apply | Implement functions with multi-layer neural networks and derive gradients for their training
Procedural | Analyse | Differentiate between the components of (un)informed graph-search techniques and Markov Decision Processes as well as the concepts of Utility and Policy |
Procedural | Evaluate | Assess reinforcement learning solutions to toy text, classic control and robotics problems |
Procedural | Apply | Apply dynamic programming techniques on Markov Decision Problems including the computation of value functions and optimal policies |
Procedural | Apply | Use and train decision trees including the basic operations for data (pre)processing |
Procedural | Analyse | Derive maximum likelihood and maximum a posteriori estimators as well as apply (Gaussian) naive Bayes
Conceptual | Analyse | Select (and apply) appropriate robot programming methods and analyse code in a robot programming language |
Procedural | Apply | Apply and implement graph-search techniques for path planning including the computation of roadmaps |
Conceptual | Evaluate | Discuss the concepts of naive Bayes, Gaussian naive Bayes and logistic regression classifiers |
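Finally, to illustrate the estimation and classification outcomes, this sketch trains a Gaussian naive Bayes classifier using maximum-likelihood estimates of the class priors and the per-class feature means and standard deviations. The tiny two-feature data set is invented for illustration.

```python
# Illustrative sketch only: Gaussian naive Bayes fitted by maximum likelihood
# on an invented two-class, two-feature data set.
import numpy as np

X = np.array([[1.0, 2.1], [0.9, 1.9], [3.2, 4.0], [3.0, 4.2]])  # features
y = np.array([0, 0, 1, 1])                                      # class labels

classes = np.unique(y)
priors = {c: np.mean(y == c) for c in classes}               # MLE class priors
means  = {c: X[y == c].mean(axis=0) for c in classes}        # per-class feature means
stds   = {c: X[y == c].std(axis=0) + 1e-9 for c in classes}  # per-class feature stds

def log_gaussian(x, mu, sigma):
    # log N(x; mu, sigma^2), evaluated per feature
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def predict(x):
    # naive Bayes decision rule: argmax_c log p(c) + sum_j log p(x_j | c)
    scores = {c: np.log(priors[c]) + log_gaussian(x, means[c], stds[c]).sum()
              for c in classes}
    return max(scores, key=scores.get)

print(predict(np.array([1.1, 2.0])))  # expected: class 0
```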