Abstract
In this project, we simulate a hostile vehicle target whose 2-dimensional continuous state transitions from one discrete time step to the next according to a known probabilistic model. The decision-making agent is described by a discrete 2-dimensional position that transitions deterministically according to a discrete action model.
The partially observable nature of the target is captured by a belief state over its possible states. The observation probability model is defined as a function of the belief state and the deterministic agent state. The agent is allowed to shoot at the target only once. A reward function was defined to capture the need to eliminate the target quickly, balanced against the need to establish a good enough estimate of the target's position before shooting, lest the target survive.
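To make the belief-state idea concrete, here is a minimal sketch of a particle-filter belief update for a 2-D target. All names and model parameters (`propagate`, `likelihood`, the Gaussian noise levels) are illustrative assumptions, not the project's actual models or API.

```julia
using Random

# Assumed transition model: Gaussian random walk on the target's
# continuous 2-D state.
propagate(x::Vector{Float64}; σ=0.5) = x .+ σ .* randn(2)

# Assumed observation likelihood: Gaussian noise on the measured position.
likelihood(z, x; σz=1.0) = exp(-sum((z .- x) .^ 2) / (2σz^2))

# One step of a bootstrap particle filter: propagate, weight, resample.
function update_belief(particles, z)
    proposed = [propagate(x) for x in particles]
    w = [likelihood(z, x) for x in proposed]
    w ./= sum(w)
    c = cumsum(w)
    c[end] = 1.0   # guard against floating-point shortfall in the cumsum
    idx = [searchsortedfirst(c, rand()) for _ in proposed]
    return proposed[idx]
end

# Example: particles uniform on [0,10]^2, then one update on a
# measurement near the center.
Random.seed!(1)
belief = [10 .* rand(2) for _ in 1:1000]
belief = update_belief(belief, [5.0, 5.0])
```

After the update, particles cluster near the observation, which is how the belief would sharpen before the agent commits its single shot.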
This project implements a Monte Carlo Real-Time Belief Space Search (MC-RTBSS) method in the Julia programming language for the online solution of the POMDP described above.
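The flavor of such an online belief-space search can be sketched as a depth-limited recursion that evaluates each action by sampling a handful of observations and updating the belief at every node. This is only an assumed simplification in the spirit of MC-RTBSS, not the project's implementation: `step_target`, `observe`, the reward, and all parameters are illustrative stand-ins.

```julia
using Random, Statistics

const ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # discrete agent moves

step_target(x) = x .+ 0.5 .* randn(2)   # assumed target dynamics
observe(x) = x .+ randn(2)              # assumed noisy position sensor
# Assumed reward: penalize mean squared distance from agent to belief.
reward(agent, belief) = -mean(sum((x .- agent) .^ 2) for x in belief)

# Depth-limited search: for each action, sample `nobs` observations,
# resample the belief accordingly, recurse, and return the best pair.
function search(agent, belief; depth=2, nobs=3, γ=0.95)
    depth == 0 && return (nothing, 0.0)
    best_a, best_v = ACTIONS[1], -Inf
    for a in ACTIONS
        agent2 = agent .+ a
        v = reward(agent2, belief)
        for _ in 1:nobs
            b1 = [step_target(x) for x in belief]
            z = observe(rand(b1))                     # sampled observation
            w = [exp(-sum((z .- x) .^ 2) / 2) for x in b1]
            w ./= sum(w)
            c = cumsum(w); c[end] = 1.0
            b2 = b1[[searchsortedfirst(c, rand()) for _ in b1]]
            v += γ * search(agent2, b2; depth=depth - 1, nobs=nobs, γ=γ)[2] / nobs
        end
        if v > best_v
            best_a, best_v = a, v
        end
    end
    return (best_a, best_v)
end

Random.seed!(2)
belief0 = [5 .* rand(2) for _ in 1:100]
a, v = search((0, 0), belief0)
```

Run online, the agent would execute the returned action, receive a real observation, update its belief, and search again from the new root.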
Report
The fine details of our implementation are described in the accompanying report from the Stanford AA228/CS238 course.
Download Report
Authors and Contributors
This project was completed as part of Stanford's AA228/CS238 Decision Making Under Uncertainty course, Fall Quarter 2014.
The team members are:
* Kyle Reinke (M.S. Aero/Astro Engineering)
* Joel Pazhayampallil (M.S. Mechanical Engineering)
The solution method is an adaptation of the MC-RTBSS algorithm presented by Travis B. Wolf and Mykel J. Kochenderfer.
Contact
Please refer to the Team Members sidebar above to access our LinkedIn profiles. Thank you!
This page was generated by GitHub Pages using the Architect theme by Jason Long.