What is Motion Planning?


Motion planning asks a deceptively simple question: given a robot, a world, a place the robot is, and a place we want it to be, what should the robot do? The remainder of this book is a sixteen-chapter answer. Before any algorithm appears, this opening chapter nails down the vocabulary, the problem statement, and the map of the field, so that every later chapter can refer back to a shared foundation.

1. Five things that might be called “a plan”

When roboticists say “plan” they mean one of at least five different mathematical objects. Confusing them is the single most common source of early-student bewilderment, so we disambiguate first.

Let $q \in \mathcal{C}$ denote a configuration — a point in the robot’s configuration space $\mathcal{C}$, the set of all possible poses the robot can take. Let $x \in \mathcal{X}$ denote a state — a point in the state space $\mathcal{X}$, which augments the configuration with its time derivatives (velocities) when dynamics matter. Let $u \in \mathcal{U}$ denote a control input — a point in the control space $\mathcal{U}$, the set of commands the robot’s actuators can accept. Let $o \in \mathcal{O}$ denote an observation — a point in the observation space $\mathcal{O}$, the sensor reading the robot sees instead of the true state when perception is incomplete.

| Object | Type | Example |
| --- | --- | --- |
| Path | $\tau: [0,1] \to \mathcal{C}$ | “Move your joints through this sequence of configurations.” |
| Trajectory | $\tau: [0, T] \to \mathcal{C}$ or $\mathcal{X}$ | Path plus a time parameterisation: “be here at time 2.5 s.” |
| Policy | $\pi: \mathcal{X} \to \mathcal{U}$ (or $\mathcal{O} \to \mathcal{U}$) | A closed-loop rule: “whatever state you find yourself in, apply this control.” |
| Controller | $\kappa: \mathcal{X} \times \mathcal{X}_{\text{ref}} \to \mathcal{U}$ | A tracking rule: given a reference trajectory and the current state, produce actuator commands that close the loop. |
| Task plan | Sequence of symbolic operators | “Pick mug; move to shelf; place mug.” No geometry. |
A motion plan is not a trajectory

A “motion plan” colloquially means whichever of these objects the planner produces. An RRT produces a path. A trajectory optimiser produces a trajectory. An MPC loop produces a sequence of short trajectories, each executed open-loop for one timestep, which is effectively a policy. When reading a paper, always ask: which object is this actually computing?

The objects form a rough hierarchy of commitment. A path commits to geometry but not timing, so it can be re-parameterised later to respect velocity limits. A trajectory commits to timing. A policy refuses to commit to anything until it sees the current state — it is the most robust to disturbances but also the most expensive to compute, because it must be defined everywhere. A controller is the online machinery that actually closes the loop between a reference (path or trajectory) and the hardware, and is rarely computed by a planner; it is designed once for a robot and reused across many plans.
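The hierarchy of commitment can be made concrete in a few lines of Python. The sketch below is purely illustrative — the type aliases and function names are invented for this example, not taken from any planning library — but it shows a path committing to geometry only, and a (here uniform) time parameterisation turning it into a trajectory.

```python
from typing import Callable

# Illustrative aliases: a configuration is a tuple of joint values.
Config = tuple[float, ...]
Path = Callable[[float], Config]        # s in [0, 1] -> C  (geometry only)
Trajectory = Callable[[float], Config]  # t in [0, T] -> C  (geometry + timing)

def straight_line_path(q_s: Config, q_g: Config) -> Path:
    """A path commits to geometry but not timing."""
    return lambda s: tuple(a + s * (b - a) for a, b in zip(q_s, q_g))

def time_parameterise(path: Path, duration: float) -> Trajectory:
    """Committing a path to a (here uniform) timing law yields a trajectory.
    A real time parameteriser would respect velocity and acceleration limits."""
    return lambda t: path(min(max(t / duration, 0.0), 1.0))
```

A policy, by contrast, would be a `Callable[[State], Control]` that must be defined over the whole state space — which is exactly why it is the most expensive object to compute.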

Example. Consider a Franka Panda arm asked to pick up a mug. The objects stack naturally: a task planner emits the symbolic operator “pick mug”; a motion planner turns that operator into a collision-free path through the arm’s seven-dimensional configuration space; a time parameteriser converts the path into a trajectory that respects velocity and acceleration limits; and a tracking controller turns the trajectory and the measured state into joint torques at every control tick.

Each box in this stack has its own literature, its own failure modes, and its own home in this book.

2. The canonical planning query

Almost every planner in this book solves some version of the following problem.

Problem — the motion-planning query

Given:

  1. A configuration space $\mathcal{C}$ and a free subset $\mathcal{C}_{\text{free}} \subseteq \mathcal{C}$ defined by an environment model;
  2. A start configuration $q_s \in \mathcal{C}_{\text{free}}$;
  3. A goal set $\mathcal{G} \subseteq \mathcal{C}_{\text{free}}$ (often a single point $q_g$);
  4. A cost functional $J: \Gamma \to \mathbb{R}_{\ge 0}$ on the space $\Gamma$ of candidate paths;
  5. Optionally, differential constraints $\dot q \in \Omega(q)$ or $\dot x = f(x, u)$;

find a path $\tau: [0,1] \to \mathcal{C}_{\text{free}}$ with $\tau(0) = q_s$, $\tau(1) \in \mathcal{G}$, satisfying the differential constraints, and ideally minimising $J(\tau)$.

The 5-tuple $(\mathcal{C}_{\text{free}}, q_s, \mathcal{G}, J, \text{constraints})$ is the contract a motion planner signs, and variants abound.

A large part of this book is spent relaxing or strengthening components of this tuple and watching the algorithm change in response.
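As a concrete rendering of the contract, the tuple can be written down as a small data structure. Everything here is illustrative — the class and field names are hypothetical, not from OMPL or any other library — but it makes explicit what a planner must be handed before it can run.

```python
import math
from dataclasses import dataclass
from typing import Callable, Optional

Config = tuple[float, ...]

@dataclass
class PlanningQuery:
    """The 5-tuple (C_free, q_s, G, J, constraints) as data. Illustrative only."""
    is_free: Callable[[Config], bool]      # membership test for C_free
    q_start: Config                        # q_s, assumed to lie in C_free
    in_goal: Callable[[Config], bool]      # membership test for the goal set G
    cost: Callable[[list[Config]], float]  # J evaluated on a waypoint path
    edge_valid: Optional[Callable[[Config, Config], bool]] = None  # constraints

def path_length(waypoints: list[Config]) -> float:
    """The most common choice of J: summed Euclidean length."""
    return sum(math.dist(p, q) for p, q in zip(waypoints, waypoints[1:]))
```

Note that $\mathcal{C}_{\text{free}}$ and $\mathcal{G}$ appear only as membership tests — planners rarely get an explicit description of either set.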

3. Four properties a planner can have

Every planner in the next fifteen chapters can be classified by which subset of four properties it delivers. No planner delivers all four; knowing which two or three a method provides is how you pick the right tool.

3.1 Completeness

A planner is complete if, in finite time, it returns a solution whenever one exists and reports failure when none does. This is the strong form — rare in practice.

A planner is resolution complete if it is complete up to a discretisation: it finds a solution whenever one exists at a given resolution, and refining the resolution eventually finds any solution with positive clearance. Grid-based A* is resolution complete.

A planner is probabilistically complete if, given more and more random samples, the probability of returning a solution (when one exists) approaches 1. RRT and PRM are probabilistically complete.
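A minimal RRT sketch makes the guarantee tangible: the planner below can only report failure by exhausting its sample budget, yet the chance it misses an existing solution shrinks toward zero as the budget grows. Everything here is illustrative — in particular, edge collision checking is elided, so the step size must stay below the thinnest obstacle.

```python
import math
import random

def rrt(q_start, in_goal, is_free, sample, step=0.1, max_iters=5000, seed=0):
    """Bare-bones planar RRT: probabilistically complete, no optimality.
    Returns a waypoint path from q_start into the goal set, or None when the
    sample budget runs out (it cannot certify that no solution exists)."""
    rng = random.Random(seed)
    nodes, parent = [q_start], {0: None}
    for _ in range(max_iters):
        q_rand = sample(rng)
        # Nearest node by linear scan (a k-d tree scales better).
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], q_rand))
        d = math.dist(nodes[i], q_rand)
        if d == 0.0:
            continue
        # Steer from the nearest node toward q_rand by at most `step`.
        t = min(step / d, 1.0)
        q_new = tuple(a + t * (b - a) for a, b in zip(nodes[i], q_rand))
        if not is_free(q_new):  # NOTE: only the endpoint is checked here
            continue
        parent[len(nodes)] = i
        nodes.append(q_new)
        if in_goal(q_new):
            path, j = [], len(nodes) - 1
            while j is not None:  # backtrack from the goal node to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

Run against a unit square with a wall that leaves a gap, it threads the gap; close the gap and it returns `None` — but only after spending the entire budget, which is the price of a merely probabilistic guarantee.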

3.2 Optimality

A planner is optimal if it returns a minimum-cost solution. Dijkstra on a finite graph is exactly optimal. A planner is asymptotically optimal if the cost of its returned solution converges to the optimum as the computational budget grows; RRT*, PRM*, and BIT* (developed in optimal sampling-based planning) are asymptotically optimal. Anytime planners additionally return the best-so-far solution at any interruption point.
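The simplest anytime planner is arguably a post-smoother: take any feasible path and repeatedly attempt random shortcuts. Cost never increases, so interrupting it at any iteration yields the best solution found so far. The sketch below is illustrative; `is_free_segment` stands in for whatever edge collision checker the system provides.

```python
import random

def anytime_shortcut(path, is_free_segment, iters=100, seed=0):
    """Randomised shortcutting: cost is monotonically non-increasing, so the
    current value of `best` is a valid answer at every iteration."""
    rng = random.Random(seed)
    best = list(path)
    for _ in range(iters):
        if len(best) < 3:
            break  # nothing left to shortcut
        i, j = sorted(rng.sample(range(len(best)), 2))
        if j - i < 2:
            continue  # adjacent waypoints: no interior nodes to remove
        if is_free_segment(best[i], best[j]):
            best = best[: i + 1] + best[j:]  # splice out the detour
    return best
```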

3.3 Real-time guarantees

Some planners can be stopped at any time and will return a valid (if suboptimal) plan; others must run to completion. Anytime algorithms give the former. Reactive controllers such as artificial potential fields and the dynamic window approach — covered in reactive and local planning — give extreme real-time behaviour at the cost of global reasoning.

3.4 Robustness

Does the plan still work if the model was slightly wrong? If the robot state is noisy? Classical planners produce plans that are valid only for a nominal model; robust planners account explicitly for bounded uncertainty; stochastic planners optimise expected cost under a distribution. Planning under uncertainty explores this axis in depth.

No free lunch

You cannot have a planner that is simultaneously complete, optimal, real-time, and robust, unless the problem is trivial. Each practical method sacrifices at least one of the four. A pragmatic robotics engineer picks which guarantees matter for their domain and deliberately lets the others go.

Here is a small cheat-sheet; the remainder of the book gradually justifies each entry. The method names are unlinked in the table to keep it readable; each is introduced and linked in its home chapter.

| Method | Complete | Optimal | Real-time | Robust |
| --- | --- | --- | --- | --- |
| Dijkstra on a finite graph | ✓ | ✓ (exact) | ✗ | ✗ |
| A* on a grid | resolution | ✓ | ✗ | ✗ |
| Artificial potential fields | ✗ | ✗ | ✓ | ✗ |
| PRM | probabilistic | ✗ | ✗ | ✗ |
| RRT | probabilistic | ✗ | ✗ | ✗ |
| RRT* / BIT* | probabilistic | asymptotic | anytime | ✗ |
| CHOMP / TrajOpt | ✗ (local) | local | ✗ | ✗ |
| MPC (model predictive control) | ✗ | local | ✓ | partially |

The table deliberately contains red ink. A real robot stack blends several rows.

4. Three algorithmic families (plus one)

Almost every planner ever written lives in one of these boxes:

Search-based planning. Build an explicit or implicit graph over discrete states; run a shortest-path algorithm — A*, Dijkstra, D*, ARA*, all developed in discrete planning. The classical “discrete” approach. Works when the graph is small enough to enumerate or when good heuristics prune the search.
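A compact instance of the family — A* on a 4-connected grid, sketched here for illustration — shows the two ingredients at work: an implicit graph (grid neighbours) and an admissible heuristic (Manhattan distance) that prunes the search while preserving optimality at the chosen resolution.

```python
import heapq

def astar_grid(grid, start, goal):
    """A* over a 4-connected occupancy grid given as a list of strings,
    with '#' marking blocked cells. On this finite graph it either returns
    a shortest path (unit step cost, admissible Manhattan heuristic) or
    exhausts the frontier and reports failure."""
    rows, cols = len(grid), len(grid[0])
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0, start)]        # entries are (f = g + h, g, cell)
    g, parent = {start: 0}, {start: None}
    while frontier:
        _, cost, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:          # backtrack to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        if cost > g[cell]:
            continue                         # stale frontier entry
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                if cost + 1 < g.get(nxt, float('inf')):
                    g[nxt] = cost + 1
                    parent[nxt] = cell
                    heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
    return None
```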

Sampling-based planning. Sample configurations from $\mathcal{C}$, connect them with a local planner, and build an increasingly rich graph (PRM) or tree (RRT), with asymptotically-optimal variants (FMT*, BIT*) covered in optimal sampling-based planning. Works in high dimensions where a full grid is intractable.

Optimisation-based planning. Parameterise a trajectory as a decision variable in a nonlinear programme; minimise a smoothness + collision objective. Methods such as CHOMP, STOMP, TrajOpt, and GCS are developed in trajectory optimisation. Gives smooth, locally optimal paths, but depends heavily on initialisation.
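The core move of the family can be shown in a few lines: treat the interior waypoints as decision variables and descend the gradient of a smoothness term plus a hinge-shaped collision penalty. This is a toy, CHOMP-flavoured sketch with made-up weights and a single disc obstacle — not any library's implementation.

```python
import math

def optimise_waypoints(waypoints, obstacle, radius,
                       iters=200, lr=0.05, w_obs=1.0):
    """Gradient descent on J = sum ||p_{k+1} - p_k||^2
    + w_obs * sum 0.5 * (radius - dist_k)^2 over points inside the disc.
    Endpoints stay pinned; only interior 2-D waypoints move."""
    pts = [list(p) for p in waypoints]
    for _ in range(iters):
        for i in range(1, len(pts) - 1):
            dx = pts[i][0] - obstacle[0]
            dy = pts[i][1] - obstacle[1]
            dist = math.hypot(dx, dy)
            for d in range(2):
                # d/dp_i of the smoothness term: 2 (2 p_i - p_{i-1} - p_{i+1})
                grad = 2.0 * (2.0 * pts[i][d] - pts[i - 1][d] - pts[i + 1][d])
                if 0.0 < dist < radius:
                    # d/dp_i of the hinge penalty pushes the point outward
                    grad += w_obs * (dist - radius) * (dx, dy)[d] / dist
                pts[i][d] -= lr * grad
    return [tuple(p) for p in pts]
```

Because the objective is nonconvex, the result depends on where the initial path lies — the initialisation-sensitivity noted above.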

Reactive and local planning sits slightly outside this taxonomy. Artificial potential fields, the dynamic window approach, and velocity obstacles — covered in reactive and local planning — do not plan in the global sense; they react to a local cost landscape. They are essential as the fast inner loop of hybrid systems but cannot solve hard connectivity problems on their own.
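A single step of an artificial potential field controller fits in a dozen lines, which is precisely its appeal and its limitation. The gains and influence radius below are made up for illustration; note that the function can return its input unchanged — a local minimum — which is exactly the connectivity failure described above.

```python
import math

def potential_field_step(q, goal, obstacles,
                         k_att=1.0, k_rep=0.1, d0=0.5, step=0.05):
    """One reactive step: descend the attractive-plus-repulsive field.
    No memory, no plan -- just the local gradient at q."""
    fx = k_att * (goal[0] - q[0])            # attractive force toward the goal
    fy = k_att * (goal[1] - q[1])
    for ox, oy in obstacles:
        dx, dy = q[0] - ox, q[1] - oy
        dist = math.hypot(dx, dy)
        if 0.0 < dist < d0:                  # inside the influence radius
            mag = k_rep * (1.0 / dist - 1.0 / d0) / dist**2
            fx += mag * dx / dist
            fy += mag * dy / dist
    norm = math.hypot(fx, fy)
    if norm == 0.0:
        return q  # goal reached, or stuck in a local minimum
    return (q[0] + step * fx / norm, q[1] + step * fy / norm)
```

Iterated in a loop, it slides around a nearby obstacle and homes in on the goal — until the geometry produces a minimum the field cannot escape.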

A practical modern system typically uses at least two families together: a global sampling-based or search-based planner for topology, a local optimiser for smoothness, and a reactive controller for execution under disturbances. The chapter on software, benchmarks, and reproducibility surveys how OMPL, MoveIt, and Drake arrange these layers in practice.

Reading any planning paper

When you meet a new method, ask: (1) which of the five objects is it computing? (2) what’s its 5-tuple contract? (3) which of the four properties does it claim? (4) which of the three families does it live in? The answers should be in the first two pages of any well-written paper; if they aren’t, read with suspicion.

5. A brief tour of what is not in this book

To keep the scope finite, the book deliberately omits several adjacent topics:

For each of these, pointers to the right external resource are given in the references.

6. The hardness of motion planning

It would be dishonest to finish this chapter without admitting that the general problem is hard.

Theorem — Reif, 1979

The problem of deciding whether a piecewise-linear robot with $d$ degrees of freedom, moving among polyhedral obstacles in $\mathbb{R}^3$, can reach a goal configuration is PSPACE-hard in $d$ [1].

PSPACE-hard means that, unless $\text{P} = \text{PSPACE}$, no polynomial-time algorithm exists. The practical consequence is:

Complete planning does not scale

Any planner that claims to be exactly complete in high dimensions is either very slow, restricted to a special class of robots, or relying on exponential blow-ups hidden in its constants. This is why the rest of the book relies on approximations: grid discretisations (resolution complete), random sampling (probabilistically complete), or local optimisation (no completeness at all, but fast).

The intellectual arc of the next fifteen chapters is precisely the engineering compromise this theorem forces: we trade exactness for scalability, and try to recover as many guarantees as we can along the way.

7. Notation

The following symbols are used consistently throughout the book. Later chapters introduce additional notation as needed.

| Symbol | Meaning |
| --- | --- |
| $q$ | Configuration (point in $\mathcal{C}$) |
| $\mathcal{C}$ | Configuration space |
| $\mathcal{C}_{\text{free}}$, $\mathcal{C}_{\text{obs}}$ | Free and obstacle subsets of $\mathcal{C}$ |
| $x$ | State (configuration together with velocity) |
| $\mathcal{X}$ | State space |
| $u \in \mathcal{U}$ | Control input |
| $\tau$ | Path or trajectory (context-dependent) |
| $\pi$ | Policy |
| $q_s, q_g$ | Start and goal configurations |
| $\mathcal{G}$ | Goal set |
| $J(\tau)$ | Cost functional on paths/trajectories |
| $d(q_1, q_2)$ | Distance metric on $\mathcal{C}$ |
| $f(x, u)$ | Dynamics function $\dot x = f(x, u)$ |
| $\eta$ | Step size (context-dependent) |
| $n, N$ | Number of samples, tree/roadmap size |

8. Worked example: a Roomba, a warehouse robot, and a surgical needle

To cement the 5-tuple and the family taxonomy, we identify them explicitly for three canonical robots.

8.1 Roomba (autonomous vacuum)

8.2 Warehouse fleet (100 robots moving pallets)

8.3 Steerable needle for prostate brachytherapy

Each example is revisited at least once in later chapters; together they span the three hardest dimensions of the field — high DoF, many agents, and uncertainty.

References

  1. J. H. Reif, “Complexity of the mover’s problem and generalizations,” in Proceedings of the 20th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 1979, pp. 421–427.
Cite this note
@online{zandonati2026motion,
  author    = {Ben Zandonati},
  title     = {What is Motion Planning?},
  year      = {2026},
  month     = {apr},
  url       = {https://benzandonati.co.uk/notes/mp_notes/mp-00-what-motion-planning-is/},
  urldate   = {2026-05-10},
  note      = {Online; accessed 2026-05-10},
  keywords  = {motion-planning, robotics, introduction, taxonomy},
}

