
Robot Kinematics

Robot kinematics is the mathematical foundation of every robot in the world — from a simple two-joint planar arm to a 7-DOF collaborative robot performing sub-millimetre surgery. It answers two fundamental questions: given joint angles, where is the end-effector in space (forward kinematics)? And given a desired position in space, what joint angles produce it (inverse kinematics)? This guide takes you from the geometry of a single joint through homogeneous transforms, Denavit-Hartenberg parameters, Jacobian velocity control, singularities, and trajectory planning — from absolute beginner to engineering-grade understanding.


📐 Foundations: Geometry, Frames & Transforms

Before a robot can move, we must establish a mathematical language for describing positions and orientations in 3D space. This is not abstract mathematics — every number in a robot program ultimately refers to a coordinate frame, a position vector, or a rotation matrix. Understanding these foundations makes every robot concept that follows completely logical rather than mysterious.

📐 Coordinate Frame Explorer — Rotation & Translation

Drag the sliders to move and rotate the child frame (orange) relative to the world frame (white). See how the same point looks different in each frame — this is the core of all robot kinematics.

🔄 3D Rotation: Rx · Ry · Rz — Euler Angle Composition

Apply Roll, Pitch, Yaw rotations to the unit vectors. Observe how the order of rotations matters — this is non-commutativity. The coloured axes always remain orthogonal (rotation matrix property).
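The non-commutativity the widget demonstrates can be reproduced in a few lines of NumPy (rotation helpers defined inline; 90° rotations chosen for clarity):

```python
import numpy as np

def Rx(deg):
    t = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def Rz(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

# Same two rotations, opposite order, different results
A = Rx(90) @ Rz(90)   # applied to a vector: Rz first, then Rx
B = Rz(90) @ Rx(90)   # applied to a vector: Rx first, then Rz
v = np.array([1.0, 0.0, 0.0])

print(np.round(A @ v))          # [0. 0. 1.]
print(np.round(B @ v))          # [0. 1. 0.]
print("Commute?", np.allclose(A, B))   # False
```

The X-axis ends up pointing along Z in one order and along Y in the other: rotation order is part of the orientation specification, not a detail.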

Coordinate Frames and the Right-Hand Rule

A coordinate frame (or reference frame) is a set of three mutually perpendicular axes — X, Y, and Z — anchored to a specific point in space (the origin). Every measurement in robotics is made relative to a frame. The right-hand rule: point your right-hand fingers along X, curl them toward Y — your thumb points along Z. This is universal across all robot manufacturers. In industrial robotics, the base frame (often treated as the world frame) is fixed to the robot base. The tool frame is fixed to the end-effector. The work object frame is fixed to the part being processed. A robot's job is to move the tool frame to desired positions and orientations relative to the work object frame — everything else is mathematics to accomplish this.

Foundations: Geometry, Frames & Transforms
// Coordinate frame fundamentals
//
// A frame is defined by:
//   - An origin point (x, y, z position in some parent frame)
//   - Three unit vectors (x-axis, y-axis, z-axis directions)
//
// Representation as 4×4 homogeneous transformation matrix:
//
//     [ R  | p ]   R = 3×3 rotation matrix (orientation)
// T = [----+---]   p = 3×1 position vector
//     [ 0  | 1 ]   0 = 1×3 zero row
//
// Example: frame rotated 30° around Z, translated [100, 50, 200] mm
//
//     [ cos30  -sin30  0  | 100 ]   [ 0.866  -0.500  0  | 100 ]
// T = [ sin30   cos30  0  |  50 ] = [ 0.500   0.866  0  |  50 ]
//     [   0       0    1  | 200 ]   [   0       0    1  | 200 ]
//     [   0       0    0  |   1 ]   [   0       0    0  |    1]
//
// To find a point P expressed in frame A:
//   P_world = T_world_A × P_A
//
// To chain multiple frames:
//   T_world_tool = T_world_base × T_base_link1 × T_link1_link2 × ... × T_flange_tool
//
// Key identity: T_A_B × T_B_A = Identity  (inverse transform)
// T_B_A = T_A_B^(-1)
// For rotation-only: T^(-1) = T^T (transpose)
// For general: T^(-1) = [R^T | -R^T·p]
//                        [ 0  |    1   ]

Rotation Matrices: What They Are and How to Use Them

A rotation matrix R is a 3×3 matrix that rotates a vector from one frame into another. It has three beautiful properties: (1) every column is a unit vector (|col| = 1), (2) all columns are orthogonal to each other (colᵢ · colⱼ = 0 for i≠j), (3) det(R) = +1 (not -1, which would be a reflection). These properties mean R has only 3 independent parameters — the 3 degrees of freedom of rotation — even though it contains 9 numbers. The three elementary rotations around X, Y, and Z axes are the building blocks. Any orientation in space can be reached by composing these three. The order of composition matters: rotations are not commutative (R₁R₂ ≠ R₂R₁ in general). This surprises beginners but becomes intuitive with practice.

Foundations: Geometry, Frames & Transforms
// Elementary rotation matrices
import numpy as np

def Rx(theta_deg):
    """Rotation by theta degrees around X axis"""
    t = np.radians(theta_deg)
    return np.array([
        [1,          0,           0],
        [0,  np.cos(t), -np.sin(t)],
        [0,  np.sin(t),  np.cos(t)]
    ])

def Ry(theta_deg):
    """Rotation by theta degrees around Y axis"""
    t = np.radians(theta_deg)
    return np.array([
        [ np.cos(t), 0, np.sin(t)],
        [         0, 1,         0],
        [-np.sin(t), 0, np.cos(t)]
    ])

def Rz(theta_deg):
    """Rotation by theta degrees around Z axis"""
    t = np.radians(theta_deg)
    return np.array([
        [np.cos(t), -np.sin(t), 0],
        [np.sin(t),  np.cos(t), 0],
        [        0,          0, 1]
    ])

# Example: ZYX Euler angles (Roll-Pitch-Yaw)
# Roll=10°, Pitch=20°, Yaw=30°
# Intrinsic ZYX: yaw about Z, then pitch about the new Y, then roll about
# the new X (equivalently: fixed-axis rotations X first, then Y, then Z)
R_total = Rz(30) @ Ry(20) @ Rx(10)
print("Combined rotation matrix:")
print(np.round(R_total, 4))

# To rotate a vector:
v = np.array([1, 0, 0])  # X-axis unit vector
v_rotated = R_total @ v
print(f"Rotated vector: {np.round(v_rotated, 4)}")

# Verify orthonormality:
print(f"det(R) = {np.linalg.det(R_total):.6f}")  # Must be +1.0
print(f"R @ R.T = I? {np.allclose(R_total @ R_total.T, np.eye(3))}")

Homogeneous Transforms: The Universal Robot Language

The 4×4 homogeneous transformation matrix is the single most important mathematical object in robotics. It combines rotation (3×3 top-left) and translation (3×1 top-right) into one matrix, enabling a single matrix multiplication to both rotate and translate a point simultaneously. This elegance is why it is universally used. Chaining transforms: to express frame C relative to frame A, knowing frame B relative to A and frame C relative to B: T_A_C = T_A_B × T_B_C. This chain is the entire mathematical basis of forward kinematics — each joint adds one transform, and multiplying them all together gives the end-effector position and orientation relative to the base. The inverse transform lets you go the other direction: T_B_A = T_A_B⁻¹.

Foundations: Geometry, Frames & Transforms
// Homogeneous transforms — complete Python library
import numpy as np

def make_transform(R, p):
    """Build 4x4 T from 3x3 R and 3x1 position p"""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3,  3] = p
    return T

def T_from_RPY_xyz(roll, pitch, yaw, x, y, z):
    """Build transform from Roll-Pitch-Yaw (degrees) + translation (mm)"""
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll)  # ZYX convention
    return make_transform(R, [x, y, z])

def transform_point(T, point):
    """Transform a 3D point by a 4x4 matrix"""
    p_h = np.array([*point, 1.0])       # Homogeneous coordinates
    return (T @ p_h)[:3]               # Back to 3D

def invert_transform(T):
    """Efficient inverse of a rigid-body transform"""
    R = T[:3, :3]
    p = T[:3,  3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3,  3] = -R.T @ p
    return T_inv

# === REAL EXAMPLE: 2-link robot arm ===
# Robot base at world origin, facing +X
# Joint 1 rotates around Z (vertical axis), θ1 = 45°
# Link 1 length = 300mm along X after rotation
# Joint 2 also rotates around Z (elbow), θ2 = -30°
# Link 2 length = 200mm

L1, L2 = 300, 200
th1, th2 = 45, -30  # degrees

# Transform: base → joint 1
T_base_j1 = make_transform(Rz(th1), [0, 0, 0])

# Transform: joint 1 → joint 2 (link 1 along X)
T_j1_j2   = make_transform(Rz(th2), [L1, 0, 0])

# Transform: joint 2 → end-effector (link 2 along X)
T_j2_ee   = make_transform(np.eye(3), [L2, 0, 0])

# Full forward kinematics
T_ee = T_base_j1 @ T_j1_j2 @ T_j2_ee

print("End-effector position:")
print(f"  X = {T_ee[0,3]:.1f} mm")
print(f"  Y = {T_ee[1,3]:.1f} mm")
print(f"  Z = {T_ee[2,3]:.1f} mm")
print(f"End-effector orientation (rotation matrix):")
print(np.round(T_ee[:3,:3], 3))
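The inverse identity T_B_A = T_A_B⁻¹ stated above can be checked numerically; a minimal self-contained sketch with the helpers re-declared so it runs on its own:

```python
import numpy as np

def rz(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

def make_transform(R, p):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def invert_transform(T):
    # Rigid-body inverse: [R^T | -R^T·p]
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

# Frame B in A: rotated 30° about Z, translated [100, 50, 200] mm
T_A_B = make_transform(rz(30), [100, 50, 200])
T_B_A = invert_transform(T_A_B)

# Round trip must give the identity
print(np.allclose(T_A_B @ T_B_A, np.eye(4)))   # True

# B's origin, expressed in A, maps back to zero in B
p_A = T_A_B @ np.array([0, 0, 0, 1])
print(np.round((T_B_A @ p_A)[:3], 6))          # [0. 0. 0.]
```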

🦾 Forward Kinematics — From Joints to Position

Forward kinematics (FK) answers: "given the angle of every joint, where is the end-effector?" It is always solvable — there is exactly one answer for a given set of joint angles. For any serial robot, FK is a chain of transforms, one per joint, multiplied together. The Denavit-Hartenberg convention provides a systematic, unambiguous way to assign coordinate frames and write those transforms. FK is computed hundreds or thousands of times per second inside a robot controller.

🦾 Interactive 3-DOF Planar Robot — Forward Kinematics

Drag the joint angle sliders to move the robot. Watch the end-effector trace a path. Observe how each joint's contribution compounds through the kinematic chain. Try to reach the target point (red dot).

📋 DH Parameter Table — Standard Industrial Robots
Robot                    DOF   Joint 1 α   Joint 2 α   Reach      Payload   IK solutions
PUMA-560 (classic)        6     -90°          —         650 mm     2.5 kg    8
KUKA KR 6 R700            6     -90°          —         706 mm     6 kg      8
ABB IRB 1200              6     -90°          —         901 mm     7 kg      8
Universal Robots UR5e     6      90°          —         850 mm     5 kg      16*
FANUC M-20iD              6     -90°          —        1813 mm    20 kg      8
KUKA LBR iiwa 7           7     -90°         90°        800 mm     7 kg      ∞ (redundant)

* UR robots have non-spherical wrist geometry — more IK solutions possible.

Denavit-Hartenberg Convention — The Standard Method

The DH convention (1955) defines a systematic way to attach a coordinate frame to each link of a robot. Following four rules: (1) Z-axis along the joint axis of rotation/translation. (2) X-axis along the common normal between consecutive Z-axes. (3) Y-axis follows right-hand rule. (4) Origin at intersection of X and Z axes. Once frames are assigned, exactly 4 parameters describe the transform between each consecutive pair: θ (joint angle around Zᵢ₋₁), d (distance along Zᵢ₋₁), a (distance along Xᵢ — the link length), α (angle between Zᵢ₋₁ and Zᵢ around Xᵢ — the link twist). For revolute joints, θ is variable (it is the joint angle). For prismatic joints, d is variable. All other parameters are fixed geometry of the robot.

Forward Kinematics — From Joints to Position
// DH Transform Matrix
// For each joint i with parameters (θ, d, a, α):
//
//       [cos θ   -sin θ·cos α    sin θ·sin α    a·cos θ]
// T_i = [sin θ    cos θ·cos α   -cos θ·sin α    a·sin θ]
//       [  0         sin α          cos α          d   ]
//       [  0           0              0            1   ]
//
import numpy as np

def dh_transform(theta_deg, d, a, alpha_deg):
    """
    Standard Denavit-Hartenberg transform matrix.
    theta_deg: joint angle (degrees) — variable for revolute joints
    d:         link offset (mm)      — variable for prismatic joints
    a:         link length (mm)      — constant
    alpha_deg: link twist (degrees)  — constant
    """
    t = np.radians(theta_deg)
    al = np.radians(alpha_deg)
    ct, st = np.cos(t), np.sin(t)
    ca, sa = np.cos(al), np.sin(al)
    return np.array([
        [ct, -st*ca,  st*sa, a*ct],
        [st,  ct*ca, -ct*sa, a*st],
        [ 0,     sa,     ca,    d],
        [ 0,      0,      0,    1]
    ])

# ─────────────────────────────────────────
# EXAMPLE: PUMA-560 style 6-DOF robot
# DH parameters as (theta_offset, d, a, alpha)
# ─────────────────────────────────────────
#     θ_off(°)  d(mm)  a(mm)   α(°)
dh_params = [
    # Joint 1: base rotation (vertical Z)
    (0,    0,      0,   -90),
    # Joint 2: shoulder (horizontal, link along X)
    (0,    0,    431,     0),
    # Joint 3: elbow
    (0,    0,   -20,    90),
    # Joint 4: wrist roll
    (0,  433,     0,   -90),
    # Joint 5: wrist pitch
    (0,    0,     0,    90),
    # Joint 6: wrist roll (tool)
    (0,    0,     0,     0),
]

def forward_kinematics(joint_angles_deg, dh_params):
    """
    Compute FK for a serial robot.
    joint_angles_deg: list of joint angles in degrees
    dh_params: list of (d, a, alpha) — theta is added from joint_angles
    """
    T = np.eye(4)  # Start at base frame
    for i, (theta_off, d, a, alpha) in enumerate(dh_params):
        theta = joint_angles_deg[i] + theta_off
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Example: all joints at home position (0°)
home = [0, 0, 0, 0, 0, 0]
T_home = forward_kinematics(home, dh_params)
print(f"Home position TCP:")
print(f"  X={T_home[0,3]:.1f}, Y={T_home[1,3]:.1f}, Z={T_home[2,3]:.1f} mm")

Building FK for a 3-DOF Planar Robot Step by Step

A 3-DOF planar robot (three revolute joints, all Z-axes parallel and pointing up) is the perfect teaching example. All motion stays in the XY plane, so the Z coordinate is always zero and orientation is just a single angle — the sum of all joint angles. Despite this simplicity, it captures every concept of forward kinematics. The end-effector position is the sum of each link's contribution: X = L₁cos(θ₁) + L₂cos(θ₁+θ₂) + L₃cos(θ₁+θ₂+θ₃). Physically: each term is the projection of one link onto the X-axis, taking into account all the rotations that came before it. The angle (θ₁+θ₂+θ₃) is the cumulative rotation — the absolute angle of the last link in the world frame.

Forward Kinematics — From Joints to Position
# Forward kinematics: 3-DOF planar robot
import numpy as np

def fk_3dof_planar(L1, L2, L3, th1_deg, th2_deg, th3_deg):
    """
    Closed-form FK for 3-DOF planar robot.
    All angles in degrees, link lengths in mm.
    Returns (x, y, phi) where phi = end-effector orientation in world frame.
    """
    th1 = np.radians(th1_deg)
    th2 = np.radians(th2_deg)
    th3 = np.radians(th3_deg)

    # Cumulative angles
    a12 = th1 + th2           # absolute angle after joint 2
    a123 = th1 + th2 + th3   # absolute angle after joint 3

    # End-effector position
    x = L1*np.cos(th1) + L2*np.cos(a12) + L3*np.cos(a123)
    y = L1*np.sin(th1) + L2*np.sin(a12) + L3*np.sin(a123)

    # End-effector orientation (angle of last link in world frame)
    phi = np.degrees(a123)

    # Intermediate joint positions (for visualisation)
    j1_x = L1*np.cos(th1)
    j1_y = L1*np.sin(th1)
    j2_x = j1_x + L2*np.cos(a12)
    j2_y = j1_y + L2*np.sin(a12)

    return x, y, phi, (j1_x, j1_y), (j2_x, j2_y)

# ── Test cases ──
L1, L2, L3 = 300, 200, 100  # mm

configs = [
    (0, 0, 0,       'Fully extended'),
    (0, 90, 0,      'Elbow up 90°'),
    (45, -45, 0,    'θ1=45°, θ2=-45°'),
    (90, 90, 90,    'Triple 90° — pointing back'),
    (30, 45, -30,   'Typical working pose'),
]

for th1, th2, th3, name in configs:
    x, y, phi, j1, j2 = fk_3dof_planar(L1, L2, L3, th1, th2, th3)
    reach = np.sqrt(x**2 + y**2)
    print(f"{name}:")
    print(f"  TCP: ({x:.1f}, {y:.1f}) mm, φ={phi:.1f}°")
    print(f"  Distance from base: {reach:.1f} mm (max={L1+L2+L3} mm)")

Workspace Analysis and Reachability

The workspace of a robot is the set of all positions its end-effector can reach. For a 2-DOF planar robot with links L₁ and L₂, the reachable workspace is an annulus (ring): outer radius = L₁ + L₂ (both joints fully extended), inner radius = |L₁ - L₂| (one joint folded completely back). Joint angle limits cut away portions of this ring — if joint 2 can only go from -150° to +150°, the inner exclusion zone changes. For 3D robots (6-DOF), the workspace becomes a complex 3D shape. Dexterous workspace — where every orientation is achievable — is much smaller. Boundary singularities occur at the outer and inner workspace boundaries (arm fully extended or fully folded). These are the configurations to avoid in motion planning.

Forward Kinematics — From Joints to Position
# Workspace boundary analysis — 2-DOF planar robot
import numpy as np
import matplotlib.pyplot as plt

L1, L2 = 300, 200  # mm

# Joint limits
th1_min, th1_max = -170, 170  # degrees
th2_min, th2_max = -150, 150  # degrees

# Sample configurations and compute FK
reachable_pts = []
for th1 in np.linspace(th1_min, th1_max, 180):
    for th2 in np.linspace(th2_min, th2_max, 180):
        th1_r = np.radians(th1)
        th2_r = np.radians(th2)
        a12 = th1_r + th2_r
        x = L1*np.cos(th1_r) + L2*np.cos(a12)
        y = L1*np.sin(th1_r) + L2*np.sin(a12)
        reachable_pts.append((x, y))

# Plot the workspace
px = [p[0] for p in reachable_pts]
py = [p[1] for p in reachable_pts]

plt.figure(figsize=(8, 8))
plt.scatter(px, py, s=0.5, alpha=0.3, c='#F97316')
plt.axis('equal')
plt.grid(True, alpha=0.3)
plt.title('2-DOF Planar Robot Workspace')
plt.xlabel('X (mm)')
plt.ylabel('Y (mm)')

# Theoretical boundaries
theta = np.linspace(0, 2*np.pi, 360)
plt.plot((L1+L2)*np.cos(theta), (L1+L2)*np.sin(theta), 'k--', label=f'Outer: {L1+L2}mm')
plt.plot(abs(L1-L2)*np.cos(theta), abs(L1-L2)*np.sin(theta), 'r--', label=f'Inner: {abs(L1-L2)}mm')
plt.legend()
plt.savefig('workspace.png', dpi=150, bbox_inches='tight')

# Key workspace metrics:
print(f"Max reach: {L1+L2} mm")
print(f"Min reach (no joint limits): {abs(L1-L2)} mm")
print(f"Sampled configurations: {len(reachable_pts)}")

🔄 Inverse Kinematics — From Position to Joints

Inverse kinematics (IK) answers: "given a desired position (and orientation) for the end-effector, what joint angles must I set?" Unlike forward kinematics, IK may have zero, one, or many solutions. It is the hard problem of robotics. Every time a robot moves to a point you commanded — whether in a teach pendant, a CAM program, or a real-time vision-guided pick — the controller is solving IK. Understanding IK explains why robots sometimes cannot reach a point, why they take unexpected paths, and how to design robot workcells for maximum reachability.

🔄 2-DOF Inverse Kinematics — Click to Set Target

Click anywhere in the canvas to set the target position. The IK solver computes both elbow-up and elbow-down solutions instantly. Observe how both solutions are geometrically valid. Try clicking outside the reachable workspace.


Geometric IK: 2-DOF Planar Robot (The Elbow Problem)

The 2-DOF planar robot IK is solved geometrically and teaches the fundamental concepts of all IK. Given a target position (Px, Py), we need joint angles θ₁ and θ₂. The distance from base to target is r = √(Px² + Py²). This distance must be between |L₁-L₂| and L₁+L₂ — otherwise no solution exists (target out of workspace). The law of cosines gives the elbow angle directly: cos(θ₂) = (r² - L₁² - L₂²) / (2·L₁·L₂). Since cos(θ₂) = cos(-θ₂), taking θ₂ = ±arccos of this ratio gives two solutions: "elbow up" and "elbow down" — the classic dual solution of all articulated robots. Then θ₁ is found using atan2. This two-solution geometry persists in 6-DOF robots as the "shoulder" and "elbow" configuration choices.

Inverse Kinematics — From Position to Joints
# Inverse kinematics: 2-DOF planar robot
import numpy as np

def ik_2dof_planar(Px, Py, L1, L2):
    """
    Closed-form IK for 2-DOF planar robot.
    Returns (th1_up, th2_up), (th1_down, th2_down) or None if out of reach.
    """
    r2 = Px**2 + Py**2
    r  = np.sqrt(r2)

    # --- Reachability check ---
    if r > L1 + L2:
        print(f"UNREACHABLE: target {r:.1f}mm exceeds max reach {L1+L2}mm")
        return None, None
    if r < abs(L1 - L2):
        print(f"UNREACHABLE: target {r:.1f}mm inside minimum reach {abs(L1-L2)}mm")
        return None, None

    # --- Elbow angle from law of cosines ---
    # cos(θ2) = (r² - L1² - L2²) / (2·L1·L2)
    cos_th2 = (r2 - L1**2 - L2**2) / (2 * L1 * L2)
    cos_th2 = np.clip(cos_th2, -1, 1)  # Numerical safety

    # Two solutions: + = elbow down, - = elbow up
    th2_down = np.arccos(cos_th2)
    th2_up   = -np.arccos(cos_th2)

    # --- Shoulder angle using atan2 ---
    # th1 = atan2(Py, Px) - atan2(L2·sin(th2), L1 + L2·cos(th2))
    def solve_th1(th2):
        beta = np.arctan2(L2 * np.sin(th2), L1 + L2 * np.cos(th2))
        return np.arctan2(Py, Px) - beta

    th1_down = solve_th1(th2_down)
    th1_up   = solve_th1(th2_up)

    return (
        (np.degrees(th1_down), np.degrees(th2_down)),
        (np.degrees(th1_up),   np.degrees(th2_up))
    )

# Example
L1, L2 = 300, 200
Px, Py = 250, 300

sol_down, sol_up = ik_2dof_planar(Px, Py, L1, L2)

if sol_down:
    th1_d, th2_d = sol_down
    th1_u, th2_u = sol_up
    print(f"Target: ({Px}, {Py}) mm")
    print(f"Elbow DOWN: θ1={th1_d:.2f}°, θ2={th2_d:.2f}°")
    print(f"Elbow UP:   θ1={th1_u:.2f}°, θ2={th2_u:.2f}°")

    # Verify by FK
    for th1, th2 in [sol_down, sol_up]:
        th1_r, th2_r = np.radians(th1), np.radians(th2)
        x = L1*np.cos(th1_r) + L2*np.cos(th1_r+th2_r)
        y = L1*np.sin(th1_r) + L2*np.sin(th1_r+th2_r)
        err = np.sqrt((x-Px)**2 + (y-Py)**2)
        print(f"  FK verify: ({x:.3f}, {y:.3f}) error={err:.6f}mm")

Analytical IK for 6-DOF Robots (Pieper's Method)

For 6-DOF robots, the IK problem involves 6 equations (position + orientation) with 6 unknowns (joint angles). Analytical (closed-form) solutions exist for robots where the last 3 joint axes intersect at a common point — the "wrist centre". This is the standard spherical wrist design used in almost all industrial robots (KUKA, ABB, FANUC, Yaskawa). Pieper's method (1968) decouples the position and orientation problems: (1) Find the wrist centre position from the desired TCP pose. (2) Solve the first 3 joints to position the wrist centre (geometric or algebraic). (3) Solve the last 3 joints to orient the end-effector (spherical wrist Euler angle solution). Each step yields 2 solutions, giving up to 2×2×2 = 8 total solutions.

Inverse Kinematics — From Position to Joints
# Analytical IK — 6-DOF robot with spherical wrist
# Conceptual implementation (simplified PUMA-type)
import numpy as np

def ik_6dof_spherical_wrist(T_desired, L1, L2, d4, d6, config='ULN'):
    """
    6-DOF IK with spherical wrist (Pieper's method).
    T_desired: 4x4 target TCP transform
    L1, L2: upper arm / forearm lengths
    d4: wrist offset along z
    d6: tool length from wrist to TCP
    config: 'ULN' = Up/Left/No-flip, 'DRF' = Down/Right/Flip
    Returns list of (th1..th6) solutions in degrees.
    """
    R = T_desired[:3, :3]
    p = T_desired[:3,  3]

    # ── STEP 1: Wrist centre ──────────────────────────────
    # Subtract tool offset along end-effector Z-axis
    # Wrist centre = TCP position - d6 × (last column of R)
    z_ee = R[:, 2]  # End-effector Z-axis in world frame
    p_wc = p - d6 * z_ee

    # ── STEP 2: Solve θ1 from wrist centre XY ───────────
    # Shoulder rotation: atan2(Y_wc, X_wc)
    th1_A = np.arctan2(p_wc[1], p_wc[0])
    th1_B = th1_A + np.pi  # Second shoulder solution

    solutions = []
    for th1 in [th1_A, th1_B]:
        # Wrist centre in the arm's vertical plane (radial distance + height)
        r_wc = np.sqrt(p_wc[0]**2 + p_wc[1]**2)
        z_wc = p_wc[2]

        # ── STEP 3: Solve θ2, θ3 from wrist position ───
        d = np.sqrt(r_wc**2 + z_wc**2)
        if d > L1 + L2 or d < abs(L1 - L2):
            continue  # Wrist unreachable

        cos_th3 = (d**2 - L1**2 - L2**2) / (2 * L1 * L2)
        cos_th3 = np.clip(cos_th3, -1, 1)

        for elbow_sign in [1, -1]:  # Up / Down
            th3 = elbow_sign * np.arccos(cos_th3)
            alpha = np.arctan2(z_wc, r_wc)
            beta  = np.arctan2(L2*np.sin(th3), L1 + L2*np.cos(th3))
            th2   = alpha - beta

            # ── STEP 4: Solve θ4, θ5, θ6 from orientation ─
            # R_03 = Rz(th1)·Ry(th2)·Ry(th3)  [simplified]
            # R_36 = R_03^T · R_desired
            # Extract ZYZ Euler angles from R_36 for th4, th5, th6
            # (implementation depends on specific robot DH)
            # Placeholder — robot-specific
            th4, th5, th6 = 0, 0, 0  # Must solve per robot

            solutions.append(
                tuple(np.degrees([th1, th2, th3, th4, th5, th6]))
            )

    return solutions

# Note: Real implementations use robot-specific DH tables.
# Industry libraries: RoboDK, KUKA.Sim, ABB RobotStudio,
#                    PyKDL, ikfast (OpenRAVE), TracIK (ROS)

Numerical IK: Jacobian Inverse and Damped Least Squares

When analytical IK is not available (redundant robots with more than 6-DOF, or non-standard geometry), numerical methods iteratively find joint angles that minimise the error between current and desired TCP pose. The Jacobian transpose method and pseudoinverse method are the foundations. At each iteration: compute current FK, compute Cartesian error, multiply by J⁺ (pseudoinverse) to get joint correction Δq, update joints q = q + α·Δq, repeat until error < tolerance. Convergence is not guaranteed near singularities. The damped least squares (DLS) method adds a damping factor λ: J⁺ = Jᵀ(JJᵀ + λ²I)⁻¹ — this prevents joint velocity explosion near singularities at the cost of slightly non-exact positioning. λ is often made variable: large near singularities, near-zero otherwise.

Inverse Kinematics — From Position to Joints
# Numerical IK — Jacobian-based iterative solver
import numpy as np

def numerical_ik(
    fk_func,       # Forward kinematics function: angles → T (4×4)
    jacobian_func, # Jacobian function: angles → J (6×n)
    target_T,      # Desired 4×4 end-effector transform
    q_init,        # Initial joint angles (degrees)
    max_iter=100,
    alpha=0.5,     # Step size
    lambda_dls=0.01,   # Damping factor
    tol_pos=0.1,   # Position tolerance (mm)
    tol_ori=0.01   # Orientation tolerance (rad)
):
    """
    Damped Least Squares (Levenberg-Marquardt) IK solver.
    Works for any robot with known FK and Jacobian.
    """
    q = np.array(q_init, dtype=float)
    target_p = target_T[:3, 3]
    target_R = target_T[:3, :3]

    for iteration in range(max_iter):
        # Current FK
        T_curr = fk_func(q)
        curr_p = T_curr[:3, 3]
        curr_R = T_curr[:3, :3]

        # Position error
        ep = target_p - curr_p  # 3×1

        # Orientation error (axis-angle representation)
        R_err = target_R @ curr_R.T
        # Skew-symmetric part gives rotation vector
        eo = 0.5 * np.array([
            R_err[2,1] - R_err[1,2],
            R_err[0,2] - R_err[2,0],
            R_err[1,0] - R_err[0,1]
        ])

        # Combined error vector [pos_err; ori_err]
        e = np.concatenate([ep, eo])  # 6×1

        # Convergence check
        if np.linalg.norm(ep) < tol_pos and np.linalg.norm(eo) < tol_ori:
            print(f"Converged at iteration {iteration}")
            print(f"  Position error: {np.linalg.norm(ep):.4f} mm")
            print(f"  Orientation error: {np.linalg.norm(eo):.6f} rad")
            return q, True

        # Jacobian at current configuration
        J = jacobian_func(q)  # 6×n

        # Damped Least Squares pseudo-inverse
        # J+ = J^T (J J^T + λ²I)^(-1)
        n = J.shape[1]
        JJT = J @ J.T
        J_dls = J.T @ np.linalg.inv(JJT + lambda_dls**2 * np.eye(6))

        # Joint angle update
        dq = alpha * J_dls @ e
        q = q + dq

        if iteration % 10 == 0:
            print(f"Iter {iteration:3d}: pos_err={np.linalg.norm(ep):.2f}mm")

    print(f"WARNING: Did not converge after {max_iter} iterations")
    return q, False
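The solver above needs an FK and a Jacobian supplied by the caller; as a self-contained sanity check, here is the same damped least squares update specialised to the 2-DOF planar arm (position only, so the error and Jacobian are 2-dimensional; link lengths and target are illustrative):

```python
import numpy as np

L1, L2 = 300.0, 200.0  # link lengths (mm)

def fk(q):
    """Planar FK: joint angles (rad) -> TCP (x, y) in mm."""
    a12 = q[0] + q[1]
    return np.array([L1*np.cos(q[0]) + L2*np.cos(a12),
                     L1*np.sin(q[0]) + L2*np.sin(a12)])

def jac(q):
    """2x2 translational Jacobian of the planar arm."""
    a12 = q[0] + q[1]
    return np.array([[-L1*np.sin(q[0]) - L2*np.sin(a12), -L2*np.sin(a12)],
                     [ L1*np.cos(q[0]) + L2*np.cos(a12),  L2*np.cos(a12)]])

target = np.array([250.0, 300.0])   # reachable: 100 < |target| < 500 mm
q = np.radians([10.0, 50.0])        # initial guess
lam, alpha = 1.0, 0.5               # damping (mm scale), step size

for i in range(400):
    e = target - fk(q)
    if np.linalg.norm(e) < 1e-3:
        break
    J = jac(q)
    # DLS update: dq = J^T (J J^T + lam^2 I)^-1 e
    dq = J.T @ np.linalg.inv(J @ J.T + lam**2 * np.eye(2)) @ e
    q = q + alpha * dq

print(f"q = {np.degrees(q)} deg, residual = {np.linalg.norm(target - fk(q)):.2e} mm")
```

With millimetre-scale Jacobian entries, λ = 1 is negligible and the iteration behaves like a damped Newton method; nearer a singularity a larger λ trades exactness for stability, exactly as described above.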

📊 The Jacobian Matrix — Velocity, Force & Singularities

The Jacobian is the matrix that maps joint velocities to end-effector velocities. It is the velocity-level equivalent of forward kinematics: ẋ = J(q)·q̇. But the Jacobian is far more than a velocity tool — it reveals singularities (configurations where the robot loses a degree of freedom), maps forces between joint space and task space, enables real-time motion control, and is the foundation of all modern impedance and force control. Understanding the Jacobian deeply separates a robotics engineer from someone who just programs waypoints.
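Both roles can be checked numerically on the 2-DOF planar arm: ẋ = J(q)·q̇ maps joint rates to TCP velocity, and the transpose maps a TCP force back to joint torques (τ = Jᵀ·F, the static force duality). A minimal sketch with illustrative link lengths and loads:

```python
import numpy as np

L1, L2 = 300.0, 200.0  # mm
th1, th2 = np.radians(30), np.radians(60)
a12 = th1 + th2

# 2-DOF planar Jacobian (translational part)
J = np.array([
    [-L1*np.sin(th1) - L2*np.sin(a12), -L2*np.sin(a12)],
    [ L1*np.cos(th1) + L2*np.cos(a12),  L2*np.cos(a12)]
])

# Velocity map: joint rates (rad/s) -> TCP velocity (mm/s)
q_dot = np.array([0.1, -0.05])
x_dot = J @ q_dot
print("TCP velocity [vx, vy] mm/s:", np.round(x_dot, 2))

# Force duality: TCP force (N) -> static joint torques (N·mm)
F = np.array([0.0, 10.0])   # 10 N pushing in +Y at the TCP
tau = J.T @ F
print("Joint torques [tau1, tau2] N·mm:", np.round(tau, 1))
```

In this pose link 2 happens to be vertical (θ1+θ2 = 90°), so a pure +Y force produces zero torque at the elbow: the geometry of each Jacobian column directly determines how loads are felt at each joint.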

📊 Jacobian & Manipulability Ellipse — Singularity Explorer

The manipulability ellipse shows how easily the end-effector can move in each direction. A large, circular ellipse = well-conditioned (far from singularity). A flat, needle-shaped ellipse = near singularity. The condition number is the ratio of the largest to smallest semi-axis. Try stretching the arm fully (θ₂→0° or ±180°) to see the singularity collapse.

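The ellipse itself comes straight from the SVD of the Jacobian: the singular vectors give the axis directions and the singular values the semi-axis lengths. A sketch comparing a well-conditioned pose with a nearly straight (near-singular) arm, using illustrative link lengths:

```python
import numpy as np

L1, L2 = 250.0, 180.0  # mm

def jac_2dof(th1_deg, th2_deg):
    th1, th2 = np.radians([th1_deg, th2_deg])
    a12 = th1 + th2
    return np.array([
        [-L1*np.sin(th1) - L2*np.sin(a12), -L2*np.sin(a12)],
        [ L1*np.cos(th1) + L2*np.cos(a12),  L2*np.cos(a12)]
    ])

for th2 in [60, 5]:   # healthy elbow angle vs arm nearly extended
    J = jac_2dof(30, th2)
    U, s, Vt = np.linalg.svd(J)   # columns of U: ellipse axes; s: semi-axes
    print(f"th2={th2:3d}°: semi-axes = {np.round(s, 1)}, "
          f"condition = {s[0]/s[1]:.1f}, "
          f"manipulability = {np.prod(s):.0f}")
```

The product of the singular values equals |det(J)| = L1·L2·|sin θ2| for this arm, so the ellipse area collapses to zero exactly when the arm straightens.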

Geometric Jacobian: Column-by-Column Construction

The geometric Jacobian has one column per joint. For revolute joint i: the column is the cross product of the joint's z-axis with the vector from joint i to the end-effector (for the translational part), and simply the joint's z-axis (for the rotational part). For prismatic joint i: the translational part is the joint's z-axis (the direction of linear motion), and the rotational part is zero (prismatic joints don't rotate the end-effector). This geometric interpretation makes the Jacobian intuitive: each column tells you "how much does the end-effector move when this joint moves by one unit?" The column for a joint far from the end-effector has a large translational component; a joint right next to the end-effector contributes mainly rotation.

The Jacobian Matrix — Velocity, Force & Singularities
# Geometric Jacobian construction
import numpy as np

def geometric_jacobian(joint_angles, dh_params, joint_types):
    """
    Compute geometric Jacobian for a serial manipulator.
    joint_types: list of 'R' (revolute) or 'P' (prismatic)
    Returns J (6 × n_joints)
    """
    n = len(joint_angles)
    J = np.zeros((6, n))

    # First, compute all forward transforms up to each joint
    # and the final end-effector position
    transforms = [np.eye(4)]  # T_0_0 = identity
    T = np.eye(4)
    for angle, (theta_off, d, a, alpha) in zip(joint_angles, dh_params):
        Ti = dh_transform(angle + theta_off, d, a, alpha)
        T = T @ Ti
        transforms.append(T.copy())

    # End-effector position
    p_ee = transforms[-1][:3, 3]

    # Build Jacobian column by column
    for i in range(n):
        Ti = transforms[i]      # Transform up to joint i
        z_i = Ti[:3, 2]        # Z-axis of frame i (joint axis)
        p_i = Ti[:3, 3]        # Origin of frame i

        if joint_types[i] == 'R':  # Revolute joint
            # Translational part: z_i × (p_ee - p_i)
            J[:3, i] = np.cross(z_i, p_ee - p_i)
            # Rotational part: z_i
            J[3:, i] = z_i
        else:                       # Prismatic joint
            # Translational part: z_i (direction of motion)
            J[:3, i] = z_i
            # Rotational part: 0 (no rotation)
            J[3:, i] = 0

    return J

# Example: 2-DOF planar robot Jacobian
L1, L2 = 300, 200
th1, th2 = np.radians(30), np.radians(45)

# Manual closed-form Jacobian for 2-DOF planar (translational part):
# J = [-L1·s1 - L2·s12,  -L2·s12]
#     [ L1·c1 + L2·c12,   L2·c12]
# (to include orientation, append the rotational row [1, 1])

a12 = th1 + th2
c1, s1 = np.cos(th1), np.sin(th1)
c12, s12 = np.cos(a12), np.sin(a12)

J_2dof = np.array([
    [-L1*s1 - L2*s12, -L2*s12],
    [ L1*c1 + L2*c12,  L2*c12]
])

print("Jacobian at θ1=30°, θ2=45°:")
print(np.round(J_2dof, 1))

# Condition number — measure of how far from singularity
cond = np.linalg.cond(J_2dof)
print(f"Condition number: {cond:.2f}")
print(f"  < 10: well-conditioned, >> 100: near singularity")

# Determinant (2×2 case)
det = np.linalg.det(J_2dof)
print(f"det(J) = {det:.1f}  {'(non-singular)' if abs(det) > 1 else '(SINGULAR!)'}")

Singularities: Types, Detection, and Avoidance

A kinematic singularity is a configuration where the Jacobian loses rank — its determinant becomes zero. Physically, the robot loses one or more degrees of freedom: the end-effector cannot move in at least one direction regardless of what the joints do. Singularities are not bugs — they are a fundamental geometric property of every non-trivial serial manipulator. There are three types for 6-DOF robots with spherical wrists: (1) Boundary singularities: arm fully extended or fully folded — wrist centre on the outer or inner workspace boundary. (2) Shoulder singularity: the wrist centre lies on the axis of joint 1, so rotating joint 1 cannot move it sideways. (3) Wrist singularity (the most common): joint 5 angle = 0° — joints 4 and 6 become collinear and only their sum can be determined, not each individually. Detection: compute det(J) or the minimum singular value of J. Near singularity: use damped least squares instead of J⁻¹.

The Jacobian Matrix — Velocity, Force & Singularities
# Singularity detection and damped least squares avoidance
import numpy as np

def singularity_measure(J):
    """
    Multiple measures of distance from singularity.
    J: m × n Jacobian matrix
    """
    # Singular Value Decomposition
    U, sigma, Vt = np.linalg.svd(J)

    min_sv  = sigma[-1]   # Smallest singular value → 0 at singularity
    max_sv  = sigma[0]    # Largest singular value
    cond    = max_sv / (min_sv + 1e-12)  # Condition number
    manip   = np.prod(sigma)             # Yoshikawa manipulability measure
    # manip = sqrt(det(J @ J.T)) — proportional to volume of manipulability ellipsoid

    return {
        'min_singular_value': min_sv,
        'condition_number':   cond,
        'manipulability':     manip,
        'singular_values':    sigma,
        'is_singular':        min_sv < 1e-3,
        'near_singular':      min_sv < 5.0   # threshold is application-dependent
    }

def damped_least_squares_ik_step(J, e, lam_min=0.0, lam_max=0.05, sv_threshold=5.0):
    """
    One step of damped least squares (DLS) IK.
    Variable damping: large λ near singularity, small λ far away.
    J: 6×n Jacobian
    e: 6×1 error vector
    Returns joint velocity correction dq.
    """
    # full_matrices=False (thin SVD) so the reconstruction below
    # also works for non-square J (6×n with n > 6)
    U, sigma, Vt = np.linalg.svd(J, full_matrices=False)

    # Variable damping based on smallest singular value
    min_sv = sigma[-1]
    if min_sv < sv_threshold:
        # Ramp λ from lam_min to lam_max as singularity approaches
        lam = lam_max * (1 - min_sv / sv_threshold)**2
    else:
        lam = lam_min

    # DLS pseudo-inverse with variable damping
    sigma_damped = sigma / (sigma**2 + lam**2)

    # Apply damped pseudo-inverse
    dq = Vt.T @ np.diag(sigma_damped) @ U.T @ e

    return dq, lam

# ── Wrist singularity example ──
# When θ5 = 0° (or 180°), joints 4 and 6 are aligned
# Only the sum (θ4 + θ6) is determined by the desired orientation
# The controller must handle this with null-space projection or lock one joint

print("Singularity avoidance strategies:")
print("1. Path planning: avoid configurations where min_sv < threshold")
print("2. DLS: use damped inverse, accept slight position error near singular")
print("3. Configuration monitoring: alert operator, stop or slow motion")
print("4. Null-space motion: for redundant robots (7-DOF+), use extra DOF")
print("   to maintain manipulability while tracking the desired path")

Force/Torque Mapping via the Jacobian Transpose

The Jacobian transpose maps forces and torques between Cartesian space and joint space. The principle of virtual work: if a Cartesian force/moment F acts on the end-effector, the equivalent joint torques are τ = Jᵀ·F. This is the dual of velocity kinematics (ẋ = J·q̇ ↔ τ = Jᵀ·F). This relationship is fundamental for force control: force sensors at the wrist measure F, and the controller commands joint torques τ = Jᵀ·F to produce compliant behaviour. Impedance control combines position and force control: the robot behaves like a spring-damper system in Cartesian space. Near singularities, the force mapping becomes ill-conditioned: in the singular direction the structure resists force with almost no torque, so small joint torques correspond to large Cartesian forces — a dangerous amplification.

The Jacobian Matrix — Velocity, Force & Singularities
# Jacobian transpose force/torque mapping
import numpy as np

# Scenario: robot holding a workpiece against a surface
# A 50 N contact force acts on the TCP in the -Z direction
# What joint torques does this require?

# Example Jacobian (6×6 for 6-DOF robot at some configuration)
# In practice, compute from robot geometry
J = np.array([
    [ 0.00, -400.0,  -200.0,  0.00, -150.0,  0.00],
    [350.0,    0.0,     0.0, 50.00,    0.0, 30.00],
    [  0.0,  350.0,   350.0,  0.00,   50.0,  0.00],
    [  0.0,    0.0,     0.0,  1.00,    0.0,  0.00],
    [  0.0,    1.0,     1.0,  0.00,    1.0,  0.00],
    [  1.0,    0.0,     0.0,  0.00,    0.0,  1.00]
])  # mm and radians

# Cartesian wrench: force [Fx, Fy, Fz] and moment [Mx, My, Mz]
F_contact = np.array([0, 0, -50, 0, 0, 0])  # 50N in -Z direction

# Joint torques via Jacobian transpose
# τ = J^T · F
# (convert J from mm to m for N·m units: divide position part by 1000)
J_m = J.copy()
J_m[:3, :] /= 1000  # Convert mm → m for translational rows

tau = J_m.T @ F_contact

print("Contact force: 50N in -Z direction")
print("Required joint torques:")
for i, t in enumerate(tau):
    print(f"  Joint {i+1}: {t:+.2f} N·m")

# Manipulability ellipsoid analysis (translational part)
# Eigenvalues of J·Jᵀ give VELOCITY transmission ratios; force
# transmission is the inverse: the robot applies/resists force best
# in the directions where velocity transmission is worst.
JJT = J_m[:3,:] @ J_m[:3,:].T  # Position part only
eigenvalues = np.linalg.eigvalsh(JJT)
print(f"\nVelocity transmission ratio (m/s per rad/s):")
print(f"  Best direction:  {np.sqrt(eigenvalues[-1]):.3f}")
print(f"  Worst direction: {np.sqrt(eigenvalues[0]):.3f}")
print(f"  Isotropy: {np.sqrt(eigenvalues[0]/eigenvalues[-1]):.3f}")
print(f"  (1.0 = perfectly isotropic, 0 = at singularity)")

📈 Trajectory Planning — Smooth, Fast, Safe Motion

A trajectory defines how the robot moves from one configuration to another over time — not just where to go, but how to get there. A trajectory has three components: geometry (the path shape in space), timing (when to be where along the path), and constraints (velocity, acceleration, jerk limits, and obstacle avoidance). Good trajectory planning makes robots fast without jerking parts loose, smooth without overshooting, and safe without triggering joint limit faults. Poor trajectory planning is among the most common causes of lost cycle time and unplanned stops in robot cells.

📈 Trajectory Profile Comparator — Joint Motion Planner

Compare trapezoidal (LSPB), cubic polynomial, and quintic polynomial profiles for the same move. The position, velocity, and acceleration curves are shown. Observe how quintic produces zero velocity AND acceleration at start/end — the smoothest possible motion. The area under the velocity curve always equals the total displacement.

90°
1.00 s
0.25
Trap peak velocity
Trap peak accel
Cubic peak velocity
Quintic peak vel
⏱ Cycle Time Optimiser — Robot Move Sequence

Simulate a pick-and-place sequence. Toggle blend zones (z50 vs fine) and observe the effect on total cycle time. The timeline shows each move segment. This demonstrates why 'fine' stop at every waypoint is the #1 cause of slow robot cells.

600 mm/s
100 mm/s
60 mm
Fine-stop cycle
Blend zone cycle
Time saved
Throughput gain

Joint Space Trajectories: Polynomial and Trapezoidal Profiles

Joint space trajectories interpolate joint angles directly. The simplest: linear interpolation (constant velocity) — but this produces infinite acceleration at the start and end. Better: trapezoidal velocity profile (LSPB — Linear Segments with Parabolic Blends): constant acceleration phase, constant velocity phase, constant deceleration phase. Even better: cubic polynomial: q(t) = a₀ + a₁t + a₂t² + a₃t³, matching position and velocity at start and end (4 constraints, 4 coefficients). Best: quintic polynomial or minimum-jerk profiles that also constrain acceleration at endpoints, giving the smoothest possible motion. In practice, controllers use trapezoidal or S-curve (jerk-limited trapezoidal) for the vast majority of moves.

Trajectory Planning — Smooth, Fast, Safe Motion
# Trajectory profile generators
import numpy as np

def trapezoid_profile(q_start, q_end, v_max, a_max, dt=0.001):
    """
    Trapezoidal velocity profile (LSPB).
    Returns time and joint position arrays.
    v_max: maximum velocity (deg/s)
    a_max: maximum acceleration (deg/s²)
    dt: time step (seconds)
    """
    delta_q = q_end - q_start
    sign = np.sign(delta_q)
    dq = abs(delta_q)

    # Time to reach v_max from zero
    t_acc = v_max / a_max
    # Distance covered during acceleration
    d_acc = 0.5 * a_max * t_acc**2

    if 2 * d_acc > dq:
        # Triangle profile (v_max never reached)
        v_peak = np.sqrt(a_max * dq)
        t_acc = v_peak / a_max
        t_total = 2 * t_acc
        t_const = 0
    else:
        # Full trapezoidal
        v_peak = v_max
        d_const = dq - 2 * d_acc
        t_const = d_const / v_max
        t_total = 2 * t_acc + t_const

    t = np.arange(0, t_total + dt, dt)
    q = np.zeros_like(t)

    for i, ti in enumerate(t):
        if ti <= t_acc:
            # Acceleration phase
            q[i] = 0.5 * a_max * ti**2
        elif ti <= t_acc + t_const:
            # Constant velocity
            q[i] = d_acc + v_peak * (ti - t_acc)
        else:
            # Deceleration
            td = ti - t_acc - t_const
            q[i] = d_acc + v_peak * t_const + v_peak * td - 0.5 * a_max * td**2

    return t, q_start + sign * np.clip(q, 0, dq)


def quintic_poly(q0, qf, qd0, qdf, qdd0, qddf, T):
    """
    Quintic polynomial trajectory.
    Matches position, velocity, and acceleration at start and end.
    Returns coefficients [a0..a5] where q(t) = sum(ai * t^i)
    """
    # 6 boundary conditions → 6 coefficients
    # [1  0   0    0     0      0  ] [a0]   [q0  ]
    # [0  1   0    0     0      0  ] [a1]   [qd0 ]
    # [0  0   2    0     0      0  ] [a2] = [qdd0]
    # [1  T  T²   T³    T⁴     T⁵ ] [a3]   [qf  ]
    # [0  1  2T  3T²   4T³    5T⁴ ] [a4]   [qdf ]
    # [0  0   2  6T   12T²   20T³ ] [a5]   [qddf]

    M = np.array([
        [1, 0,    0,     0,       0,        0    ],
        [0, 1,    0,     0,       0,        0    ],
        [0, 0,    2,     0,       0,        0    ],
        [1, T,  T**2,  T**3,   T**4,     T**5   ],
        [0, 1,  2*T,  3*T**2, 4*T**3,   5*T**4  ],
        [0, 0,    2,    6*T,  12*T**2,  20*T**3 ]
    ])
    b = np.array([q0, qd0, qdd0, qf, qdf, qddf])
    a = np.linalg.solve(M, b)

    # Evaluate at time samples
    t = np.linspace(0, T, int(T / 0.001))
    q = sum(a[i] * t**i for i in range(6))
    return t, q, a

# Compare profiles for same move
t_trap, q_trap = trapezoid_profile(0, 90, 120, 480)
t_quin, q_quin, _ = quintic_poly(0, 90, 0, 0, 0, 0, 1.0)
print(f"Trapezoidal: {t_trap[-1]:.3f}s to move 90°")
print(f"Quintic:     {t_quin[-1]:.3f}s  (same time, smoother motion)")
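The minimum-jerk profile mentioned above has a simple closed form: the "10-15-6" polynomial, which is the quintic specialised to zero velocity and acceleration at both endpoints. A minimal sketch (helper name `min_jerk` is ours, not from a library):

```python
import numpy as np

def min_jerk(q0, qf, T, dt=0.001):
    """Minimum-jerk profile: q(s) = q0 + Δq·(10s³ - 15s⁴ + 6s⁵), s = t/T.
    Velocity and acceleration are zero at both endpoints."""
    t = np.arange(0, T + dt, dt)
    s = np.clip(t / T, 0, 1)
    q = q0 + (qf - q0) * (10*s**3 - 15*s**4 + 6*s**5)
    return t, q

t_mj, q_mj = min_jerk(0.0, 90.0, 1.0)
v_mj = np.gradient(q_mj, t_mj)
print(f"Peak velocity: {v_mj.max():.2f} deg/s")  # ≈ 168.75 deg/s (15/8 · Δq/T)
```

The analytic peak velocity of the minimum-jerk profile is 15/8 · Δq/T, noticeably higher than the trapezoid's peak for the same move time: smoothness is bought with a faster mid-move.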

Cartesian Space Trajectories: Linear and Arc Moves

Cartesian trajectory planning interpolates the TCP position and orientation directly in task space. Linear move (LIN): the TCP travels in a straight line from A to B. The orientation interpolates smoothly using quaternion SLERP. At each small time step, the target Cartesian pose is converted to joint angles via IK. This is used for welding seams, cutting paths, and any application where the tool path shape matters. Arc move (CIRC): defines a circle through three points. The robot follows the arc at constant TCP speed. Critical difference from joint space: Cartesian paths can pass through singularities, causing the robot to stop or behave erratically. Always check the path for singularity proximity before running at production speed.

Trajectory Planning — Smooth, Fast, Safe Motion
# Cartesian trajectory planning with quaternion interpolation
import numpy as np

def quat_from_rotation_matrix(R):
    """Extract quaternion [w, x, y, z] from 3×3 rotation matrix."""
    trace = R[0,0] + R[1,1] + R[2,2]
    if trace > 0:
        s = 0.5 / np.sqrt(trace + 1)
        w = 0.25 / s
        x = (R[2,1] - R[1,2]) * s
        y = (R[0,2] - R[2,0]) * s
        z = (R[1,0] - R[0,1]) * s
    else:
        # Simplified — use full Shepperd's method in production
        w, x, y, z = 1, 0, 0, 0
    q = np.array([w, x, y, z])
    return q / np.linalg.norm(q)

def slerp(q0, q1, t):
    """
    Spherical Linear Interpolation between two quaternions.
    t=0 → q0, t=1 → q1.
    Produces the shortest, constant-speed rotation.
    """
    # Ensure same hemisphere (shortest path)
    if np.dot(q0, q1) < 0:
        q1 = -q1
    dot = np.clip(np.dot(q0, q1), -1, 1)
    if dot > 0.9995:
        # Very close — linear interpolation
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    angle = np.arccos(dot)
    return (np.sin((1-t)*angle)*q0 + np.sin(t*angle)*q1) / np.sin(angle)

def linear_cartesian_trajectory(
    T_start, T_end, v_tcp=100, a_tcp=500, dt=0.001
):
    """
    Plan a straight-line TCP trajectory from T_start to T_end.
    v_tcp: TCP speed (mm/s)
    a_tcp: TCP acceleration (mm/s²)
    dt:    time step (s)
    Returns list of 4×4 target transforms at each time step.
    """
    p0 = T_start[:3, 3]
    p1 = T_end[:3,   3]
    R0 = T_start[:3, :3]
    R1 = T_end[:3,   :3]

    q0 = quat_from_rotation_matrix(R0)
    q1 = quat_from_rotation_matrix(R1)

    distance = np.linalg.norm(p1 - p0)
    # Trapezoidal speed profile for TCP speed
    t_acc = v_tcp / a_tcp
    d_acc = 0.5 * a_tcp * t_acc**2

    if 2 * d_acc > distance:
        # Triangle profile (v_tcp never reached)
        v_peak = np.sqrt(a_tcp * distance)
        t_acc = v_peak / a_tcp
        d_acc = 0.5 * a_tcp * t_acc**2
        t_total = 2 * t_acc
    else:
        v_peak = v_tcp
        t_total = 2 * t_acc + (distance - 2 * d_acc) / v_tcp

    waypoints = []
    for ti in np.arange(0, t_total + dt, dt):
        # TCP distance along path: follow the speed profile, not raw time
        if ti <= t_acc:
            d = 0.5 * a_tcp * ti**2
        elif ti <= t_total - t_acc:
            d = d_acc + v_peak * (ti - t_acc)
        else:
            td = t_total - ti
            d = distance - 0.5 * a_tcp * td**2
        s = min(d / (distance + 1e-9), 1.0)  # Normalised 0→1

        # Position: linear interpolation
        p = p0 + s * (p1 - p0)

        # Orientation: SLERP
        q = slerp(q0, q1, s)
        # Convert quaternion back to rotation matrix
        w, x, y, z = q
        R = np.array([
            [1-2*(y*y+z*z),  2*(x*y-w*z),  2*(x*z+w*y)],
            [2*(x*y+w*z),  1-2*(x*x+z*z),  2*(y*z-w*x)],
            [2*(x*z-w*y),   2*(y*z+w*x), 1-2*(x*x+y*y)]
        ])

        T = np.eye(4)
        T[:3, :3] = R
        T[:3,  3] = p
        waypoints.append(T)

    return waypoints, t_total

print("SLERP interpolates the SHORTEST rotation between two orientations.")
print("Unlike Euler angle interpolation, SLERP never:")
print("  - Passes through unexpected intermediate orientations")
print("  - Suffers gimbal lock")
print("  - Produces non-constant angular velocity")
print("Most modern robot controllers use quaternion SLERP for orientation interpolation.")

Real-World Timing: Cycle Time Optimisation

In production, every millisecond of cycle time affects throughput and payback period. A robot performing 1000 pick-and-place cycles per hour, sped up by 0.1 seconds per cycle, saves 100 seconds per hour, roughly 28 hours of production time per 1000 operating hours. Cycle time optimisation techniques: (1) Blend (zone) between moves — instead of stopping at each waypoint, the robot begins the next move while still approaching the current one, rounding corners at speed. (2) Reduce travel distance — optimise approach and retreat vectors to be as short as possible. (3) Use maximum allowed speed for each segment — not all segments need to be slow. (4) Parallelise — trigger peripherals (grippers, conveyors) during robot motion, not after it stops. (5) Increase acceleration — modern cobots permit higher accelerations than their conservative defaults when the payload is declared accurately.

Trajectory Planning — Smooth, Fast, Safe Motion
// Industrial robot cycle time analysis (pseudocode/ABB RAPID style)
//
// ═══════════════════════════════════════════════════
// Example: pick-and-place with cycle time optimisation
// ═══════════════════════════════════════════════════
//
// BEFORE optimisation (stop at each point):
// MoveJ pApproach, v500, fine, tool0;   // to approach
// MoveL pPick,     v100, fine, tool0;   // slow down to pick
// GripperClose;                          // grip
// MoveL pApproach, v100, fine, tool0;   // retreat
// MoveJ pPlaceApp, v500, fine, tool0;   // to place approach
// MoveL pPlace,    v100, fine, tool0;   // slow down to place
// GripperOpen;                           // release
// MoveL pPlaceApp, v100, fine, tool0;   // retreat
// Cycle time: ~4.2 seconds
//
// AFTER optimisation (blend zones + parallelism):
// MoveJ pApproach, v1000, z50, tool0;   // approach at full speed, blend
// MoveL pPick,     v100,  fine, tool0;  // slow for pick only
// GripperClose;                          // grip
// MoveL pApproach, v300,  z30, tool0;   // faster retreat, blend
// MoveJ pPlaceApp, v1000, z50, tool0;   // fast transit, blend
// MoveL pPlace,    v150,  fine, tool0;  // slow for place only
// GripperOpen;                           // can overlap with next move start
// MoveL pPlaceApp, v300,  z50, tool0;   // faster retreat
// Cycle time: ~2.1 seconds  (50% improvement!)
//
// Zone (z) values: z5=5mm, z10=10mm, z50=50mm, z100=100mm
// z50 means: start next move when within 50mm of waypoint
// fine = stop exactly at point (needed only for pick/place)
//
// KEY RULE: Only use 'fine' (exact stop) where the process
// actually requires it. Using 'fine' everywhere is the
// most common cause of unnecessarily long cycle times.
//
// Parallel execution example (ABB RAPID):
// MoveJ pTransit, v1000, z100, tool0;
// WaitDI diConveyorReady, 1;  // Wait for conveyor while moving!
// — Robot is moving AND waiting simultaneously —
// MoveL pNextPick, v200, fine, tool0;

🏭 Robot Types, Real Applications & Safety

Kinematics theory meets reality in the diversity of robot architectures and the demands of industrial applications. Every robot architecture — SCARA, delta, Cartesian, 6-DOF serial, 7-DOF cobot, parallel — has its own kinematic equations, workspace shape, and optimal application domain. Understanding which architecture to choose, and how to deploy it safely, is the final step from theory to engineering practice.

🏭 Robot Architecture Selector — Find the Right Type

Answer the application requirements and get a scored recommendation for each robot architecture. This is the engineering decision matrix used by system integrators.

Required payload:

Speed priority:

Work type:

Human proximity:

🏭 Brand Reference — Major Robot Manufacturers
KUKA
🇩🇪 Germany · kuka.com
Automotive benchmark: KR series 6-DOF, LBR iiwa 7-DOF cobot, KMR mobile robots. KUKA.Sim offline programming.
e.g. KR 6 R700, KR 60, LBR iiwa 7 R800
ABB Robotics
🇨🇭 Switzerland · abb.com/robotics
RobotStudio OLP, FlexPicker delta, IRB range, YuMi dual-arm cobot. 1600+ robot models.
e.g. IRB 1200, IRB 6700, FlexPicker, YuMi
FANUC
🇯🇵 Japan · fanuc.com
Highest market share globally, M-series, CR collaborative, very high reliability, ROBOGUIDE OLP.
e.g. M-20iD, R-2000iC, CR-7iA, M-3iA delta
Universal Robots
🇩🇰 Denmark · universal-robots.com
Dominant cobot market share, e-Series UR3e/5e/10e/16e/20, PolyScope OS, 50,000+ URcap apps.
e.g. UR5e, UR10e, UR20
Yaskawa / Motoman
🇯🇵 Japan · yaskawa.com
HC series cobots, MH series 6-DOF, SIA 7-DOF redundant. MotoSim offline programming.
e.g. HC10DTP, MH12, SIA10F
KUKA LBR iiwa
🇩🇪 Germany · kuka.com
7-DOF redundant cobot, torque sensors in all joints, compliant manipulation, research favourite.
e.g. LBR iiwa 7 R800, LBR iiwa 14 R820
Doosan Robotics
🇰🇷 South Korea · doosanrobotics.com
M/H/A series cobots, collision detection, ROS2 support, competitive pricing, growing ecosystem.
e.g. M1013, H2017, A0509S
Franka Emika
🇩🇪 Germany · franka.com
Research and education standard, 7-DOF, torque sensing, open API (libfranka), Panda/FR3.
e.g. Panda, FR3
🛡 Safety Standards Reference
Standard | Scope | Key requirement
ISO 10218-1 | Industrial robot design | Robot manufacturer safety requirements (safety-rated stops, speed monitoring, force limits)
ISO 10218-2 | Robot integration | System integrator requirements — risk assessment, safeguarding, installation
ISO/TS 15066 | Collaborative operation | PFL force/pressure limits by body region, SSM speed/separation requirements
EN ISO 13849-1 | Safety functions | Performance Level (PL a–e) for safety functions — PLd or PLe required for most robot safety
IEC 62061 | Functional safety | Safety Integrity Level (SIL 1–3) — alternative to EN ISO 13849-1 for complex systems
ANSI/RIA R15.06 | North America robots | US/Canada equivalent of ISO 10218, mandatory for OSHA compliance

Robot Architecture Comparison: Serial vs Parallel vs SCARA

Serial manipulators (KUKA KR, ABB IRB, FANUC R, Yaskawa Motoman): joints in a chain, each motor carries all subsequent links. Large workspace but limited stiffness — error compounds through the chain. Used for: welding, painting, machine tending, assembly. Parallel robots (ABB FlexPicker/delta, Gough-Stewart platform): all actuators connect directly to the end-effector platform via parallel kinematic chains. Very high stiffness, high speed, light moving mass — but small workspace and complex kinematics. Used for: high-speed pick-and-place (food, pharma, electronics), flight simulators, hexapod machining. SCARA (Selective Compliance Assembly Robot Arm): 3-DOF in horizontal plane + Z-axis. Extremely fast for planar assembly. Limited to horizontal work. Used for: PCB assembly, small part insertion, dispensing. Cartesian / Gantry: XYZ slides, often 3-DOF. Large work volume, high stiffness, simple kinematics (no rotational coupling). Used for: CNC, large format printing, palletising over large areas.

Robot Types, Real Applications & Safety
// Robot architecture selection guide
//
// ═══════════════════════════════════════════════════════════
// Architecture    Workspace    Speed      Stiffness   DOF
// ───────────────────────────────────────────────────────────
// 6-DOF serial    Large        Medium     Low-Med     6
// 7-DOF cobot     Large        Low-Med    Med         7
// SCARA           Medium(2D)   High       High(XY)    4
// Delta/FlexPick  Small        Very High  High        3+1
// Cartesian/gant  Very Large   Low-Med    Very High   3-5
// Stewart/hexapd  Small        Med        Very High   6
//
// ── Speed comparison (cycle time for 25mm pick-and-place) ──
// ABB FlexPicker IRB 360:   0.58 s / pick  (up to 150 picks/min)
// FANUC M-3iA (delta):      0.60 s / pick  (up to 130 picks/min)
// Adept SCARA eCobra 600:   0.45 s / pick  (up to 120 picks/min)
// ABB IRB 1200 (6-DOF):     0.90 s / pick  (up to 67 picks/min)
// Universal Robots UR5e:    1.20 s / pick  (up to 50 picks/min)
//
// ── Payload vs speed tradeoff ──
// Higher payload → larger motors → heavier arm → lower max speed
// Low payload (≤3kg):  delta, SCARA — maximum throughput
// Med payload (3-20kg):  SCARA, 6-DOF mid-size
// High payload (20-100kg): 6-DOF, gantry
// Very high (100-1300kg): Large 6-DOF (automotive body, aerospace)
//
// ── Reach vs payload chart (6-DOF serial robots) ──
// Payload  5kg   → KUKA KR 6 R700:   700mm reach
// Payload  10kg  → ABB IRB 1200:     901mm reach
// Payload  20kg  → FANUC M-20iD:    1813mm reach
// Payload  50kg  → KUKA KR 60:      2033mm reach
// Payload 120kg  → ABB IRB 6700:    3200mm reach
// Payload 210kg  → FANUC R-2000iC:  2655mm reach
// Payload 1300kg → KUKA KR 1300 PA: 3300mm reach

Collaborative Robots (Cobots): 7-DOF Kinematics and Safety

Cobots (Universal Robots, FANUC CR, KUKA LBR iiwa, ABB YuMi, Doosan, Techman) are designed to work safely alongside humans without traditional safety caging. They achieve safety through power and force limiting (PFL), speed and separation monitoring (SSM), and hand-guiding. The KUKA LBR iiwa and Franka Emika have 7 DOF — one more than the minimum 6 needed for full position and orientation control. This extra DOF (redundancy) allows the robot to change its arm configuration while keeping the end-effector fixed — like a human who can move their elbow sideways while holding a coffee cup still. This enables singularity avoidance, obstacle avoidance, and optimisation of joint torques during motion. The kinematic redundancy is resolved with null-space projection: secondary-task motions are projected into the null space of the Jacobian so they do not disturb the end-effector pose.

Robot Types, Real Applications & Safety
# 7-DOF redundant robot: null-space motion
import numpy as np

def null_space_projection(J):
    """
    Compute null-space projector for a Jacobian J (m × n, n > m).
    N = I - J^+ J
    Any vector in the null space: dq = N · z (for any z)
    produces dq that does NOT change end-effector velocity.
    """
    J_pinv = np.linalg.pinv(J)  # Moore-Penrose pseudo-inverse
    N = np.eye(J.shape[1]) - J_pinv @ J
    return N, J_pinv

def redundant_ik_step(J, e, q, q_preferred, w_null=0.3):
    """
    IK step for redundant robot (n > 6).
    Uses null-space motion to avoid singularities or joint limits.
    e:           6×1 Cartesian error
    q:           current joint angles (n×1)
    q_preferred: preferred joint configuration (n×1)
    w_null:      null-space weight (how strongly to attract to preferred)
    Returns delta_q.
    """
    N, J_pinv = null_space_projection(J)

    # Primary task: minimise Cartesian error
    dq_primary = J_pinv @ e

    # Secondary task (null-space): move toward preferred configuration
    # This could be: maximise manipulability, avoid joint limits,
    # avoid obstacles, minimise torque, etc.
    z = w_null * (q_preferred - q)  # Gradient toward preferred config
    dq_secondary = N @ z

    return dq_primary + dq_secondary

# Cobot safety zones (ISO/TS 15066)
print("Cobot operating modes (ISO 10218 / ISO/TS 15066):")
print("")
print("SSM — Speed and Separation Monitoring:")
print("  Measure distance to human (LiDAR/camera)")
print("  Reduce TCP speed proportional to proximity")
print("  v_tcp_max = v_safe × d_human / d_safety_zone")
print("  At contact distance: stop within 0.15s (protective stop)")
print("")
print("PFL — Power and Force Limiting:")
print("  Robot continuously monitors joint torques")
print("  Contact force = torque_measured - torque_expected (from model)")
print("  At contact force > F_limit: enter safe stop")
print("  F_limit per ISO/TS 15066 Table A.2: 65-160N by body region")
print("")
print("HG — Hand Guiding:")
print("  Operator holds handle with 6-axis F/T sensor")
print("  Robot follows applied force in Cartesian admittance mode")
print("  F_applied → Cartesian velocity via admittance controller")
print("  τ = J^T F_applied  (Jacobian transpose mapping)")

Calibration: Why Your Robot Is Never Where You Think

A nominal robot model (DH parameters from the datasheet) has errors. Link lengths are machined to ±0.1 mm, joint offsets differ from the nominal by up to ±0.5°, gear elasticity introduces compliance under load, and thermal expansion shifts everything by ±0.05 mm/°C. An uncalibrated 6-DOF robot has absolute positioning accuracy of 0.5–5 mm even though repeatability (returning to the same point) is ±0.02–0.05 mm. Calibration identifies the true DH parameters by measuring the actual TCP position at many configurations with an external reference (laser tracker, photogrammetry, calibrated gauge). After calibration, absolute accuracy improves to ±0.1–0.3 mm. This is essential for offline programming (OLP) where programs are generated in simulation and run on the real robot without teaching every point.

Robot Types, Real Applications & Safety
# Robot geometric calibration — identification of real DH parameters
import numpy as np
from scipy.optimize import least_squares

def fk_with_params(q_all, nominal_dh, param_deltas):
    """
    FK with perturbed DH parameters for calibration.
    nominal_dh: n × 4 array of nominal [theta_off, d, a, alpha]
    param_deltas: flattened array of parameter corrections
    Returns TCP position and orientation for each configuration.
    """
    # Reshape deltas: 4 corrections per joint (Δθ_off, Δd, Δa, Δα)
    deltas = param_deltas.reshape(len(nominal_dh), 4)
    true_dh = nominal_dh + deltas

    results = []
    for q in q_all:
        T = np.eye(4)
        for i, (dh, qi) in enumerate(zip(true_dh, q)):
            th_off, d, a, alpha = dh
            # dh_transform(theta, d, a, alpha): standard DH matrix, as defined
            # in the DH section earlier in this guide
            T = T @ dh_transform(qi + th_off, d, a, alpha)
        results.append(T[:3, 3])  # TCP position
    return results

def calibration_residuals(param_deltas, q_all, measured_positions, nominal_dh):
    """
    Residual function for least-squares calibration.
    Minimise sum of (FK_position - measured_position)² over all configs.
    """
    predicted = fk_with_params(q_all, nominal_dh, param_deltas)
    residuals = []
    for pred, meas in zip(predicted, measured_positions):
        residuals.extend(pred - meas)  # 3 residuals per configuration
    return residuals

# ── Calibration procedure ──
# 1. Move robot to 30-50+ diverse configurations
# 2. Measure TCP position with laser tracker (±0.05mm accuracy)
# 3. Record joint angles for each configuration
# 4. Run optimisation to find param_deltas that minimise FK errors
#
# result = least_squares(
#     calibration_residuals, x0=np.zeros(n_joints * 4),
#     args=(joint_configs, measured_positions, nominal_dh),
#     method='lm'  # Levenberg-Marquardt
# )
# calibrated_dh = nominal_dh + result.x.reshape(n_joints, 4)

print("Calibration accuracy benchmarks:")
print("  Uncalibrated robot (nominal DH):     ±1-5mm absolute")
print("  After geometric calibration (DH):    ±0.1-0.5mm absolute")
print("  After full calibration (+ compliance): ±0.05-0.2mm absolute")
print("  Robot repeatability (unchanged):     ±0.02-0.05mm")
print("")
print("Calibration tools:")
print("  Leica Absolute Tracker AT960: ±0.015mm  (gold standard)")
print("  FARO Laser Tracker ION:        ±0.025mm")
print("  Nikon Metrology MV224:         ±0.030mm")
print("  Photogrammetry (cameras):      ±0.1mm   (low cost)")
print("  Robot flange F/T + gauge ball:  ±0.2mm  (in-situ, no tracker)")

✏ Knowledge Test — 10 Questions

Forward kinematics calculations, DH parameters, Jacobian singularities, trajectory profiles, IK solutions, cobot safety standards — real engineering questions.

Question 1 / 10

A 2-DOF planar robot has link lengths L1 = 300 mm and L2 = 200 mm. With joint angles θ1 = 30° and θ2 = 45°, what is the X coordinate of the end-effector?

Forward kinematics for a 2-DOF planar robot: X = L1·cos(θ1) + L2·cos(θ1+θ2). With θ1=30°, θ2=45°: X = 300·cos(30°) + 200·cos(75°) = 300×0.866 + 200×0.259 = 259.8 + 51.8 = 311.6 mm. Y = L1·sin(θ1) + L2·sin(θ1+θ2) = 300×0.5 + 200×0.966 = 150 + 193.2 = 343.2 mm. This cascading cosine/sine sum is the basis of all serial manipulator forward kinematics.
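The arithmetic can be checked in a few lines of NumPy:

```python
import numpy as np

L1, L2 = 300, 200
th1, th2 = np.radians(30), np.radians(45)
x = L1*np.cos(th1) + L2*np.cos(th1 + th2)
y = L1*np.sin(th1) + L2*np.sin(th1 + th2)
print(f"X = {x:.1f} mm, Y = {y:.1f} mm")  # X = 311.6 mm, Y = 343.2 mm
```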
Question 2 / 10

What is a "Denavit-Hartenberg (DH) parameter" and why are there exactly 4 parameters per joint?

Denavit-Hartenberg (1955) proved that the transform between any two consecutive joint frames can be written as a product of exactly 4 elementary transforms: Rot(z, θ) · Trans(z, d) · Trans(x, a) · Rot(x, α). The 4 parameters are: θ (joint angle — variable for revolute joints), d (link offset — variable for prismatic joints), a (link length = distance along x-axis between z-axes), α (link twist = angle between z-axes measured around x-axis). This gives a systematic, unambiguous way to assign coordinate frames and compute the kinematic chain for any open-chain manipulator.
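The 4-transform factorisation can be verified numerically. A quick check (the elementary-transform helpers below are defined here for the purpose, not taken from a library), comparing Rot(z,θ)·Trans(z,d)·Trans(x,a)·Rot(x,α) against the standard closed-form DH matrix:

```python
import numpy as np

def Rz(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def Rx(al):
    c, s = np.cos(al), np.sin(al)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def Tz(d):
    T = np.eye(4); T[2, 3] = d; return T

def Tx(a):
    T = np.eye(4); T[0, 3] = a; return T

def dh_product(theta, d, a, alpha):
    """DH transform built from the 4 elementary transforms."""
    return Rz(theta) @ Tz(d) @ Tx(a) @ Rx(alpha)

def dh_closed_form(theta, d, a, alpha):
    """Standard DH matrix written out directly."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st*ca,  st*sa, a*ct],
        [st,  ct*ca, -ct*sa, a*st],
        [ 0,     sa,     ca,    d],
        [ 0,      0,      0,    1]])

# The two agree for any parameter values
p = (0.6, 120.0, 300.0, -np.pi/2)
print(np.allclose(dh_product(*p), dh_closed_form(*p)))  # True
```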
Question 3 / 10

A robot's Jacobian matrix has a determinant of zero at a particular configuration. What does this mean?

The Jacobian J maps joint velocities q̇ to end-effector velocities ẋ: ẋ = J·q̇. To compute required joint velocities for a desired end-effector motion: q̇ = J⁻¹·ẋ. When det(J) = 0, J is singular (non-invertible). Physically: two or more joint axes become aligned, causing one or more end-effector directions to become unreachable. The robot is at a singularity. Types: boundary singularity (arm fully extended or folded), internal singularity (intermediate elbow/wrist alignment). Near singularities, small end-effector motions require huge joint velocities → dangerous. Solutions: singularity avoidance path planning, damped least squares J⁺ = Jᵀ(JJᵀ + λ²I)⁻¹.
Question 4 / 10

What is the difference between "joint space" and "Cartesian space" trajectory planning, and when should you use each?

Joint space trajectory: plan θ(t) directly. Simple, always avoids singularities (if endpoints are non-singular), computationally cheap. Path shape in Cartesian space is unknown and curved. Used for: pick-and-place, palletising, machine tending. Cartesian trajectory: plan position p(t) and orientation R(t) directly, then use inverse kinematics at each step. Guarantees straight-line or arc tool path. Used for: arc welding (straight seam), dispensing, machining, grinding. Risk: if the straight line in Cartesian space passes through a singularity, inverse kinematics fails. Always check workspace coverage and singularity proximity before committing to Cartesian planning.
Question 5 / 10

A 6-DOF robot arm has inverse kinematics with 8 possible solution configurations (elbow up/down, shoulder left/right, wrist flipped). How does the robot controller choose which solution to use?

For a 6-DOF robot (like KUKA KR, ABB IRB, FANUC R series), the closed-form IK typically yields up to 8 solutions from the 3 binary choices: shoulder (lefty/righty), elbow (up/down), wrist (flipped/non-flipped). The controller must: (1) eliminate solutions with joint limit violations, (2) eliminate solutions with self-collision or environment collision, (3) from remaining valid solutions, select the one that minimises a cost (typically closest to current angles = minimum motion). In practice, industrial controllers let programmers specify a "robot configuration" flag that selects a preference. Changing configuration mid-path causes discontinuous joint motion — always plan configuration-consistent paths.
Question 6 / 10

What is the "workspace" of a robot manipulator and what limits it?

Workspace analysis: Reachable workspace = all positions reachable in at least one orientation. Dexterous workspace = positions reachable in all orientations. For a 6-DOF serial robot, the wrist centre's reachable workspace is roughly a thick shell swept about the base axis (inner radius ≈ |L1−L2|, outer radius ≈ L1+L2). Joint limits carve away sectors. Self-collision removes more. The dexterous workspace is much smaller — typically the region well within the reachable boundary. Workspace voids (holes): internal cavities the arm cannot reach due to the combination of joint limits and geometry. Always verify robot reach for every target point in the task, with margin.
Question 7 / 10

A robot joint rotates from 0° to 90° in 1.0 second using a trapezoidal velocity profile with an acceleration phase of 0.25 s. What is the peak velocity?

Trapezoidal velocity profile (LSPB — Linear Segments with Parabolic Blends): total displacement = v_peak × (T - t_acc) where T = total time and t_acc = acceleration phase duration (assuming symmetric: t_decel = t_acc). Area under velocity profile = displacement: (1/2)×v_peak×t_acc + v_peak×t_const + (1/2)×v_peak×t_decel = Δθ. With t_acc = t_decel = 0.25 s: t_const = 1.0 - 0.25 - 0.25 = 0.5 s. v_peak × (0.25/2 + 0.5 + 0.25/2) = v_peak × 0.75 = 90°. v_peak = 120°/s. Acceleration: a = v_peak / t_acc = 120 / 0.25 = 480°/s². This is the standard trajectory profile used in all industrial robot controllers.
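The arithmetic above as a short sketch:

```python
def trapezoid_peak(delta, T, t_acc):
    """Peak velocity and acceleration of a symmetric trapezoidal (LSPB)
    profile: the area under the velocity profile equals the displacement,
    delta = v_peak * (T - t_acc)."""
    v_peak = delta / (T - t_acc)
    accel = v_peak / t_acc
    return v_peak, accel

v, a = trapezoid_peak(90.0, 1.0, 0.25)  # 90 deg in 1.0 s, 0.25 s ramps
print(v, a)  # 120.0 deg/s peak, 480.0 deg/s^2
```

Note the shortcut: for a symmetric trapezoid, each triangular ramp contributes half of `v_peak * t_acc`, which is why the area collapses to `v_peak * (T - t_acc)`.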
Question 8 / 10

What is "orientation representation" and why are Euler angles problematic for robot control?

Orientation representations: Rotation Matrix R (3×3, 9 numbers, always valid, no singularity, but redundant — only 3 DOF). Euler Angles (3 numbers, compact, intuitive, but gimbal lock at specific configurations — e.g. ZYZ: when θ2=0°, θ1 and θ3 become indistinguishable). Quaternions q=[w,x,y,z] (4 numbers, no singularity, efficient multiplication, smooth interpolation via SLERP — spherical linear interpolation). Axis-angle [n̂, θ] (3+1=4 numbers, intuitive). Industrial practice: store orientations as quaternions or rotation matrices internally; display as Euler angles (RPY or ZYZ) to operators; interpolate in quaternion space for smooth TCP motion. Quaternion SLERP: q(t) = q₀ × (q₀⁻¹q₁)ᵗ gives the shortest-arc rotation between two orientations.
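A minimal SLERP sketch in the [w, x, y, z] convention, assuming unit quaternions; the sign flip implements the shortest-arc choice mentioned above:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions [w, x, y, z].
    Takes the shortest arc; falls back to lerp when nearly parallel."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:            # shortest-arc: q and -q are the same rotation
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly identical: linear interp is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

# Halfway between identity and a 90 deg rotation about z is 45 deg about z:
q_id = [1.0, 0.0, 0.0, 0.0]
q_90z = [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]
print(slerp(q_id, q_90z, 0.5))  # [cos(22.5 deg), 0, 0, sin(22.5 deg)]
```

Interpolating Euler angles component-wise between the same two orientations would not give this constant-speed, shortest-arc rotation.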
Question 9 / 10

What is the "tool centre point" (TCP) and why must it be calibrated accurately?

TCP (Tool Centre Point) = the control point on the tool, defined as a 6D transform (position + orientation) from the robot flange. Why it matters: all inverse kinematics, path following, speed control, and work object alignment reference the TCP. If the TCP is wrong, every programmed target is offset, and the error is exposed by reorientation: as the robot rotates the wrist about the programmed TCP, the real tool tip traces a circle whose radius equals the TCP error, turning orientation changes into position errors. Calibration methods: 4-point method (touch the same reference point from 4+ different robot orientations — the system solves for the TCP position that minimises the spread of the computed tip positions). XZ 4-point: additionally determines TCP orientation. Accuracy: ±0.05 mm achievable with care. Commercial tools: ABB TCPcal, KUKA TCP calibration, URcap.
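The position part of the 4-point method can be sketched as a least-squares problem: each flange pose touching the same world point c satisfies R·p + t = c for the unknown TCP offset p, and subtracting pairs of equations eliminates c. The poses below are simulated, not real calibration data:

```python
import numpy as np

def tcp_from_poses(Rs, ts):
    """Solve for the TCP offset p (in flange coordinates) from flange poses
    (R_i, t_i) that all touch the same world point c:
        R_i p + t_i = c  for all i
    Subtracting the first equation eliminates c:
        (R_i - R_0) p = t_0 - t_i   -> stacked linear least squares."""
    A = np.vstack([Rs[i] - Rs[0] for i in range(1, len(Rs))])
    b = np.concatenate([ts[0] - ts[i] for i in range(1, len(ts))])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Simulated data: true TCP offset p_true, four flange orientations that all
# place the tool tip at the same world point c (values are illustrative):
p_true, c = np.array([0.01, 0.02, 0.15]), np.array([0.5, 0.2, 0.3])
Rs = [rot_z(a) @ rot_x(a / 2) for a in (0.0, 0.6, 1.2, 1.8)]
ts = [c - R @ p_true for R in Rs]
print(tcp_from_poses(Rs, ts))  # recovers p_true from the four poses
```

Real calibration adds measurement noise, so the orientations should differ as much as joint limits allow; widely spread orientations make the least-squares system well conditioned.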
Question 10 / 10

A collaborative robot (cobot) operates near humans. The ISO/TS 15066 standard defines power and force limiting (PFL). What contact force limits apply to sensitive body regions?

ISO/TS 15066:2016 "Robots and robotic devices — Collaborative robots" defines PFL (Power and Force Limiting) operation requirements. Key table (Table A.2): maximum quasi-static contact limits per body region — skull 130 N / 110 W, face 65 N / 65 W, neck 145 N / 95 W, sternum 105 N / 25 W, abdomen 110 N / 75 W, hand/finger 140 N / 70 W (transient limits are generally about twice the quasi-static values, and transient contact with sensitive regions such as the head is not permitted). Robot suppliers (Universal Robots, FANUC CR, KUKA LBR, Doosan) provide validated safety functions and safety-rated force monitoring. System integrators must still perform a risk assessment per ISO 10218-2 even with inherently safe cobots — the tool and workpiece can create pinch points exceeding these limits.

Tutorial complete

Ready for more?