Robot Kinematics
Robot kinematics is the mathematical foundation of every robot in the world — from a simple two-joint planar arm to a 7-DOF collaborative robot performing sub-millimetre surgery. It answers two fundamental questions: given joint angles, where is the end-effector in space (forward kinematics)? And given a desired position in space, what joint angles produce it (inverse kinematics)? This guide takes you from the geometry of a single joint through homogeneous transforms, Denavit-Hartenberg parameters, Jacobian velocity control, singularities, and trajectory planning — from absolute beginner to engineering-grade understanding.
Before a robot can move, we must establish a mathematical language for describing positions and orientations in 3D space. This is not abstract mathematics — every number in a robot program ultimately refers to a coordinate frame, a position vector, or a rotation matrix. Understanding these foundations makes every robot concept that follows completely logical rather than mysterious.
Drag the sliders to move and rotate the child frame (orange) relative to the world frame (white). See how the same point looks different in each frame — this is the core of all robot kinematics.
Apply Roll, Pitch, Yaw rotations to the unit vectors. Observe how the order of rotations matters — this is non-commutativity. The coloured axes always remain orthogonal (rotation matrix property).
A coordinate frame (or reference frame) is a set of three mutually perpendicular axes — X, Y, and Z — anchored to a specific point in space (the origin). Every measurement in robotics is made relative to a frame. The right-hand rule: point your right-hand fingers along X, curl them toward Y — your thumb points along Z. This is universal across all robot manufacturers. In industrial robotics, the world frame (or base frame) is fixed to the robot base. The tool frame is fixed to the end-effector. The work object frame is fixed to the part being processed. A robot's job is to move the tool frame to desired positions and orientations relative to the work object frame — everything else is mathematics to accomplish this.
// Coordinate frame fundamentals
//
// A frame is defined by:
// - An origin point (x, y, z position in some parent frame)
// - Three unit vectors (x-axis, y-axis, z-axis directions)
//
// Representation as 4×4 homogeneous transformation matrix:
//
//         [ R | p ]      R = 3×3 rotation matrix (orientation)
//     T = [---+---]      p = 3×1 position vector
//         [ 0 | 1 ]      0 = 1×3 zero row
//
// Example: frame rotated 30° around Z, translated [100, 50, 200] mm
//
//         [ cos30  -sin30  0 | 100 ]   [ 0.866  -0.500  0.000 | 100 ]
//     T = [ sin30   cos30  0 |  50 ] = [ 0.500   0.866  0.000 |  50 ]
//         [   0       0    1 | 200 ]   [ 0.000   0.000  1.000 | 200 ]
//         [   0       0    0 |   1 ]   [ 0.000   0.000  0.000 |   1 ]
//
// To find a point P expressed in frame A:
//     P_world = T_world_A × P_A
//
// To chain multiple frames:
//     T_world_tool = T_world_base × T_base_link1 × T_link1_link2 × ... × T_flange_tool
//
// Key identity: T_A_B × T_B_A = Identity (inverse transform)
//     T_B_A = T_A_B^(-1)
// For a pure rotation (p = 0): T^(-1) = T^T (transpose)
// For a general rigid transform:
//     T^(-1) = [ R^T | -R^T·p ]
//              [  0  |    1   ]
A rotation matrix R is a 3×3 matrix that rotates a vector from one frame into another. It has three beautiful properties: (1) every column is a unit vector (|col| = 1), (2) all columns are orthogonal to each other (colᵢ · colⱼ = 0 for i≠j), (3) det(R) = +1 (not -1, which would be a reflection). These properties mean R has only 3 independent parameters — the 3 degrees of freedom of rotation — even though it contains 9 numbers. The three elementary rotations around X, Y, and Z axes are the building blocks. Any orientation in space can be reached by composing these three. The order of composition matters: rotations are not commutative (R₁R₂ ≠ R₂R₁ in general). This surprises beginners but becomes intuitive with practice.
# Elementary rotation matrices
import numpy as np

def Rx(theta_deg):
    """Rotation by theta degrees around the X axis"""
    t = np.radians(theta_deg)
    return np.array([
        [1, 0,          0         ],
        [0, np.cos(t), -np.sin(t)],
        [0, np.sin(t),  np.cos(t)]
    ])

def Ry(theta_deg):
    """Rotation by theta degrees around the Y axis"""
    t = np.radians(theta_deg)
    return np.array([
        [ np.cos(t), 0, np.sin(t)],
        [ 0,         1, 0        ],
        [-np.sin(t), 0, np.cos(t)]
    ])

def Rz(theta_deg):
    """Rotation by theta degrees around the Z axis"""
    t = np.radians(theta_deg)
    return np.array([
        [np.cos(t), -np.sin(t), 0],
        [np.sin(t),  np.cos(t), 0],
        [0,          0,         1]
    ])

# Example: ZYX Euler angles (Roll-Pitch-Yaw)
# Roll=10°, Pitch=20°, Yaw=30°
# Applied in order: first Z (yaw), then Y (pitch), then X (roll)
R_total = Rz(30) @ Ry(20) @ Rx(10)
print("Combined rotation matrix:")
print(np.round(R_total, 4))

# To rotate a vector:
v = np.array([1, 0, 0])  # X-axis unit vector
v_rotated = R_total @ v
print(f"Rotated vector: {np.round(v_rotated, 4)}")

# Verify orthonormality:
print(f"det(R) = {np.linalg.det(R_total):.6f}")  # Must be +1.0
print(f"R @ R.T = I? {np.allclose(R_total @ R_total.T, np.eye(3))}")
The 4×4 homogeneous transformation matrix is the single most important mathematical object in robotics. It combines rotation (3×3 top-left) and translation (3×1 top-right) into one matrix, enabling a single matrix multiplication to both rotate and translate a point simultaneously. This elegance is why it is universally used. Chaining transforms: to express frame C relative to frame A, knowing frame B relative to A and frame C relative to B: T_A_C = T_A_B × T_B_C. This chain is the entire mathematical basis of forward kinematics — each joint adds one transform, and multiplying them all together gives the end-effector position and orientation relative to the base. The inverse transform lets you go the other direction: T_B_A = T_A_B⁻¹.
# Homogeneous transforms — complete Python library
import numpy as np

def make_transform(R, p):
    """Build a 4x4 T from a 3x3 R and 3x1 position p"""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def T_from_RPY_xyz(roll, pitch, yaw, x, y, z):
    """Build a transform from Roll-Pitch-Yaw (degrees) + translation (mm)"""
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll)  # ZYX convention (Rx/Ry/Rz from above)
    return make_transform(R, [x, y, z])

def transform_point(T, point):
    """Transform a 3D point by a 4x4 matrix"""
    p_h = np.array([*point, 1.0])  # Homogeneous coordinates
    return (T @ p_h)[:3]           # Back to 3D

def invert_transform(T):
    """Efficient inverse of a rigid-body transform"""
    R = T[:3, :3]
    p = T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ p
    return T_inv

# === REAL EXAMPLE: 2-link robot arm ===
# Robot base at world origin, facing +X
# Joint 1 rotates around Z (vertical axis), θ1 = 45°
# Link 1 length = 300mm along X after rotation
# Joint 2 also rotates around Z (elbow), θ2 = -30°
# Link 2 length = 200mm
L1, L2 = 300, 200
th1, th2 = 45, -30  # degrees

# Transform: base → joint 1
T_base_j1 = make_transform(Rz(th1), [0, 0, 0])
# Transform: joint 1 → joint 2 (link 1 along X)
T_j1_j2 = make_transform(Rz(th2), [L1, 0, 0])
# Transform: joint 2 → end-effector (link 2 along X)
T_j2_ee = make_transform(np.eye(3), [L2, 0, 0])

# Full forward kinematics
T_ee = T_base_j1 @ T_j1_j2 @ T_j2_ee
print("End-effector position:")
print(f"  X = {T_ee[0,3]:.1f} mm")
print(f"  Y = {T_ee[1,3]:.1f} mm")
print(f"  Z = {T_ee[2,3]:.1f} mm")
print("End-effector orientation (rotation matrix):")
print(np.round(T_ee[:3,:3], 3))
Forward kinematics (FK) answers: "given the angle of every joint, where is the end-effector?" It is always solvable — there is exactly one answer for a given set of joint angles. For any serial robot, FK is a chain of transforms, one per joint, multiplied together. The Denavit-Hartenberg convention provides a systematic, unambiguous way to assign coordinate frames and write those transforms. FK is computed hundreds or thousands of times per second inside a robot controller.
Drag the joint angle sliders to move the robot. Watch the end-effector trace a path. Observe how each joint's contribution compounds through the kinematic chain. Try to reach the target point (red dot).
| Robot | DOF | Joint 1 α | Joint 2 α | Reach | Payload | IK solutions |
|---|---|---|---|---|---|---|
| PUMA-560 (classic) | 6 | -90° | 0° | 650mm | 2.5 kg | 8 |
| KUKA KR 6 R700 | 6 | -90° | 0° | 706mm | 6 kg | 8 |
| ABB IRB 1200 | 6 | -90° | 0° | 901mm | 7 kg | 8 |
| Universal Robots UR5e | 6 | 90° | 0° | 850mm | 5 kg | 16* |
| FANUC M-20iD | 6 | -90° | 0° | 1813mm | 20 kg | 8 |
| KUKA LBR iiwa 7 | 7 | -90° | 90° | 800mm | 7 kg | ∞ (redundant) |
* UR robots have non-spherical wrist geometry — more IK solutions possible.
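The "up to 8" in the table can be pictured as three independent binary configuration choices, one per subproblem of the closed-form solution. A quick sketch (the labels are illustrative, not vendor terminology):

```python
from itertools import product

# Each closed-form IK branch of a spherical-wrist 6-DOF robot combines
# three binary configuration choices (illustrative names):
choices = {
    'shoulder': ['front', 'back'],    # th1 vs th1 + 180°
    'elbow':    ['up', 'down'],       # +arccos vs -arccos for the elbow
    'wrist':    ['no-flip', 'flip'],  # th5 vs -th5 (th4, th6 shifted by 180°)
}
branches = list(product(*choices.values()))
print(len(branches))  # 8
for b in branches:
    print('/'.join(b))
```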
A 3-DOF planar robot (three revolute joints, all Z-axes parallel and pointing up) is the perfect teaching example. All motion stays in the XY plane, so the Z coordinate is always zero and orientation is just a single angle — the sum of all joint angles. Despite this simplicity, it captures every concept of forward kinematics. The end-effector position is the sum of each link's contribution: X = L₁cos(θ₁) + L₂cos(θ₁+θ₂) + L₃cos(θ₁+θ₂+θ₃). Physically: each term is the projection of one link onto the X-axis, taking into account all the rotations that came before it. The angle (θ₁+θ₂+θ₃) is the cumulative rotation — the absolute angle of the last link in the world frame.
# Forward kinematics: 3-DOF planar robot
import numpy as np

def fk_3dof_planar(L1, L2, L3, th1_deg, th2_deg, th3_deg):
    """
    Closed-form FK for a 3-DOF planar robot.
    All angles in degrees, link lengths in mm.
    Returns (x, y, phi, joint1_pos, joint2_pos) where phi is the
    end-effector orientation in the world frame.
    """
    th1 = np.radians(th1_deg)
    th2 = np.radians(th2_deg)
    th3 = np.radians(th3_deg)
    # Cumulative angles
    a12 = th1 + th2         # absolute angle after joint 2
    a123 = th1 + th2 + th3  # absolute angle after joint 3
    # End-effector position
    x = L1*np.cos(th1) + L2*np.cos(a12) + L3*np.cos(a123)
    y = L1*np.sin(th1) + L2*np.sin(a12) + L3*np.sin(a123)
    # End-effector orientation (angle of last link in world frame)
    phi = np.degrees(a123)
    # Intermediate joint positions (for visualisation)
    j1_x = L1*np.cos(th1)
    j1_y = L1*np.sin(th1)
    j2_x = j1_x + L2*np.cos(a12)
    j2_y = j1_y + L2*np.sin(a12)
    return x, y, phi, (j1_x, j1_y), (j2_x, j2_y)

# ── Test cases ──
L1, L2, L3 = 300, 200, 100  # mm
configs = [
    (0, 0, 0,     'Fully extended'),
    (0, 90, 0,    'Elbow up 90°'),
    (45, -45, 0,  'θ1=45°, θ2=-45°'),
    (90, 90, 90,  'Triple 90° — pointing back'),
    (30, 45, -30, 'Typical working pose'),
]
for th1, th2, th3, name in configs:
    x, y, phi, j1, j2 = fk_3dof_planar(L1, L2, L3, th1, th2, th3)
    reach = np.sqrt(x**2 + y**2)
    print(f"{name}:")
    print(f"  TCP: ({x:.1f}, {y:.1f}) mm, φ={phi:.1f}°")
    print(f"  Distance from base: {reach:.1f} mm (max={L1+L2+L3} mm)")
The workspace of a robot is the set of all positions its end-effector can reach. For a 2-DOF planar robot with links L₁ and L₂, the reachable workspace is an annulus (ring): outer radius = L₁ + L₂ (both joints fully extended), inner radius = |L₁ - L₂| (one joint folded completely back). Joint angle limits cut away portions of this ring — if joint 2 can only go from -150° to +150°, the inner exclusion zone changes. For 3D robots (6-DOF), the workspace becomes a complex 3D shape. Dexterous workspace — where every orientation is achievable — is much smaller. Boundary singularities occur at the outer and inner workspace boundaries (arm fully extended or fully folded). These are the configurations to avoid in motion planning.
# Workspace boundary analysis — 2-DOF planar robot
import numpy as np
import matplotlib.pyplot as plt

L1, L2 = 300, 200  # mm
# Joint limits
th1_min, th1_max = -170, 170  # degrees
th2_min, th2_max = -150, 150  # degrees

# Sample configurations and compute FK
reachable_pts = []
for th1 in np.linspace(th1_min, th1_max, 180):
    for th2 in np.linspace(th2_min, th2_max, 180):
        th1_r = np.radians(th1)
        th2_r = np.radians(th2)
        a12 = th1_r + th2_r
        x = L1*np.cos(th1_r) + L2*np.cos(a12)
        y = L1*np.sin(th1_r) + L2*np.sin(a12)
        reachable_pts.append((x, y))

# Plot the workspace
px = [p[0] for p in reachable_pts]
py = [p[1] for p in reachable_pts]
plt.figure(figsize=(8, 8))
plt.scatter(px, py, s=0.5, alpha=0.3, c='#F97316')
plt.axis('equal')
plt.grid(True, alpha=0.3)
plt.title('2-DOF Planar Robot Workspace')
plt.xlabel('X (mm)')
plt.ylabel('Y (mm)')

# Theoretical boundaries
theta = np.linspace(0, 2*np.pi, 360)
plt.plot((L1+L2)*np.cos(theta), (L1+L2)*np.sin(theta), 'k--', label=f'Outer: {L1+L2}mm')
plt.plot(abs(L1-L2)*np.cos(theta), abs(L1-L2)*np.sin(theta), 'r--', label=f'Inner: {abs(L1-L2)}mm')
plt.legend()
plt.savefig('workspace.png', dpi=150, bbox_inches='tight')

# Key workspace metrics:
print(f"Max reach: {L1+L2} mm")
print(f"Min reach: {abs(L1-L2)} mm")
print(f"Workspace samples: {len(reachable_pts)} points")
Inverse kinematics (IK) answers: "given a desired position (and orientation) for the end-effector, what joint angles must I set?" Unlike forward kinematics, IK may have zero, one, or many solutions. It is the hard problem of robotics. Every time a robot moves to a point you commanded — whether in a teach pendant, a CAM program, or a real-time vision-guided pick — the controller is solving IK. Understanding IK explains why robots sometimes cannot reach a point, why they take unexpected paths, and how to design robot workcells for maximum reachability.
Click anywhere in the canvas to set the target position. The IK solver computes both elbow-up and elbow-down solutions instantly. Observe how both solutions are geometrically valid. Try clicking outside the reachable workspace.
The 2-DOF planar robot IK is solved geometrically and teaches the fundamental concepts of all IK. Given a target position (Px, Py), we need joint angles θ₁ and θ₂. The distance from base to target is r = √(Px² + Py²). This distance must lie between |L₁-L₂| and L₁+L₂ — otherwise no solution exists (target out of workspace). The law of cosines gives the elbow angle directly: cos(θ₂) = (r² - L₁² - L₂²) / (2·L₁·L₂). Taking θ₂ = ±arccos(·) yields two solutions: "elbow up" and "elbow down" — the classic dual solution of all articulated robots. Then θ₁ is found using atan2. This two-solution geometry persists in 6-DOF robots as the "shoulder" and "elbow" configuration choices.
# Inverse kinematics: 2-DOF planar robot
import numpy as np

def ik_2dof_planar(Px, Py, L1, L2):
    """
    Closed-form IK for a 2-DOF planar robot.
    Returns ((th1_down, th2_down), (th1_up, th2_up)) in degrees,
    or (None, None) if the target is out of reach.
    """
    r2 = Px**2 + Py**2
    r = np.sqrt(r2)
    # --- Reachability check ---
    if r > L1 + L2:
        print(f"UNREACHABLE: target {r:.1f}mm exceeds max reach {L1+L2}mm")
        return None, None
    if r < abs(L1 - L2):
        print(f"UNREACHABLE: target {r:.1f}mm inside minimum reach {abs(L1-L2)}mm")
        return None, None
    # --- Elbow angle from the law of cosines ---
    # cos(θ2) = (r² - L1² - L2²) / (2·L1·L2)
    cos_th2 = (r2 - L1**2 - L2**2) / (2 * L1 * L2)
    cos_th2 = np.clip(cos_th2, -1, 1)  # Numerical safety
    # Two solutions: + = elbow down, - = elbow up
    th2_down = np.arccos(cos_th2)
    th2_up = -np.arccos(cos_th2)
    # --- Shoulder angle using atan2 ---
    # th1 = atan2(Py, Px) - atan2(L2·sin(th2), L1 + L2·cos(th2))
    def solve_th1(th2):
        beta = np.arctan2(L2 * np.sin(th2), L1 + L2 * np.cos(th2))
        return np.arctan2(Py, Px) - beta
    th1_down = solve_th1(th2_down)
    th1_up = solve_th1(th2_up)
    return (
        (np.degrees(th1_down), np.degrees(th2_down)),
        (np.degrees(th1_up), np.degrees(th2_up))
    )

# Example
L1, L2 = 300, 200
Px, Py = 250, 300
sol_down, sol_up = ik_2dof_planar(Px, Py, L1, L2)
if sol_down:
    th1_d, th2_d = sol_down
    th1_u, th2_u = sol_up
    print(f"Target: ({Px}, {Py}) mm")
    print(f"Elbow DOWN: θ1={th1_d:.2f}°, θ2={th2_d:.2f}°")
    print(f"Elbow UP:   θ1={th1_u:.2f}°, θ2={th2_u:.2f}°")
    # Verify by FK
    for th1, th2 in [sol_down, sol_up]:
        th1_r, th2_r = np.radians(th1), np.radians(th2)
        x = L1*np.cos(th1_r) + L2*np.cos(th1_r+th2_r)
        y = L1*np.sin(th1_r) + L2*np.sin(th1_r+th2_r)
        err = np.sqrt((x-Px)**2 + (y-Py)**2)
        print(f"  FK verify: ({x:.3f}, {y:.3f}) error={err:.6f}mm")
For 6-DOF robots, the IK problem involves 6 equations (position + orientation) with 6 unknowns (joint angles). Analytical (closed-form) solutions exist for robots where the last 3 joint axes intersect at a common point — the "wrist centre". This is the standard spherical wrist design used in almost all industrial robots (KUKA, ABB, FANUC, Yaskawa). Pieper's method (1968) decouples the position and orientation problems: (1) Find the wrist centre position from the desired TCP pose. (2) Solve the first 3 joints to position the wrist centre (geometric or algebraic). (3) Solve the last 3 joints to orient the end-effector (spherical wrist Euler angle solution). Each step yields 2 solutions, giving up to 2×2×2 = 8 total solutions.
# Analytical IK — 6-DOF robot with spherical wrist
# Conceptual implementation (simplified PUMA-type)
import numpy as np

def ik_6dof_spherical_wrist(T_desired, L1, L2, d4, d6, config='ULN'):
    """
    6-DOF IK with a spherical wrist (Pieper's method).
    T_desired: 4x4 target TCP transform
    L1, L2: upper arm / forearm lengths
    d4: wrist offset along z
    d6: tool length from wrist centre to TCP
    config: 'ULN' = Up/Left/No-flip, 'DRF' = Down/Right/Flip
    Returns a list of (th1..th6) solutions in degrees.
    """
    R = T_desired[:3, :3]
    p = T_desired[:3, 3]
    # ── STEP 1: Wrist centre ──────────────────────────────
    # Subtract the tool offset along the end-effector Z-axis:
    # wrist centre = TCP position - d6 × (last column of R)
    z_ee = R[:, 2]  # End-effector Z-axis in world frame
    p_wc = p - d6 * z_ee
    # ── STEP 2: Solve θ1 from wrist centre XY ─────────────
    # Shoulder rotation: atan2(Y_wc, X_wc)
    th1_A = np.arctan2(p_wc[1], p_wc[0])
    th1_B = th1_A + np.pi  # Second shoulder solution
    solutions = []
    for th1 in [th1_A, th1_B]:
        # Wrist centre in the shoulder frame
        r_wc = np.sqrt(p_wc[0]**2 + p_wc[1]**2)
        z_wc = p_wc[2]
        # ── STEP 3: Solve θ2, θ3 from wrist position ──────
        d = np.sqrt(r_wc**2 + z_wc**2)
        if d > L1 + L2 or d < abs(L1 - L2):
            continue  # Wrist centre unreachable
        cos_th3 = (d**2 - L1**2 - L2**2) / (2 * L1 * L2)
        cos_th3 = np.clip(cos_th3, -1, 1)
        for elbow_sign in [1, -1]:  # Up / Down
            th3 = elbow_sign * np.arccos(cos_th3)
            alpha = np.arctan2(z_wc, r_wc)
            beta = np.arctan2(L2*np.sin(th3), L1 + L2*np.cos(th3))
            th2 = alpha - beta
            # ── STEP 4: Solve θ4, θ5, θ6 from orientation ─
            # R_03 = Rz(th1)·Ry(th2)·Ry(th3)   [simplified]
            # R_36 = R_03^T · R_desired
            # Extract ZYZ Euler angles from R_36 for th4, th5, th6
            # (implementation depends on the specific robot's DH table)
            th4, th5, th6 = 0, 0, 0  # Placeholder — must be solved per robot
            solutions.append(
                tuple(np.degrees([th1, th2, th3, th4, th5, th6]))
            )
    return solutions

# Note: Real implementations use robot-specific DH tables.
# Industry libraries: RoboDK, KUKA.Sim, ABB RobotStudio,
# PyKDL, ikfast (OpenRAVE), TracIK (ROS)
When analytical IK is not available (redundant robots with more than 6-DOF, or non-standard geometry), numerical methods iteratively find joint angles that minimise the error between current and desired TCP pose. The Jacobian transpose method and pseudoinverse method are the foundations. At each iteration: compute current FK, compute Cartesian error, multiply by J⁺ (pseudoinverse) to get joint correction Δq, update joints q = q + α·Δq, repeat until error < tolerance. Convergence is not guaranteed near singularities. The damped least squares (DLS) method adds a damping factor λ: J⁺ = Jᵀ(JJᵀ + λ²I)⁻¹ — this prevents joint velocity explosion near singularities at the cost of slightly non-exact positioning. λ is often made variable: large near singularities, near-zero otherwise.
# Numerical IK — Jacobian-based iterative solver
import numpy as np

def numerical_ik(
    fk_func,          # Forward kinematics function: angles → T (4×4)
    jacobian_func,    # Jacobian function: angles → J (6×n)
    target_T,         # Desired 4×4 end-effector transform
    q_init,           # Initial joint angles (degrees)
    max_iter=100,
    alpha=0.5,        # Step size
    lambda_dls=0.01,  # Damping factor
    tol_pos=0.1,      # Position tolerance (mm)
    tol_ori=0.01      # Orientation tolerance (rad)
):
    """
    Damped Least Squares (Levenberg-Marquardt) IK solver.
    Works for any robot with a known FK and Jacobian.
    """
    q = np.array(q_init, dtype=float)
    target_p = target_T[:3, 3]
    target_R = target_T[:3, :3]
    for iteration in range(max_iter):
        # Current FK
        T_curr = fk_func(q)
        curr_p = T_curr[:3, 3]
        curr_R = T_curr[:3, :3]
        # Position error
        ep = target_p - curr_p  # 3×1
        # Orientation error (axis-angle representation)
        R_err = target_R @ curr_R.T
        # The skew-symmetric part gives the rotation vector
        eo = 0.5 * np.array([
            R_err[2,1] - R_err[1,2],
            R_err[0,2] - R_err[2,0],
            R_err[1,0] - R_err[0,1]
        ])
        # Combined error vector [pos_err; ori_err]
        e = np.concatenate([ep, eo])  # 6×1
        # Convergence check
        if np.linalg.norm(ep) < tol_pos and np.linalg.norm(eo) < tol_ori:
            print(f"Converged at iteration {iteration}")
            print(f"  Position error:    {np.linalg.norm(ep):.4f} mm")
            print(f"  Orientation error: {np.linalg.norm(eo):.6f} rad")
            return q, True
        # Jacobian at the current configuration
        J = jacobian_func(q)  # 6×n
        # Damped Least Squares pseudo-inverse:
        # J+ = J^T (J J^T + λ²I)^(-1)
        JJT = J @ J.T
        J_dls = J.T @ np.linalg.inv(JJT + lambda_dls**2 * np.eye(6))
        # Joint angle update
        dq = alpha * J_dls @ e
        q = q + dq
        if iteration % 10 == 0:
            print(f"Iter {iteration:3d}: pos_err={np.linalg.norm(ep):.2f}mm")
    print(f"WARNING: Did not converge after {max_iter} iterations")
    return q, False
The Jacobian is the matrix that maps joint velocities to end-effector velocities. It is the velocity-level equivalent of forward kinematics: ẋ = J(q)·q̇. But the Jacobian is far more than a velocity tool — it reveals singularities (configurations where the robot loses a degree of freedom), maps forces between joint space and task space, enables real-time motion control, and is the foundation of all modern impedance and force control. Understanding the Jacobian deeply separates a robotics engineer from someone who just programs waypoints.
The manipulability ellipse shows how easily the end-effector can move in each direction. A large, circular ellipse = well-conditioned (far from singularity). A flat, needle-shaped ellipse = near singularity. The condition number is the ratio of the largest to smallest semi-axis. Try stretching the arm fully (θ₂→0° or ±180°) to see the singularity collapse.
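The ellipse in the demo can be computed directly: the singular values of the positional Jacobian are its semi-axes. A short sketch for the 2-DOF planar arm used throughout this guide (`manip_ellipse_2dof` is a helper name chosen here):

```python
import numpy as np

def manip_ellipse_2dof(L1, L2, th1_deg, th2_deg):
    """Semi-axes and condition number of the velocity manipulability ellipse
    for a 2-DOF planar arm (closed-form Jacobian)."""
    th1, th2 = np.radians([th1_deg, th2_deg])
    a12 = th1 + th2
    J = np.array([[-L1*np.sin(th1) - L2*np.sin(a12), -L2*np.sin(a12)],
                  [ L1*np.cos(th1) + L2*np.cos(a12),  L2*np.cos(a12)]])
    sv = np.linalg.svd(J, compute_uv=False)  # semi-axes, mm/s per rad/s
    return sv[0], sv[1], sv[0] / sv[1]       # major, minor, condition number

# Stretching the arm (th2 → 0°) flattens the ellipse and blows up the
# condition number — the boundary singularity from the demo.
for th2 in [90, 45, 10, 1]:
    major, minor, cond = manip_ellipse_2dof(300, 200, 30, th2)
    print(f"th2={th2:3d}°: axes {major:7.1f} × {minor:6.1f}, cond={cond:8.1f}")
```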
The geometric Jacobian has one column per joint. For revolute joint i: the column is the cross product of the joint's z-axis with the vector from joint i to the end-effector (for the translational part), and simply the joint's z-axis (for the rotational part). For prismatic joint i: the translational part is the joint's z-axis (the direction of linear motion), and the rotational part is zero (prismatic joints don't rotate the end-effector). This geometric interpretation makes the Jacobian intuitive: each column tells you "how much does the end-effector move when this joint moves by one unit?" The column for a joint far from the end-effector has a large translational component; a joint right next to the end-effector contributes mainly rotation.
# Geometric Jacobian construction
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard DH transform Rz(theta)·Tz(d)·Tx(a)·Rx(alpha), angles in radians.
    (Defined here so this block is self-contained.)"""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st*ca,  st*sa, a*ct],
        [st,  ct*ca, -ct*sa, a*st],
        [ 0,  sa,     ca,    d   ],
        [ 0,  0,      0,     1   ]
    ])

def geometric_jacobian(joint_angles, dh_params, joint_types):
    """
    Compute the geometric Jacobian of a serial manipulator.
    dh_params: per joint (theta_offset, d, a, alpha)
    joint_types: list of 'R' (revolute) or 'P' (prismatic)
    Returns J (6 × n_joints)
    """
    n = len(joint_angles)
    J = np.zeros((6, n))
    # First, compute the forward transform up to each joint
    transforms = [np.eye(4)]  # T_0_0 = identity
    T = np.eye(4)
    for angle, params in zip(joint_angles, dh_params):
        Ti = dh_transform(angle + params[0], params[1], params[2], params[3])
        T = T @ Ti
        transforms.append(T.copy())
    # End-effector position
    p_ee = transforms[-1][:3, 3]
    # Build the Jacobian column by column
    for i in range(n):
        Ti = transforms[i]  # Transform up to joint i
        z_i = Ti[:3, 2]     # Z-axis of frame i (joint axis)
        p_i = Ti[:3, 3]     # Origin of frame i
        if joint_types[i] == 'R':  # Revolute joint
            J[:3, i] = np.cross(z_i, p_ee - p_i)  # Translational: z_i × (p_ee - p_i)
            J[3:, i] = z_i                        # Rotational: z_i
        else:                      # Prismatic joint
            J[:3, i] = z_i  # Translational: z_i (direction of motion)
            J[3:, i] = 0    # Rotational: zero (no rotation)
    return J

# Example: 2-DOF planar robot Jacobian
L1, L2 = 300, 200
th1, th2 = np.radians(30), np.radians(45)
# Manual closed-form Jacobian for the 2-DOF planar arm:
#     J = [-L1·s1 - L2·s12,  -L2·s12]
#         [ L1·c1 + L2·c12,   L2·c12]
# (for the 3-DOF planar case, add the rotational row [1, 1, 1])
a12 = th1 + th2
c1, s1 = np.cos(th1), np.sin(th1)
c12, s12 = np.cos(a12), np.sin(a12)
J_2dof = np.array([
    [-L1*s1 - L2*s12, -L2*s12],
    [ L1*c1 + L2*c12,  L2*c12]
])
print("Jacobian at θ1=30°, θ2=45°:")
print(np.round(J_2dof, 1))

# Condition number — measure of distance from singularity
cond = np.linalg.cond(J_2dof)
print(f"Condition number: {cond:.2f}")
print("  < 10: well-conditioned, >> 100: near singularity")

# Determinant (2×2 case)
det = np.linalg.det(J_2dof)
print(f"det(J) = {det:.1f} {'(non-singular)' if abs(det) > 1 else '(SINGULAR!)'}")
A kinematic singularity is a configuration where the Jacobian loses rank — its determinant becomes zero. Physically, the robot loses one or more degrees of freedom: the end-effector cannot move in at least one direction regardless of what the joints do. Singularities are not bugs — they are a fundamental geometric property of every non-trivial serial manipulator. There are three types for 6-DOF robots with spherical wrists: (1) Boundary singularities: arm fully extended or fully folded — wrist centre on the outer or inner workspace boundary. (2) Shoulder singularity: the wrist centre lies directly on the joint-1 axis, so θ₁ becomes indeterminate. (3) Wrist singularity (the most common): joint 5 angle = 0° — joints 4 and 6 become collinear and only their sum can be determined, not each individually. Detection: compute det(J) or the minimum singular value of J. Near a singularity: use damped least squares instead of J⁻¹.
# Singularity detection and damped least squares avoidance
import numpy as np

def singularity_measure(J):
    """
    Several measures of distance from singularity.
    J: m × n Jacobian matrix
    """
    # Singular Value Decomposition
    sigma = np.linalg.svd(J, compute_uv=False)
    min_sv = sigma[-1]  # Smallest singular value → 0 at a singularity
    max_sv = sigma[0]   # Largest singular value
    cond = max_sv / (min_sv + 1e-12)  # Condition number
    manip = np.prod(sigma)  # Yoshikawa manipulability measure
    # manip = sqrt(det(J @ J.T)) — proportional to the volume of the
    # manipulability ellipsoid
    return {
        'min_singular_value': min_sv,
        'condition_number': cond,
        'manipulability': manip,
        'singular_values': sigma,
        'is_singular': min_sv < 1e-3,
        'near_singular': min_sv < 5.0  # threshold is application-dependent
    }

def damped_least_squares_ik_step(J, e, lam_min=0.0, lam_max=0.05, sv_threshold=5.0):
    """
    One step of damped least squares (DLS) IK.
    Variable damping: large λ near a singularity, small λ far away.
    J: 6×n Jacobian
    e: 6×1 error vector
    Returns the joint correction dq and the damping λ used.
    """
    U, sigma, Vt = np.linalg.svd(J, full_matrices=False)
    # Variable damping based on the smallest singular value
    min_sv = sigma[-1]
    if min_sv < sv_threshold:
        # Ramp λ from lam_min to lam_max as the singularity approaches
        lam = lam_max * (1 - min_sv / sv_threshold)**2
    else:
        lam = lam_min
    # Damped pseudo-inverse applied through the SVD: σᵢ → σᵢ / (σᵢ² + λ²)
    sigma_damped = sigma / (sigma**2 + lam**2)
    dq = Vt.T @ np.diag(sigma_damped) @ U.T @ e
    return dq, lam

# ── Wrist singularity example ──
# When θ5 = 0° (or 180°), joints 4 and 6 are aligned.
# Only the sum (θ4 + θ6) is determined by the desired orientation.
# The controller must handle this with null-space projection or by locking one joint.
print("Singularity avoidance strategies:")
print("1. Path planning: avoid configurations where min_sv < threshold")
print("2. DLS: use the damped inverse, accept slight position error near singularities")
print("3. Configuration monitoring: alert the operator, stop or slow the motion")
print("4. Null-space motion: for redundant robots (7-DOF+), use the extra DOF")
print("   to maintain manipulability while tracking the desired path")
The Jacobian transpose maps forces and torques between Cartesian space and joint space. The principle of virtual work: if a Cartesian force/moment F acts on the end-effector, the equivalent joint torques are τ = Jᵀ·F. This is the dual of velocity kinematics (ẋ = J·q̇ ↔ τ = Jᵀ·F). This relationship is fundamental for force control: force sensors at the wrist measure F, and the controller commands joint torques τ = Jᵀ·F to produce compliant behavior. Impedance control combines position and force control: the robot behaves like a spring-damper system in Cartesian space. Near singularities, the Jacobian transpose becomes ill-conditioned: small joint torques produce large Cartesian forces (amplification), which is dangerous.
# Jacobian transpose force/torque mapping
import numpy as np

# Scenario: robot holding a workpiece against a surface.
# A 50 N contact force acts on the TCP in the -Z direction.
# What joint torques does this require?

# Example Jacobian (6×6 for a 6-DOF robot at some configuration)
# In practice, compute it from the robot geometry
J = np.array([
    [  0.0, -400.0, -200.0,  0.0, -150.0,  0.0],
    [350.0,    0.0,    0.0, 50.0,    0.0, 30.0],
    [  0.0,  350.0,  350.0,  0.0,   50.0,  0.0],
    [  0.0,    0.0,    0.0,  1.0,    0.0,  0.0],
    [  0.0,    1.0,    1.0,  0.0,    1.0,  0.0],
    [  1.0,    0.0,    0.0,  0.0,    0.0,  1.0]
])  # translational rows in mm, rotational rows dimensionless

# Cartesian wrench: force [Fx, Fy, Fz] and moment [Mx, My, Mz]
F_contact = np.array([0, 0, -50, 0, 0, 0])  # 50 N in -Z direction

# Joint torques via the Jacobian transpose: τ = J^T · F
# (convert the translational rows from mm to m so torques come out in N·m)
J_m = J.copy()
J_m[:3, :] /= 1000
tau = J_m.T @ F_contact
print("Contact force: 50 N in -Z direction")
print("Required joint torques:")
for i, t in enumerate(tau):
    print(f"  Joint {i+1}: {t:+.2f} N·m")

# Force ellipsoid analysis
# The force ellipsoid describes the directions in which the robot can
# apply/resist forces most effectively. It is the inverse of the velocity
# ellipsoid: a direction with a SMALL Jacobian singular value needs little
# joint torque per newton of end-effector force.
JJT = J_m[:3, :] @ J_m[:3, :].T        # Position part only
eigenvalues = np.linalg.eigvalsh(JJT)  # ascending order
sv = np.sqrt(eigenvalues)              # singular values of positional Jacobian (m)
print("\nForce transmission:")
print(f"  Best direction:  {1/sv[0]:.2f} N per N·m")
print(f"  Worst direction: {1/sv[-1]:.2f} N per N·m")
print(f"  Isotropy: {sv[0]/sv[-1]:.3f}")
print("  (1.0 = perfectly isotropic, 0 = at a singularity)")
A trajectory defines how the robot moves from one configuration to another over time — not just where to go, but how to get there. A trajectory has three components: geometry (the path shape in space), timing (when to be where along the path), and constraints (velocity, acceleration, jerk limits, and obstacle avoidance). Good trajectory planning makes robots fast without jerking parts loose, smooth without overshooting, and safe without triggering joint limit faults. Poor trajectory planning is the single most common cause of production downtime in robot cells.
Compare trapezoidal (LSPB), cubic polynomial, and quintic polynomial profiles for the same move. The position, velocity, and acceleration curves are shown. Observe how quintic produces zero velocity AND acceleration at start/end — the smoothest possible motion. The area under the velocity curve always equals the total displacement.
Simulate a pick-and-place sequence. Toggle blend zones (z50 vs fine) and observe the effect on total cycle time. The timeline shows each move segment. This demonstrates why 'fine' stop at every waypoint is the #1 cause of slow robot cells.
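The cycle-time effect of blending can be estimated with a toy model: a 'fine' point forces a full decelerate-to-zero and re-accelerate at every waypoint, while a blend zone lets the TCP cruise through. Assuming N identical trapezoidal segments that all reach cruise speed (every number here is illustrative):

```python
# Toy cycle-time model: N identical segments, trapezoidal profile.
# Assumption (illustrative): a blend zone removes the decel-to-zero plus
# re-accel pair between consecutive segments.
v_max = 1.0  # m/s
a_max = 4.0  # m/s^2
d     = 0.5  # m per segment
N     = 6    # segments in the cycle

# Extra time of one full decel+accel pair compared with cruising
# the same distance (for a trapezoid: total time = d/v + v/a)
t_ramp  = v_max / a_max
t_fine  = N * (d / v_max + t_ramp)  # 'fine' stop at every waypoint
t_blend = N * d / v_max + t_ramp    # one ramp pair for the whole path

print(f"fine stops: {t_fine:.2f} s")
print(f"blended:    {t_blend:.2f} s")
print(f"saved:      {t_fine - t_blend:.2f} s ({100 * (1 - t_blend / t_fine):.0f}%)")
```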
Joint space trajectories interpolate joint angles directly. The simplest: linear interpolation (constant velocity) — but this produces infinite acceleration at the start and end. Better: trapezoidal velocity profile (LSPB — Linear Segments with Parabolic Blends): constant acceleration phase, constant velocity phase, constant deceleration phase. Even better: cubic polynomial: q(t) = a₀ + a₁t + a₂t² + a₃t³, matching position and velocity at start and end (4 constraints, 4 coefficients). Best: quintic polynomial or minimum-jerk profiles that also constrain acceleration at endpoints, giving the smoothest possible motion. In practice, controllers use trapezoidal or S-curve (jerk-limited trapezoidal) for the vast majority of moves.
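The cubic case from this paragraph can be sketched directly: the four boundary conditions (position and velocity at both ends) give the four coefficients in closed form, with no matrix solve needed:

```python
import numpy as np

def cubic_coeffs(q0, qf, qd0, qdf, T):
    """Cubic q(t) = a0 + a1·t + a2·t² + a3·t³ matching
    q(0)=q0, q'(0)=qd0, q(T)=qf, q'(T)=qdf (closed-form solution)."""
    a0 = q0
    a1 = qd0
    a2 = (3 * (qf - q0) - (2 * qd0 + qdf) * T) / T**2
    a3 = (-2 * (qf - q0) + (qd0 + qdf) * T) / T**3
    return a0, a1, a2, a3

# Rest-to-rest move: 0° → 90° in 2 s
a = cubic_coeffs(0, 90, 0, 0, 2.0)
t = np.linspace(0, 2.0, 5)
q = a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3
print(np.round(q, 1))  # starts at 0, ends at 90, steepest at mid-move
```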
# Trajectory profile generators
import numpy as np
def trapezoid_profile(q_start, q_end, v_max, a_max, dt=0.001):
    """
    Trapezoidal velocity profile (LSPB).
    Returns time and joint position arrays.
    v_max: maximum velocity (deg/s)
    a_max: maximum acceleration (deg/s²)
    dt: time step (seconds)
    """
    delta_q = q_end - q_start
    sign = np.sign(delta_q)
    dq = abs(delta_q)
    # Time to reach v_max from zero
    t_acc = v_max / a_max
    # Distance covered during acceleration
    d_acc = 0.5 * a_max * t_acc**2
    if 2 * d_acc > dq:
        # Triangle profile (v_max never reached)
        v_peak = np.sqrt(a_max * dq)
        t_acc = v_peak / a_max
        d_acc = 0.5 * a_max * t_acc**2  # Recompute for the reduced peak velocity
        t_total = 2 * t_acc
        t_const = 0
    else:
        # Full trapezoid
        v_peak = v_max
        d_const = dq - 2 * d_acc
        t_const = d_const / v_max
        t_total = 2 * t_acc + t_const
    t = np.arange(0, t_total + dt, dt)
    q = np.zeros_like(t)
    for i, ti in enumerate(t):
        if ti <= t_acc:
            # Acceleration phase
            q[i] = 0.5 * a_max * ti**2
        elif ti <= t_acc + t_const:
            # Constant-velocity phase
            q[i] = d_acc + v_peak * (ti - t_acc)
        else:
            # Deceleration phase
            td = ti - t_acc - t_const
            q[i] = d_acc + v_peak * t_const + v_peak * td - 0.5 * a_max * td**2
    return t, q_start + sign * np.clip(q, 0, dq)
def quintic_poly(q0, qf, qd0, qdf, qdd0, qddf, T):
    """
    Quintic polynomial trajectory.
    Matches position, velocity, and acceleration at start and end.
    Returns coefficients [a0..a5] where q(t) = sum(ai * t^i)
    """
    # 6 boundary conditions → 6 coefficients
    # [1  0  0    0     0      0    ] [a0]   [q0  ]
    # [0  1  0    0     0      0    ] [a1]   [qd0 ]
    # [0  0  2    0     0      0    ] [a2] = [qdd0]
    # [1  T  T²   T³    T⁴     T⁵   ] [a3]   [qf  ]
    # [0  1  2T   3T²   4T³    5T⁴  ] [a4]   [qdf ]
    # [0  0  2    6T    12T²   20T³ ] [a5]   [qddf]
    M = np.array([
        [1, 0, 0,    0,      0,       0      ],
        [0, 1, 0,    0,      0,       0      ],
        [0, 0, 2,    0,      0,       0      ],
        [1, T, T**2, T**3,   T**4,    T**5   ],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4 ],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3]
    ])
    b = np.array([q0, qd0, qdd0, qf, qdf, qddf])
    a = np.linalg.solve(M, b)
    # Evaluate at time samples
    t = np.linspace(0, T, int(T / 0.001))
    q = sum(a[i] * t**i for i in range(6))
    return t, q, a
# Compare profiles for same move
t_trap, q_trap = trapezoid_profile(0, 90, 120, 480)
t_quin, q_quin, _ = quintic_poly(0, 90, 0, 0, 0, 0, 1.0)
print(f"Trapezoidal: {t_trap[-1]:.3f}s to move 90°")
print(f"Quintic: {t_quin[-1]:.3f}s (same time, smoother motion)")
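The cubic profile described above (four boundary conditions, four coefficients) can be generated the same way as the quintic. A sketch following the same conventions — the name `cubic_poly` is ours, not a standard API:

```python
import numpy as np

def cubic_poly(q0, qf, qd0, qdf, T):
    """
    Cubic polynomial trajectory: q(t) = a0 + a1·t + a2·t² + a3·t³.
    Matches position and velocity at start and end (4 constraints).
    """
    M = np.array([
        [1, 0, 0,    0     ],   # q(0)  = q0
        [0, 1, 0,    0     ],   # q'(0) = qd0
        [1, T, T**2, T**3  ],   # q(T)  = qf
        [0, 1, 2*T,  3*T**2]    # q'(T) = qdf
    ])
    a = np.linalg.solve(M, np.array([q0, qd0, qf, qdf]))
    t = np.linspace(0, T, 1001)
    q = sum(a[i] * t**i for i in range(4))
    return t, q, a

# Same 90° rest-to-rest move: the cubic works out to q(t) = 270t² - 180t³,
# so the peak velocity (at t = 0.5 s) is 135°/s — higher than trapezoidal,
# because the cubic has no constant-velocity cruise phase.
t_cub, q_cub, a_cub = cubic_poly(0, 90, 0, 0, 1.0)
print(f"End position: {q_cub[-1]:.1f}°, "
      f"peak velocity ≈ {np.max(np.gradient(q_cub, t_cub)):.0f}°/s")
```

Note the trade-off: for the same duration, the smoother the profile (cubic, quintic), the higher the required peak velocity and acceleration.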
Cartesian trajectory planning interpolates the TCP position and orientation directly in task space. Linear move (LIN): the TCP travels in a straight line from A to B. The orientation interpolates smoothly using quaternion SLERP. At each small time step, the target Cartesian pose is converted to joint angles via IK. This is used for welding seams, cutting paths, and any application where the tool path shape matters. Arc move (CIRC): defines a circle through three points. The robot follows the arc at constant TCP speed. Critical difference from joint space: Cartesian paths can pass through singularities, causing the robot to stop or behave erratically. Always check the path for singularity proximity before running at production speed.
# Cartesian trajectory planning with quaternion interpolation
import numpy as np
def quat_from_rotation_matrix(R):
    """Extract quaternion [w, x, y, z] from 3×3 rotation matrix."""
    trace = R[0,0] + R[1,1] + R[2,2]
    if trace > 0:
        s = 0.5 / np.sqrt(trace + 1)
        w = 0.25 / s
        x = (R[2,1] - R[1,2]) * s
        y = (R[0,2] - R[2,0]) * s
        z = (R[1,0] - R[0,1]) * s
    else:
        # Simplified — use full Shepperd's method in production
        w, x, y, z = 1, 0, 0, 0
    q = np.array([w, x, y, z])
    return q / np.linalg.norm(q)
def slerp(q0, q1, t):
    """
    Spherical Linear Interpolation between two quaternions.
    t=0 → q0, t=1 → q1.
    Produces the shortest, constant-speed rotation.
    """
    # Ensure same hemisphere (shortest path)
    if np.dot(q0, q1) < 0:
        q1 = -q1
    dot = np.clip(np.dot(q0, q1), -1, 1)
    if dot > 0.9995:
        # Very close — linear interpolation
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    angle = np.arccos(dot)
    return (np.sin((1-t)*angle)*q0 + np.sin(t*angle)*q1) / np.sin(angle)
def linear_cartesian_trajectory(
    T_start, T_end, v_tcp=100, a_tcp=500, dt=0.001
):
    """
    Plan a straight-line TCP trajectory from T_start to T_end.
    v_tcp: TCP speed (mm/s)
    a_tcp: TCP acceleration (mm/s²)
    dt: time step (s)
    Returns list of 4×4 target transforms at each time step.
    """
    p0 = T_start[:3, 3]
    p1 = T_end[:3, 3]
    R0 = T_start[:3, :3]
    R1 = T_end[:3, :3]
    q0 = quat_from_rotation_matrix(R0)
    q1 = quat_from_rotation_matrix(R1)
    distance = np.linalg.norm(p1 - p0)
    # Trapezoidal speed profile for the TCP
    t_acc = v_tcp / a_tcp
    d_acc = 0.5 * a_tcp * t_acc**2
    if 2 * d_acc > distance:
        v_peak = np.sqrt(a_tcp * distance)
        t_acc = v_peak / a_tcp
        d_acc = 0.5 * a_tcp * t_acc**2  # Recompute for reduced peak speed
        t_total = 2 * t_acc
    else:
        v_peak = v_tcp
        t_total = 2 * t_acc + (distance - 2 * d_acc) / v_tcp
    waypoints = []
    for ti in np.arange(0, t_total + dt, dt):
        # Distance travelled along the path under the trapezoidal profile
        if ti <= t_acc:
            d = 0.5 * a_tcp * ti**2
        elif ti <= t_total - t_acc:
            d = d_acc + v_peak * (ti - t_acc)
        else:
            td = max(t_total - ti, 0)
            d = distance - 0.5 * a_tcp * td**2
        s = min(d / (distance + 1e-9), 1.0)  # Normalised path parameter 0→1
        # Position: linear interpolation along the line
        p = p0 + s * (p1 - p0)
        # Orientation: SLERP
        q = slerp(q0, q1, s)
        # Convert quaternion back to rotation matrix
        w, x, y, z = q
        R = np.array([
            [1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)  ],
            [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)  ],
            [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]
        ])
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = p
        waypoints.append(T)
    return waypoints, t_total
print("SLERP interpolates the SHORTEST rotation between two orientations.")
print("Unlike Euler angle interpolation, SLERP never:")
print(" - Goes through unexpected intermediate orientations")
print(" - Suffers from gimbal lock")
print(" - Produces non-constant angular velocity")
print("Most modern robot controllers use quaternion SLERP for orientation interpolation.")
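The constant-angular-velocity claim can be checked numerically. This standalone sketch inlines a minimal `slerp` (so the snippet runs on its own) and measures the rotation covered between consecutive samples of a 90° turn about Z:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions [w, x, y, z]."""
    if np.dot(q0, q1) < 0:
        q1 = -q1
    dot = np.clip(np.dot(q0, q1), -1, 1)
    if dot > 0.9995:
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    angle = np.arccos(dot)
    return (np.sin((1 - t) * angle) * q0 + np.sin(t * angle) * q1) / np.sin(angle)

# Identity orientation vs a 90° rotation about Z (half-angle 45° in the quaternion)
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])

steps = [slerp(q0, q1, t) for t in np.linspace(0, 1, 11)]
# Rotation angle between consecutive samples: 2·arccos(|q_a · q_b|)
deltas = [2 * np.degrees(np.arccos(np.clip(abs(np.dot(a, b)), -1, 1)))
          for a, b in zip(steps, steps[1:])]
print([round(d, 3) for d in deltas])  # every step covers exactly 9° of rotation
```

Repeating the experiment with componentwise Euler-angle interpolation generally does not give equal per-step rotations, which is why controllers interpolate orientation with quaternions.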
In production, every millisecond of cycle time affects throughput and payback period. A robot performing 1,000 pick-and-place cycles per hour that is sped up by 0.1 seconds per cycle saves 100 seconds of robot time per hour — about 28 hours of extra capacity per 1,000 operating hours. Cycle time optimisation techniques: (1) Blend (zone) between moves — instead of stopping at each waypoint, the robot begins the next move while still approaching the current one, rounding corners at speed. (2) Reduce travel distance — optimise approach and retreat vectors to be as short as possible. (3) Use the maximum allowed speed for each segment — not every segment needs to be slow. (4) Parallelise — trigger peripherals (grippers, conveyors) during robot motion, not after it stops. (5) Increase acceleration — many modern robots can run higher accelerations than their conservative defaults, provided the payload is declared correctly.
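The payback arithmetic above as a quick sketch (the function name and the 1,000-operating-hours figure are illustrative assumptions, not from any standard):

```python
def cycle_time_savings(cycles_per_hour, seconds_saved_per_cycle,
                       operating_hours_per_year):
    """Hours of extra capacity per year gained from a per-cycle time saving."""
    seconds_saved_per_hour = cycles_per_hour * seconds_saved_per_cycle
    return seconds_saved_per_hour * operating_hours_per_year / 3600.0

# 1000 cycles/h, 0.1 s saved per cycle, assumed 1000 operating hours/year
print(f"{cycle_time_savings(1000, 0.1, 1000):.1f} extra hours/year")  # → 27.8
```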
// Industrial robot cycle time analysis (pseudocode/ABB RAPID style)
//
// ═══════════════════════════════════════════════════
// Example: pick-and-place with cycle time optimisation
// ═══════════════════════════════════════════════════
//
// BEFORE optimisation (stop at each point):
// MoveJ pApproach, v500, fine, tool0; // to approach
// MoveL pPick, v100, fine, tool0; // slow down to pick
// GripperClose; // grip
// MoveL pApproach, v100, fine, tool0; // retreat
// MoveJ pPlaceApp, v500, fine, tool0; // to place approach
// MoveL pPlace, v100, fine, tool0; // slow down to place
// GripperOpen; // release
// MoveL pPlaceApp, v100, fine, tool0; // retreat
// Cycle time: ~4.2 seconds
//
// AFTER optimisation (blend zones + parallelism):
// MoveJ pApproach, v1000, z50, tool0; // approach at full speed, blend
// MoveL pPick, v100, fine, tool0; // slow for pick only
// GripperClose; // grip
// MoveL pApproach, v300, z30, tool0; // faster retreat, blend
// MoveJ pPlaceApp, v1000, z50, tool0; // fast transit, blend
// MoveL pPlace, v150, fine, tool0; // slow down to place
// GripperOpen; // can overlap with next move start
// MoveL pPlaceApp, v300, z50, tool0; // faster retreat
// Cycle time: ~2.1 seconds (50% improvement!)
//
// Zone (z) values: z5=5mm, z10=10mm, z50=50mm, z100=100mm
// z50 means: start next move when within 50mm of waypoint
// fine = stop exactly at point (needed only for pick/place)
//
// KEY RULE: Only use 'fine' (exact stop) where the process
// actually requires it. Using 'fine' everywhere is the
// most common cause of unnecessarily long cycle times.
//
// Parallel execution example (ABB RAPID):
// MoveJ pTransit, v1000, z100, tool0;
// WaitDI diConveyorReady, 1; // Wait for conveyor while moving!
// — Robot is moving AND waiting simultaneously —
// MoveL pNextPick, v200, fine, tool0;
Kinematics theory meets reality in the diversity of robot architectures and the demands of industrial applications. Every robot architecture — SCARA, delta, Cartesian, 6-DOF serial, 7-DOF cobot, parallel — has its own kinematic equations, workspace shape, and optimal application domain. Understanding which architecture to choose, and how to deploy it safely, is the final step from theory to engineering practice.
Answer the application requirements and get a scored recommendation for each robot architecture. This mirrors the kind of engineering decision matrix system integrators use.
| Standard | Scope | Key requirement |
|---|---|---|
| ISO 10218-1 | Industrial robot design | Robot manufacturer safety requirements (safety-rated stops, speed monitoring, force limits) |
| ISO 10218-2 | Robot integration | System integrator requirements — risk assessment, safeguarding, installation |
| ISO/TS 15066 | Collaborative operation | PFL force/pressure limits by body region, SSM speed/separation requirements |
| EN ISO 13849-1 | Safety functions | Performance Level (PL a–e) for safety functions — PLd or PLe required for most robot safety |
| IEC 62061 | Functional safety | Safety Integrity Level (SIL 1–3) — alternative to EN ISO 13849-1 for complex systems |
| ANSI/RIA R15.06 | North America robots | US/Canada equivalent of ISO 10218, mandatory for OSHA compliance |
Serial manipulators (KUKA KR, ABB IRB, FANUC R, Yaskawa Motoman): joints in a chain, each motor carries all subsequent links. Large workspace but limited stiffness — error compounds through the chain. Used for: welding, painting, machine tending, assembly. Parallel robots (ABB FlexPicker/delta, Gough-Stewart platform): all actuators connect directly to the end-effector platform via parallel kinematic chains. Very high stiffness, high speed, light moving mass — but small workspace and complex kinematics. Used for: high-speed pick-and-place (food, pharma, electronics), flight simulators, hexapod machining. SCARA (Selective Compliance Assembly Robot Arm): 3-DOF in horizontal plane + Z-axis. Extremely fast for planar assembly. Limited to horizontal work. Used for: PCB assembly, small part insertion, dispensing. Cartesian / Gantry: XYZ slides, often 3-DOF. Large work volume, high stiffness, simple kinematics (no rotational coupling). Used for: CNC, large format printing, palletising over large areas.
// Robot architecture selection guide
//
// ═══════════════════════════════════════════════════════════
// Architecture Workspace Speed Stiffness DOF
// ───────────────────────────────────────────────────────────
// 6-DOF serial Large Medium Low-Med 6
// 7-DOF cobot Large Low-Med Med 7
// SCARA Medium(2D) High High(XY) 4
// Delta/FlexPick Small Very High High 3+1
// Cartesian/gant Very Large Low-Med Very High 3-5
// Stewart/hexapd Small Med Very High 6
//
// ── Speed comparison (cycle time for 25mm pick-and-place) ──
// ABB FlexPicker IRB 360: 0.58 s / pick (up to 150 picks/min)
// FANUC M-3iA (delta): 0.60 s / pick (up to 130 picks/min)
// Adept SCARA eCobra 600: 0.45 s / pick (up to 120 picks/min)
// ABB IRB 1200 (6-DOF): 0.90 s / pick (up to 67 picks/min)
// Universal Robots UR5e: 1.20 s / pick (up to 50 picks/min)
//
// ── Payload vs speed tradeoff ──
// Higher payload → larger motors → heavier arm → lower max speed
// Low payload (≤3kg): delta, SCARA — maximum throughput
// Med payload (3-20kg): SCARA, 6-DOF mid-size
// High payload (20-100kg): 6-DOF, gantry
// Very high (100-1300kg): Large 6-DOF (automotive body, aerospace)
//
// ── Reach vs payload chart (6-DOF serial robots) ──
// Payload 5kg → KUKA KR 6 R700: 700mm reach
// Payload 10kg → ABB IRB 1200: 901mm reach
// Payload 20kg → FANUC M-20iD: 1813mm reach
// Payload 50kg → KUKA KR 60: 2033mm reach
// Payload 120kg → ABB IRB 6700: 3200mm reach
// Payload 210kg → FANUC R-2000iC: 2655mm reach
// Payload 1300kg → KUKA KR 1300 PA: 3300mm reach
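The architecture comparison above can be turned into a simple weighted scoring sketch, in the spirit of the selection widget. The scores and weights here are illustrative assumptions, not industry data:

```python
# Illustrative 0-10 scores per criterion: (payload, speed, stiffness, workspace)
SCORES = {
    "6-DOF serial": (8, 5, 4, 9),
    "SCARA":        (5, 9, 7, 4),
    "Delta":        (2, 10, 8, 2),
    "Cartesian":    (9, 4, 10, 10),
}

def rank_architectures(weights):
    """Rank architectures by weighted criterion scores.
    weights: (payload, speed, stiffness, workspace), summing to 1."""
    totals = {name: sum(w * s for w, s in zip(weights, scores))
              for name, scores in SCORES.items()}
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Speed-dominated, light-payload pick-and-place application
for name, score in rank_architectures((0.05, 0.7, 0.05, 0.2)):
    print(f"{name}: {score:.2f}")
# Delta ranks first — consistent with the throughput comparison above
```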
Cobots (Universal Robots, FANUC CR, KUKA LBR iiwa, ABB YuMi, Doosan, Techman) are designed to work safely alongside humans without traditional safety caging. They achieve safety through power and force limiting (PFL), speed and separation monitoring (SSM), and hand guiding. The KUKA LBR iiwa and Franka Emika have 7 DOF — one more than the minimum 6 needed for full position and orientation control. This extra DOF (redundancy) allows the robot to change its arm configuration while keeping the end-effector fixed — like a human moving their elbow sideways while holding a coffee cup still. This enables singularity avoidance, obstacle avoidance, and optimisation of joint torques during motion. The redundancy is resolved by null-space projection: secondary motions are projected into the null space of the Jacobian, so they reshape the arm posture without disturbing the end-effector.
# 7-DOF redundant robot: null-space motion
import numpy as np
def null_space_projection(J):
    """
    Compute null-space projector for a Jacobian J (m × n, n > m).
    N = I - J^+ J
    Any vector in the null space: dq = N · z (for any z)
    produces dq that does NOT change end-effector velocity.
    """
    J_pinv = np.linalg.pinv(J)  # Moore-Penrose pseudo-inverse
    N = np.eye(J.shape[1]) - J_pinv @ J
    return N, J_pinv

def redundant_ik_step(J, e, q, q_preferred, w_null=0.3):
    """
    IK step for redundant robot (n > 6).
    Uses null-space motion to avoid singularities or joint limits.
    e: 6×1 Cartesian error
    q: current joint angles (n×1)
    q_preferred: preferred joint configuration (n×1)
    w_null: null-space weight (how strongly to attract to preferred)
    Returns delta_q.
    """
    N, J_pinv = null_space_projection(J)
    # Primary task: minimise Cartesian error
    dq_primary = J_pinv @ e
    # Secondary task (null-space): move toward preferred configuration
    # This could be: maximise manipulability, avoid joint limits,
    # avoid obstacles, minimise torque, etc.
    z = w_null * (q_preferred - q)  # Gradient toward preferred config
    dq_secondary = N @ z
    return dq_primary + dq_secondary
# Cobot operating modes (ISO 10218 / ISO/TS 15066)
print("Cobot operating modes (ISO 10218 / ISO/TS 15066):")
print("")
print("SSM — Speed and Separation Monitoring:")
print(" Measure distance to human (LiDAR/camera)")
print(" Reduce TCP speed proportional to proximity")
print(" v_tcp_max = v_safe × d_human / d_safety_zone")
print(" At contact distance: stop within 0.15s (protective stop)")
print("")
print("PFL — Power and Force Limiting:")
print(" Robot continuously monitors joint torques")
print(" Contact force = torque_measured - torque_expected (from model)")
print(" At contact force > F_limit: enter safe stop")
print(" F_limit per ISO/TS 15066 Table A.2: 65-160N by body region")
print("")
print("HG — Hand Guiding:")
print(" Operator holds handle with 6-axis F/T sensor")
print(" Robot follows applied force in Cartesian admittance mode")
print(" F_applied → Cartesian velocity via admittance controller")
print(" τ = J^T F_applied (Jacobian transpose mapping)")
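The SSM speed-scaling rule printed above can be sketched as a small function. The function name, the clamping behaviour, and the example numbers are illustrative assumptions, not from any particular controller:

```python
def ssm_speed_limit(v_safe, d_human, d_safety_zone, d_stop=0.0):
    """Speed and Separation Monitoring sketch: scale the allowed TCP speed
    with the measured human distance (v = v_safe * d_human / d_safety_zone,
    clamped to v_safe). At or inside the stop distance, command a
    protective stop (speed 0)."""
    if d_human <= d_stop:
        return 0.0
    scale = min(d_human / d_safety_zone, 1.0)
    return v_safe * scale

print(ssm_speed_limit(250, 2.0, 4.0))  # half the zone distance → 125.0 mm/s
print(ssm_speed_limit(250, 5.0, 4.0))  # outside the zone      → 250.0 mm/s
print(ssm_speed_limit(250, 0.0, 4.0))  # at contact            → 0.0 (stop)
```

A real SSM implementation must also account for sensor latency, robot stopping distance, and human approach speed per ISO/TS 15066's protective separation distance formula.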
A nominal robot model (DH parameters from the datasheet) has errors. Link lengths are machined to ±0.1 mm, joint offsets differ from the nominal by up to ±0.5°, gear elasticity introduces compliance under load, and thermal expansion shifts everything by ±0.05 mm/°C. An uncalibrated 6-DOF robot has absolute positioning accuracy of 0.5–5 mm even though repeatability (returning to the same point) is ±0.02–0.05 mm. Calibration identifies the true DH parameters by measuring the actual TCP position at many configurations with an external reference (laser tracker, photogrammetry, calibrated gauge). After calibration, absolute accuracy improves to ±0.1–0.3 mm. This is essential for offline programming (OLP) where programs are generated in simulation and run on the real robot without teaching every point.
# Robot geometric calibration — identification of real DH parameters
import numpy as np
from scipy.optimize import least_squares
def fk_with_params(q_all, nominal_dh, param_deltas):
    """
    FK with perturbed DH parameters for calibration.
    nominal_dh: n × 4 array of nominal [theta_off, d, a, alpha]
    param_deltas: flattened array of parameter corrections
    Returns the TCP position for each configuration.
    """
    # Reshape deltas: 4 corrections per joint (Δθ_off, Δd, Δa, Δα)
    deltas = param_deltas.reshape(len(nominal_dh), 4)
    true_dh = nominal_dh + deltas
    results = []
    for q in q_all:
        T = np.eye(4)
        for dh, qi in zip(true_dh, q):
            th_off, d, a, alpha = dh
            T = T @ dh_transform(qi + th_off, d, a, alpha)
        results.append(T[:3, 3])  # TCP position
    return results

def calibration_residuals(param_deltas, q_all, measured_positions, nominal_dh):
    """
    Residual function for least-squares calibration.
    Minimise sum of (FK_position - measured_position)² over all configs.
    """
    predicted = fk_with_params(q_all, nominal_dh, param_deltas)
    residuals = []
    for pred, meas in zip(predicted, measured_positions):
        residuals.extend(pred - meas)  # 3 residuals per configuration
    return residuals
# ── Calibration procedure ──
# 1. Move robot to 30-50+ diverse configurations
# 2. Measure TCP position with laser tracker (±0.05mm accuracy)
# 3. Record joint angles for each configuration
# 4. Run optimisation to find param_deltas that minimise FK errors
#
# result = least_squares(
# calibration_residuals, x0=np.zeros(n_joints * 4),
# args=(joint_configs, measured_positions, nominal_dh),
# method='lm' # Levenberg-Marquardt
# )
# calibrated_dh = nominal_dh + result.x.reshape(n_joints, 4)
print("Calibration accuracy benchmarks:")
print(" Uncalibrated robot (nominal DH): ±1-5mm absolute")
print(" After geometric calibration (DH): ±0.1-0.5mm absolute")
print(" After full calibration (+ compliance): ±0.05-0.2mm absolute")
print(" Robot repeatability (unchanged): ±0.02-0.05mm")
print("")
print("Calibration tools:")
print(" Leica Absolute Tracker AT960: ±0.015mm (gold standard)")
print(" FARO Laser Tracker ION: ±0.025mm")
print(" Nikon Metrology MV224: ±0.030mm")
print(" Photogrammetry (cameras): ±0.1mm (low cost)")
print(" Robot flange F/T + gauge ball: ±0.2mm (in-situ, no tracker)")
Forward kinematics calculations, DH parameters, Jacobian singularities, trajectory profiles, IK solutions, cobot safety standards — real engineering questions.
A 2-DOF planar robot has link lengths L1 = 300 mm and L2 = 200 mm. With joint angles θ1 = 30° and θ2 = 45°, what is the X coordinate of the end-effector?
What is a "Denavit-Hartenberg (DH) parameter" and why are there exactly 4 parameters per joint?
A robot's Jacobian matrix has a determinant of zero at a particular configuration. What does this mean?
What is the difference between "joint space" and "Cartesian space" trajectory planning, and when should you use each?
A 6-DOF robot arm has inverse kinematics with 8 possible solution configurations (elbow up/down, shoulder left/right, wrist flipped). How does the robot controller choose which solution to use?
What is the "workspace" of a robot manipulator and what limits it?
A robot joint rotates from 0° to 90° in 1.0 second using a trapezoidal velocity profile with an acceleration phase of 0.25 s. What is the peak velocity?
What is "orientation representation" and why are Euler angles problematic for robot control?
What is the "tool centre point" (TCP) and why must it be calibrated accurately?
A collaborative robot (cobot) operates near humans. The ISO/TS 15066 standard defines power and force limiting (PFL). What contact force limits apply to sensitive body regions?