I/O

Analog I/O

Analog signals carry continuous, real-world quantities — temperature, pressure, flow, position, force — into the PLC as electrical signals. Where a digital input tells you a valve is open or closed, an analog input tells you exactly how far open it is, what pressure is behind it, and how fast the flow is. Mastering analog I/O means understanding the physics of transducers, the mathematics of signal conversion, noise immunity, calibration, and how every engineering value connects to raw counts on the backplane.

Depth: Beginner → Expert · Sections: 6 (simulator in each) · Simulators: 9 interactive tools · Quiz: 10 MCQ with scoring


〜 Fundamentals of Analog Signals

An analog signal is a continuously variable electrical quantity — voltage or current — that represents a physical measurement. Unlike a digital signal with two states, an analog signal can take any value within its range, carrying precise information about the real world. Understanding what analog signals are, how they are standardised, and why certain standards exist is the foundation of every instrumentation and control project.

〜 Live 4–20 mA Scaling Calculator

Drag the loop current or raw counts. Watch engineering value, wire-break detection, and percentage position update in real time.

📊 Resolution vs Accuracy — visual comparison

Change bit depth and accuracy. See how quantisation steps relate to the accuracy band — and when adding more bits stops making any practical difference.


Signal Standards: 4–20 mA, 0–10 V, and Why They Exist

Industrial analog signals are standardised to allow interoperability between instruments from different manufacturers. The dominant standards are 4–20 mA current loop (the universal process industry standard), 0–10 V (common in drives and HVAC), 0–20 mA (less common, no wire-break detection), ±10 V (motion control), and 1–5 V (older DCS systems). Current loops are preferred over voltage for long cable runs because current is immune to cable resistance — a 4 mA signal through 500 Ω of cable resistance still reads exactly 4 mA. Voltage signals suffer a drop of V = I × R and require high-impedance receivers to minimise loading error.

Standard   Range           Wire-break detect   Max cable length     Primary use
4–20 mA    4–20 mA         Yes (< 3.8 mA)      ~1–2 km (0.5 mm²)    Process instruments
0–10 V     0–10 V DC       No                  ~100 m               Drives, HVAC, PLCs
±10 V      −10 to +10 V    No                  ~100 m               Servo / motion control
0–5 V      0–5 V DC        No                  ~50 m                Sensors, embedded
1–5 V      1–5 V DC        Yes (< 0.8 V)       ~100 m               Legacy DCS (NAMUR)
// Signal standard comparison
//
//  Standard    Range        Resolution  Wire-break?  Application
//  ─────────── ──────────── ────────────────────────────────────
//  4–20 mA     4–20 mA     —           YES (< 4mA)  Process, safety
//  0–20 mA     0–20 mA     —           NO           Less common
//  0–10 V      0–10 V      —           NO           Drives, HVAC
//  ±10 V       -10 to +10V —           NO           Motion/servo
//  1–5 V       1–5 V       —           YES (< 1V)   Legacy DCS
//  0–5 V       0–5 V       —           NO           Sensors, ADC
//  HART        4–20 mA     ±0.1%       YES          Smart instruments
//
// Current loop advantages over voltage:
//   Immunity to cable resistance: V_load = I × R_load (not cable)
//   Max cable resistance for 4-20mA: (V_supply - V_transmitter) / I_max
//   E.g. 24V supply, 12V transmitter, 20mA max:
//   R_max = (24 - 12) / 0.020 = 600 Ω
//   At 23 Ω/km (0.75mm²): R_max distance = 600/23 = 26 km (round trip)
//   → 13 km one-way maximum cable length
//
// For voltage signals:
//   V_at_PLC = V_source × R_input / (R_cable + R_input)
//   At R_input = 100kΩ, R_cable = 100Ω:
//   Loading error = 100 / (100 + 100000) = 0.001 = 0.1% (acceptable)
//   At R_input = 1kΩ (poor design):
//   Loading error = 100 / (100 + 1000) = 9.1% (unacceptable!)
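The cable-resistance budget and the voltage loading error above are pure arithmetic, so they can be sanity-checked off-line. A minimal sketch in Python (function names are illustrative, not from any library):

```python
def max_loop_resistance(v_supply, v_transmitter_min, i_max_a):
    """Resistance budget left for cable + load in a 4-20 mA loop."""
    return (v_supply - v_transmitter_min) / i_max_a

def voltage_loading_error(r_cable, r_input):
    """Fraction of a voltage signal dropped across the cable resistance."""
    return r_cable / (r_cable + r_input)

r_max = max_loop_resistance(24.0, 12.0, 0.020)
print(r_max)                                    # 600 ohm budget
print(r_max / 23.0)                             # ~26 km round trip at 23 ohm/km
print(voltage_loading_error(100.0, 100_000.0))  # ~0.001 -> 0.1 % (acceptable)
print(voltage_loading_error(100.0, 1_000.0))    # ~0.091 -> 9.1 % (unacceptable)
```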

Engineering Value ↔ Raw Count Conversion

Every PLC analog input card converts the electrical signal (mA or V) to a raw integer count using its ADC. The PLC program must then convert this count to the engineering value (bar, °C, m³/h). This conversion is called scaling or linearisation. Getting this formula exactly right — including the live-zero offset for 4–20 mA — is critical. A scaling error of 1% on a flow meter means 1% of all flow is miscounted. On a 1000 t/h process running 8000 hours per year, that is 80,000 tonnes miscounted.

// Universal analog scaling function (IEC 61131-3 Structured Text)
// Converts raw ADC count to engineering value
// Works for: 4-20mA, 0-20mA, 0-10V, ±10V — any linear signal

FUNCTION SCALE_ANALOG : REAL
VAR_INPUT
  raw        : INT;    // Raw ADC count from PLC input register
  raw_lo     : INT;    // Count at signal minimum (e.g. 4mA = 6554 on 0-32767 card)
  raw_hi     : INT;    // Count at signal maximum (e.g. 20mA = 32767)
  eng_lo     : REAL;   // Engineering value at signal minimum (e.g. 0 bar)
  eng_hi     : REAL;   // Engineering value at signal maximum (e.g. 100 bar)
END_VAR
VAR
  span_raw   : REAL;
  span_eng   : REAL;
END_VAR

span_raw := INT_TO_REAL(raw_hi - raw_lo);
span_eng := eng_hi - eng_lo;

IF span_raw = 0.0 THEN
  SCALE_ANALOG := eng_lo;  // Prevent division by zero
  RETURN;
END_IF;

SCALE_ANALOG := eng_lo + (INT_TO_REAL(raw - raw_lo) / span_raw) * span_eng;
END_FUNCTION

// Usage examples:
// 4-20mA on 16-bit card (0-32767):
//   4mA  → raw_lo = 6554  (32767 × 4/20)
//   20mA → raw_hi = 32767
//   0-100 bar:
//   pressure := SCALE_ANALOG(raw:=AI_raw, raw_lo:=6554,
//                raw_hi:=32767, eng_lo:=0.0, eng_hi:=100.0);
//
// 0-10V on 12-bit card (0-4095):
//   temp := SCALE_ANALOG(raw:=AI_raw, raw_lo:=0,
//                raw_hi:=4095, eng_lo:=-20.0, eng_hi:=80.0);
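The same scaling logic is easy to prototype and regression-test outside the PLC before porting it to Structured Text. A Python sketch mirroring SCALE_ANALOG (illustrative only):

```python
def scale_analog(raw, raw_lo, raw_hi, eng_lo, eng_hi):
    """Linear scaling of a raw ADC count to an engineering value."""
    span_raw = raw_hi - raw_lo
    if span_raw == 0:
        return eng_lo                      # guard against division by zero
    return eng_lo + (raw - raw_lo) / span_raw * (eng_hi - eng_lo)

# 4-20 mA pressure transmitter, 0-100 bar, 16-bit card:
print(scale_analog(6554, 6554, 32767, 0.0, 100.0))   # 0.0 bar at 4 mA
print(scale_analog(19660, 6554, 32767, 0.0, 100.0))  # ~50 bar at 12 mA
```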

Resolution, Accuracy, and the Difference Between Them

Resolution is the smallest change the ADC can detect: span / 2^n_bits. A 12-bit card reading 0–10V has resolution of 10/4096 = 2.44 mV — it cannot distinguish two voltages closer than 2.44 mV. Accuracy is how close the reading is to the true value — it includes ADC quantisation error, gain error, offset error, temperature drift, and noise. A card can have excellent resolution (16-bit = 0.15 mV on 0–10V) but poor accuracy (±0.5% of span = ±50 mV) — the high resolution is wasted. Always specify both resolution AND accuracy for your application. For critical process control (custody transfer, reactor control), specify accuracy as % of reading, not % of full scale.

// Resolution vs accuracy — worked example
// 16-bit analog input card, 4-20mA, 0-100 bar

VAR
  n_bits     : INT  := 16;          // ADC bit depth
  span_mA    : REAL := 16.0;        // 4-20mA live span (mA)
  span_eng   : REAL := 100.0;       // Engineering span (bar)
  resolution_counts : DINT;         // Total counts in span
  resolution_mA     : REAL;         // mA per count
  resolution_bar    : REAL;         // bar per count

  accuracy_pct      : REAL := 0.1;  // Datasheet: ±0.1% of full scale
  accuracy_mA       : REAL;         // ±mA
  accuracy_bar      : REAL;         // ±bar

  typical_noise_bits: INT  := 2;    // Effective bits lost to noise
  effective_bits    : INT;          // Noise-free resolution
  effective_res_bar : REAL;         // Effective resolution (bar)
END_VAR

resolution_counts := REAL_TO_DINT(EXPT(2.0, n_bits)) - 1;        // = 65535
resolution_mA     := span_mA  / DINT_TO_REAL(resolution_counts); // = 0.000244 mA/count
resolution_bar    := span_eng / DINT_TO_REAL(resolution_counts); // = 0.00153 bar/count

accuracy_mA  := accuracy_pct / 100.0 * 20.0;  // = 0.02 mA = ±0.02 mA
accuracy_bar := accuracy_pct / 100.0 * 100.0; // = 0.1 bar = ±0.1 bar

// Noise limits effective resolution
effective_bits    := n_bits - typical_noise_bits;  // = 14 effective bits
effective_res_bar := span_eng / EXPT(2.0, effective_bits); // = 0.006 bar

// Conclusion:
//   Resolution per count: 0.00153 bar (theoretical)
//   Noise-limited resolution: 0.006 bar (practical)
//   Accuracy: ±0.1 bar (dominates everything!)
//   → Despite 16-bit resolution, measurement uncertainty = ±0.1 bar
//   → A 12-bit card (±0.05% accuracy) would be EQUALLY useful here
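Two helper functions reproduce the worked numbers above and make the "accuracy dominates" conclusion concrete (a Python sketch; names are illustrative):

```python
def resolution_per_count(span_eng, n_bits):
    """Smallest engineering-unit change one ADC count can represent."""
    return span_eng / (2 ** n_bits - 1)

def accuracy_band(span_eng, accuracy_pct_fs):
    """+/- band implied by a percent-of-full-scale accuracy spec."""
    return span_eng * accuracy_pct_fs / 100.0

res_16 = resolution_per_count(100.0, 16)  # ~0.00153 bar/count
acc = accuracy_band(100.0, 0.1)           # +/-0.1 bar
print(acc / res_16)                       # accuracy band is ~65 counts wide
```

With the accuracy band some 65 counts wide, the last six bits of the converter carry no trustworthy information.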

📡 Sensors & Transducers

A transducer converts a physical quantity into an electrical signal. The sensor is the sensing element; the transmitter conditions the raw sensor output (often millivolts) into a standardised 4–20 mA or digital signal. Understanding the physics of each sensor type — and its failure modes — is essential for selecting the right technology and diagnosing faults without replacing working hardware.

📡 RTD & Thermocouple Calculator

Enter temperature to get PT100 resistance and thermocouple EMF. Or enter a measured resistance to calculate temperature. See the effect of 2-wire vs 4-wire connection and cable length.

🌊 Differential Pressure Flow — Square Root Extraction

See the critical difference between linear scaling and square-root extraction for DP flow meters. Toggle the calculation method to see how large the error becomes at mid-range.


Temperature: RTDs, Thermocouples & Thermistors

RTDs (Resistance Temperature Detectors) measure temperature through the predictable change in metal resistance. PT100 (100 Ω at 0°C, platinum) is the global standard: α = 3.85×10⁻³ /°C, range −200°C to +850°C, accuracy class A ±0.15°C at 0°C. Thermocouples generate a small EMF (40–60 µV/°C) from the Seebeck effect at the junction of two dissimilar metals. Type K (chromel-alumel) is the most common: 41 µV/°C, range −200°C to +1260°C — rugged and cheap but lower accuracy. Thermistors are semiconductor devices with large resistance change per degree — used for precise measurement over narrow ranges (e.g. −40°C to +125°C in HVAC).

Type            Range             Sensitivity        Accuracy                  Best for
PT100 RTD       −200 to +850 °C   0.385 Ω/°C         Class A: ±0.15 °C @ 0 °C  Precision, −100 to 500 °C
PT1000 RTD      −200 to +850 °C   3.85 Ω/°C          Same as PT100             Long cable runs (high R)
Type K T/C      −200 to +1260 °C  41 µV/°C           ±2.2 °C or ±0.75%         High temp, rugged, cheap
Type J T/C      −210 to +760 °C   52 µV/°C           ±2.2 °C or ±0.75%         Legacy systems, lower cost
Type T T/C      −270 to +370 °C   43 µV/°C           ±1.0 °C or ±0.75%         Cryogenics, food industry
NTC thermistor  −40 to +125 °C    ~3%/°C (@ 25 °C)   ±0.1 °C (narrow range)    HVAC, consumer, narrow range
// RTD resistance calculation and measurement
// PT100 Callendar-Van Dusen equation (IEC 60751)
//   Above 0°C:  R(T) = R0 × (1 + A×T + B×T²)
//   Below 0°C:  R(T) = R0 × (1 + A×T + B×T² + C×(T-100)×T³)
//   Constants: A = 3.9083×10⁻³, B = -5.775×10⁻⁷, C = -4.183×10⁻¹²
//   R0 = 100 Ω for PT100, 1000 Ω for PT1000

// Simplified linear approximation (valid ±0.5°C up to 200°C):
// R(T) ≈ 100 × (1 + 0.00385 × T)
// T ≈ (R - 100) / (100 × 0.00385) = (R - 100) / 0.385

// PLC scaling from RTD transmitter 4-20mA output:
// Transmitter configured: 0°C = 4mA, 200°C = 20mA
FUNCTION RTD_to_Celsius : REAL
VAR_INPUT
  raw_count : INT;       // 0–32767 (16-bit card, 4–20mA)
  T_lo      : REAL := 0.0;    // °C at 4 mA
  T_hi      : REAL := 200.0;  // °C at 20 mA
END_VAR
  RTD_to_Celsius := SCALE_ANALOG(raw:=raw_count,
    raw_lo:=6554, raw_hi:=32767, eng_lo:=T_lo, eng_hi:=T_hi);
END_FUNCTION

// Thermocouple cold junction compensation:
// TC output = V_hot_junction - V_cold_junction
// Without CJC: if ambient = 25°C, K-type TC reads 25°C LOW
// CJC measures ambient temperature at transmitter terminals
// Corrected reading = TC_mV + CJC_mV(ambient)
// Modern transmitters perform CJC internally
// Error if CJC fails: ≈ ambient temperature error in reading

// 3-wire vs 2-wire RTD:
// 2-wire: cable resistance adds to measurement
//   At 100m, 0.75mm²: Rcable = 24.4×2×0.1 = 4.9Ω → 12.7°C error!
// 3-wire: transmitter subtracts one cable resistance
//   Error reduced to ΔR_mismatch ≈ 0.02Ω → 0.05°C
// 4-wire: separate excitation and measurement → zero cable error
//   Use for laboratory and custody transfer accuracy
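The Callendar-Van Dusen polynomial and the 2-wire cable error quoted above can be checked numerically. A Python sketch (valid above 0 °C; coefficients per IEC 60751):

```python
A = 3.9083e-3   # IEC 60751 coefficients for platinum
B = -5.775e-7

def pt100_resistance(t_c, r0=100.0):
    """R(T) = R0 * (1 + A*T + B*T^2), valid for 0 <= T <= 850 degC."""
    return r0 * (1.0 + A * t_c + B * t_c ** 2)

def two_wire_error_c(cable_ohm_per_km, length_m):
    """Temperature error caused by cable resistance in a 2-wire hookup."""
    r_cable = cable_ohm_per_km * (length_m / 1000.0) * 2.0  # out and back
    return r_cable / 0.385                                  # ~0.385 ohm/degC

print(pt100_resistance(100.0))        # ~138.51 ohm
print(two_wire_error_c(24.4, 100.0))  # ~12.7 degC of error at 100 m
```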

Pressure Sensors: Gauge, Absolute, Differential

Pressure sensors convert mechanical force per unit area into an electrical signal. Gauge pressure sensors measure relative to atmospheric pressure — used for tanks, pipes, and hydraulics. Absolute sensors measure relative to a perfect vacuum — used for barometric reference and gas processes. Differential pressure sensors measure the difference between two process pressures — critical for flow measurement (orifice plate, Venturi) and filter monitoring. The sensing element is almost always a piezoresistive silicon bridge or capacitive membrane. Diaphragm area, material, and fill fluid determine range, overpressure protection, and chemical compatibility.

// Differential pressure flow measurement
// Orifice plate + DP transmitter: flow ∝ √(ΔP)
// This non-linearity MUST be compensated in the PLC

FUNCTION DP_to_Flow : REAL
VAR_INPUT
  dp_raw     : INT;    // Raw ADC count from DP transmitter
  dp_lo      : REAL := 0.0;    // mbar at 4 mA
  dp_hi      : REAL := 250.0;  // mbar at 20 mA (max DP)
  flow_max   : REAL := 1000.0; // m³/h at dp_hi
  cutoff_pct : REAL := 0.5;    // % DP below which flow = 0 (noise)
END_VAR
VAR
  dp_mbar    : REAL;
  dp_ratio   : REAL;  // dp / dp_max
END_VAR

// Scale DP transmitter output to engineering units
dp_mbar := SCALE_ANALOG(raw:=dp_raw, raw_lo:=6554,
              raw_hi:=32767, eng_lo:=dp_lo, eng_hi:=dp_hi);

// Square-root extraction (flow ∝ √ΔP)
dp_ratio := dp_mbar / dp_hi;

// Low-flow cut-off: below cutoff, flow reads 0 (eliminates noise)
IF dp_ratio < (cutoff_pct / 100.0) THEN
  DP_to_Flow := 0.0;
  RETURN;
END_IF;

DP_to_Flow := flow_max * SQRT(dp_ratio);
END_FUNCTION

// Without square-root extraction:
// At ΔP = 100 mbar (40% of 250 mbar range):
//   Linear scaling: flow = 40% × 1000 = 400 m³/h (WRONG)
//   Correct:        flow = √(100/250) × 1000 = 632 m³/h
// Error = 232 m³/h = 37% — catastrophic for process control!
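The square-root extraction is worth unit-testing, since the mid-range error of naive linear scaling is so large. A Python sketch of DP_to_Flow's core logic (parameter values follow the example above):

```python
import math

def dp_to_flow(dp_mbar, dp_max=250.0, flow_max=1000.0, cutoff_pct=0.5):
    """Orifice-plate flow: flow ~ sqrt(dP), with a low-flow cut-off."""
    ratio = dp_mbar / dp_max
    if ratio < cutoff_pct / 100.0:
        return 0.0                        # suppress noise near zero flow
    return flow_max * math.sqrt(ratio)

print(dp_to_flow(100.0))        # ~632 m3/h (correct)
print(100.0 / 250.0 * 1000.0)   # 400 m3/h (naive linear scaling, wrong)
```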

Position, Force & Other Analog Sensors

Linear position: LVDTs (Linear Variable Differential Transformers) provide highly accurate, robust position measurement with no contact wear — used in valve positioners and press monitoring. Potentiometers (resistive) are cheap but wear over millions of cycles. Magnetostrictive sensors provide 1 µm resolution over metres — used in hydraulic cylinders. Force/load cells use strain gauges in a Wheatstone bridge, wired with two excitation wires and two differential signal wires; output is rated in mV/V. A 100 kg load cell with 2 mV/V sensitivity and a 10 V supply produces 20 mV at full scale — requiring a precision amplifier. pH, conductivity, dissolved oxygen, and level transducers each have their own physics and calibration requirements.

// Load cell signal conditioning
// Full bridge strain gauge: two excitation wires, two signal wires
// Output: mV proportional to load

VAR
  excitation_V   : REAL := 10.0;   // Supply to bridge (V)
  sensitivity    : REAL := 2.0;    // mV/V rated output
  capacity_kg    : REAL := 1000.0; // Full scale load (kg)
  raw_mV         : REAL;           // Measured bridge output (mV)
  load_kg        : REAL;           // Calculated load

  // ADC for load cell (typically 24-bit sigma-delta)
  adc_raw        : DINT;           // 24-bit count (0 to 16777215)
  adc_fullscale  : DINT := 16777215;
  adc_ref_V      : REAL := 0.02;   // ADC reference = 20mV = rated output
END_VAR

// Full scale signal:
// V_signal_max = excitation × sensitivity = 10 × 0.002 = 20 mV
// ADC must be configured for ±20 mV range (or use dedicated load cell ADC)

// Convert ADC count to load:
raw_mV := DINT_TO_REAL(adc_raw) / DINT_TO_REAL(adc_fullscale)
          * (adc_ref_V * 1000.0);  // Convert to mV
load_kg := raw_mV / (excitation_V * sensitivity) * capacity_kg;
// = raw_mV / 20 × 1000 kg

// Important considerations:
//   Cable resistance affects bridge balance (use 6-wire for long runs)
//   Temperature coefficient of gauge factor: ≈ 0.01%/°C
//   Creep: load cell output drifts over time at constant load
//   Hysteresis: loading vs. unloading curve mismatch ≈ 0.02% FS
//   Shock protection: overload mechanical stop at 150% FS

// pH sensor:
//   Output: ~59.16 mV/pH unit at 25°C (Nernst equation)
//   At pH 7 (neutral): 0 mV reference
//   At pH 0: +414 mV, at pH 14: −414 mV
//   Temperature compensation: MANDATORY (sensitivity varies with T)
//   Calibration: 2-point minimum (pH 4 and pH 7 buffers)
//   Electrode lifetime: 6–12 months typical
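The count-to-kilograms chain above condenses to one function. A Python sketch with the same assumed ratings (24-bit ADC, 20 mV reference, 2 mV/V, 1000 kg):

```python
def loadcell_kg(adc_raw, adc_fullscale=16_777_215, adc_ref_v=0.02,
                excitation_v=10.0, sensitivity_mv_per_v=2.0,
                capacity_kg=1000.0):
    """Convert a unipolar 24-bit ADC count to load in kg."""
    raw_mv = adc_raw / adc_fullscale * adc_ref_v * 1000.0  # count -> mV
    full_scale_mv = excitation_v * sensitivity_mv_per_v    # 20 mV at rated load
    return raw_mv / full_scale_mv * capacity_kg

print(loadcell_kg(16_777_215))  # 1000 kg at ADC full scale
print(loadcell_kg(8_388_608))   # ~500 kg at mid-scale
```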

🔧 Signal Conditioning

Raw sensor signals are often too small, too noisy, non-standard, or electrically incompatible to connect directly to a PLC input. Signal conditioning prepares the signal for digitisation: amplification, filtering, isolation, linearisation, and impedance matching. Understanding what happens between the sensor terminals and the ADC explains most measurement errors and noise problems.

🔧 Analog Filter — Time & Frequency Domain

Adjust filter cutoff and order. See the time-domain step response and frequency response simultaneously. Watch how noise is reduced versus how fast the filter responds to real changes.


Amplification, Offset & Instrumentation Amplifiers

An instrumentation amplifier (INA) is the standard front-end for small differential signals. Unlike a simple op-amp, it has very high input impedance on both inputs, configurable gain with a single resistor, and high CMRR (80–120 dB). It is used for thermocouples (20–60 mV full scale), strain gauges (10–30 mV FS), and any high-impedance sensor. The gain resistor sets amplification: typically G = 1 + 50kΩ/Rg (for INA128). At G = 100 (Rg = 505Ω), a 20 mV thermocouple signal becomes 2.0 V — suitable for a standard ADC input. Bias current, offset voltage, and noise spectral density (nV/√Hz) are the key specifications.

// Instrumentation amplifier noise budget
// Application: K-type thermocouple, range 0-500°C
// Signal: 0 mV (0°C) to 20.5 mV (500°C) — K-type sensitivity 41µV/°C
// Required temperature resolution: 0.5°C → 20.5µV signal change

VAR
  // INA128 specifications:
  gain           : REAL := 100.0;  // G = 1 + 50kΩ/505Ω
  en_density     : REAL := 8.0;    // Input noise: 8 nV/√Hz
  bandwidth_Hz   : REAL := 10.0;   // Useful bandwidth after filtering

  // Signal:
  signal_mV      : REAL := 20.5;   // Full scale (500°C)
  resolution_uV  : REAL := 20.5;   // 0.5°C = 20.5 µV

  // Noise calculation:
  noise_rms_nV   : REAL;   // Input-referred RMS noise
  noise_rms_uV   : REAL;
  SNR            : REAL;
  bits_effective : REAL;
END_VAR

// RMS noise = noise_density × √bandwidth
noise_rms_nV := en_density * SQRT(bandwidth_Hz);  // = 8 × √10 = 25.3 nV
noise_rms_uV := noise_rms_nV / 1000.0;            // = 0.0253 µV

// Signal-to-noise ratio
SNR := 20.0 * LOG(signal_mV * 1000.0 / noise_rms_uV) / LOG(10.0);
// = 20 × log(20500/0.0253) = 20 × log(810,000) = 118 dB

// Effective bits ≈ SNR / 6.02 (the small 1.76 dB offset is ignored here)
bits_effective := SNR / 6.02;  // = 118/6.02 = 19.6 effective bits

// Conclusion: INA128 noise is far below the 20.5 µV resolution target
// The ADC (12-bit = 72 dB SNR) is the limiting factor, not the amp
// Use ≥ 16-bit ADC to exploit the INA's low noise performance
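The noise budget above is reproducible in a few lines, which makes it easy to re-run for a different amplifier or bandwidth (a Python sketch using the same INA128 figures):

```python
import math

en_density_nv = 8.0      # input-referred noise density, nV/sqrt(Hz)
bandwidth_hz = 10.0      # useful bandwidth after filtering
signal_uv = 20_500.0     # 20.5 mV full scale (500 degC, K-type)

noise_rms_uv = en_density_nv * math.sqrt(bandwidth_hz) / 1000.0
snr_db = 20.0 * math.log10(signal_uv / noise_rms_uv)
print(round(noise_rms_uv, 4))  # 0.0253 uV RMS
print(round(snr_db, 1))        # 118.2 dB
```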

Filtering: RC, Active, Digital — Choosing Correctly

Filters remove unwanted frequency content from the signal. In analog I/O, you need to remove: 50/60 Hz mains hum (from power cables), high-frequency switching noise from VFDs (2–20 kHz), and broadband thermal/shot noise. Hardware RC filters before the ADC are mandatory to prevent aliasing — the Nyquist theorem requires the anti-aliasing filter cutoff to be below half the ADC sample rate. Digital filters (moving average, IIR, Kalman) in the PLC add additional noise rejection with no hardware cost but add latency — critical for fast control loops.

// Anti-aliasing filter design
// ADC sample rate: 1000 Hz (1ms cycle PLC analog scan)
// Nyquist frequency: 500 Hz
// Anti-aliasing filter must attenuate signals above 500 Hz
// Choose cutoff fc = 100 Hz (5× safety margin)

// RC filter: R = 1/(2π×fc×C)
// Choose C = 100nF: R = 1/(2π×100×100e-9) = 15.9 kΩ → use 16kΩ
// Actual fc = 1/(2π×16000×100e-9) = 99.5 Hz ≈ 100 Hz

// Attenuation at 500 Hz (Nyquist): 20×log(fc/f) = 20×log(100/500) = -14 dB
// Not enough! For 1% alias rejection need > -40 dB at 500 Hz
// → Need 2nd order filter or lower fc

// 2nd order Sallen-Key: -40 dB/decade → -40 dB at 10×fc = 1 kHz
// Set fc = 50 Hz: -40 dB at 500 Hz ✓ (aliasing suppressed)

// Digital moving average filter in PLC:
FUNCTION MA_Filter : REAL
VAR_INPUT
  new_sample  : REAL;
  n_samples   : INT := 16;  // Averaging window
END_VAR
VAR_IN_OUT
  buffer      : ARRAY[0..31] OF REAL;
  index       : INT;
  sum         : REAL;
END_VAR
  sum := sum - buffer[index] + new_sample;
  buffer[index] := new_sample;
  index := (index + 1) MOD n_samples;
  MA_Filter := sum / INT_TO_REAL(n_samples);
END_FUNCTION

// Exponential filter (IIR first-order) — better for control loops:
// y[n] = α × x[n] + (1-α) × y[n-1]
// α = Tscan / (τ_filter + Tscan)
// For τ = 500ms, Tscan = 10ms: α = 10/(500+10) = 0.0196
// Time constant maintained regardless of scan time changes
FUNCTION EXP_Filter : REAL
VAR_INPUT
  new_val  : REAL;
  tau_ms   : REAL := 500.0;  // Filter time constant (ms)
  scan_ms  : REAL := 10.0;   // PLC scan time (ms)
END_VAR
VAR_IN_OUT
  prev_out : REAL;
END_VAR
VAR
  alpha    : REAL;
END_VAR
  alpha    := scan_ms / (tau_ms + scan_ms);
  prev_out := alpha * new_val + (1.0 - alpha) * prev_out;
  EXP_Filter := prev_out;
END_FUNCTION
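The exponential filter's behaviour is easy to verify against its analog counterpart: after two time constants, a first-order lag should have covered about 86 % of a step. A Python sketch of EXP_Filter:

```python
def exp_filter(prev, new, tau_ms=500.0, scan_ms=10.0):
    """First-order IIR: y[n] = a*x[n] + (1-a)*y[n-1], a = Ts/(tau+Ts)."""
    alpha = scan_ms / (tau_ms + scan_ms)
    return alpha * new + (1.0 - alpha) * prev

y = 0.0
for _ in range(100):          # 100 scans of 10 ms = 1 s = 2 time constants
    y = exp_filter(y, 100.0)
print(round(y, 1))            # 86.2 -- close to (1 - e^-2) = 86.5 % of the step
```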

Isolation, Loop Power & HART Protocol

Galvanic isolation breaks the electrical connection between field and control system while allowing signal to pass — through transformers, optocouplers, or capacitors. Essential when field devices have different ground potentials (prevents ground loops), when explosive atmospheres require intrinsic safety barriers, and when high-voltage transients may appear on field wiring. The HART (Highway Addressable Remote Transducer) protocol superimposes a ±0.5 mA FSK digital signal (1200/2200 Hz) on the 4–20 mA loop. The average current is unchanged (no effect on the 4–20 mA measurement), but a HART modem can read device identification, calibration data, diagnostics, and secondary variables from smart transmitters.

// HART protocol — what data is accessible
// HART commands (subset of most useful):
//
// CMD 0:  Read unique identifier (manufacturer, device type, serial)
// CMD 1:  Read primary variable (PV) and units
// CMD 2:  Read PV, % range, loop current, span
// CMD 3:  Read dynamic variables (PV, SV, TV, QV)
// CMD 12: Read message (user-defined string)
// CMD 13: Read tag, descriptor, date
// CMD 14: Read transducer information (serial, limits, unit code)
// CMD 15: Read device info (range values, signal code)
// CMD 17: Write message
// CMD 35: Write primary variable range values (recalibrate)
// CMD 44: Write primary variable units
// CMD 45: Trim primary variable zero
// CMD 46: Trim primary variable gain
//
// HART modem in PLC (e.g. Siemens AI module with integrated HART):
// Reads CMD 3 data automatically every 500ms
// Results available in extended I/O data area:

VAR
  // From HART extended data block:
  pv_value    : REAL;    // Primary variable (same as 4-20mA value)
  sv_value    : REAL;    // Secondary variable (e.g. temperature from DP cell)
  loop_mA     : REAL;    // Actual loop current (diagnostic)
  device_status: WORD;  // Device health flags
  // Bit 0: Primary variable out of limits
  // Bit 1: Non-primary variable out of limits
  // Bit 2: Loop current saturated
  // Bit 4: More status available
  // Bit 5: Cold start (device just powered)
  // Bit 7: Device malfunction
END_VAR

// HART wire-break detection:
// If transmitter fails: loop_mA → 0 or > 21.0 mA (NAMUR NE43)
// NAMUR NE43 alarm levels:
//   < 3.6 mA: sensor/cable failure → substitute value DOWN
//   > 21.0 mA: sensor/cable failure → substitute value UP
// These saturate the 4-20mA output to signal the failure
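The NE43 bands lend themselves to a simple classifier in the PLC or in SCADA. A Python sketch using the failure levels quoted above, plus the commonly quoted 3.8–20.5 mA measuring range (treat the exact band edges as an assumption to verify against your transmitter's manual):

```python
def namur_ne43_status(loop_ma):
    """Classify a 4-20 mA loop reading into NAMUR NE43 bands."""
    if loop_ma < 3.6:
        return "FAILURE_LOW"      # sensor/cable failure, downscale
    if loop_ma > 21.0:
        return "FAILURE_HIGH"     # sensor/cable failure, upscale
    if loop_ma < 3.8 or loop_ma > 20.5:
        return "OUT_OF_RANGE"     # outside the measuring information range
    return "OK"

print(namur_ne43_status(12.0))    # OK
print(namur_ne43_status(2.0))     # FAILURE_LOW
```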

💻 PLC Programming for Analog I/O

Reading an analog input and writing an analog output require more careful programming than digital I/O. Every analog channel needs scaling, filtering, clamping, and alarm monitoring. Analog outputs need rate limiting, safe-state handling, and bumpless transfer between manual and automatic modes.

💻 Analog Input Channel Dashboard — live simulation

Simulate a live analog process with noise, drift, and fault injection. Watch scaling, filtering, alarms, and wire-break detection react in real time exactly as the AI_Process function block does.

🔄 PID Analog Control Loop — pressure vessel simulation

A simulated pressure vessel with an inlet control valve (AO) and a pressure transmitter (AI). Tune the PID, change the setpoint, inject load disturbances, and watch the closed-loop response.


Complete Analog Input Processing Block

A production-quality analog input processing block does more than scale counts to engineering units. It applies a filter, detects wire-break (for 4–20 mA), applies high/low alarms, clamps the output to safe limits, and provides a substitute value when the sensor fails. This block is called once per input channel, every scan. Implementing it as a function block allows identical, auditable processing for every analog channel.

// Analog Input Processing Function Block
// Complete production-quality implementation
FUNCTION_BLOCK AI_Process
VAR_INPUT
  raw_count   : INT;         // PLC hardware input register
  raw_lo      : INT  := 6554;   // Count at 4 mA
  raw_hi      : INT  := 32767;  // Count at 20 mA
  eng_lo      : REAL := 0.0;
  eng_hi      : REAL := 100.0;
  filter_tau  : REAL := 1000.0; // ms filter time constant
  scan_ms     : REAL := 10.0;   // PLC scan time ms
  alarm_hi    : REAL := 95.0;   // High alarm threshold
  alarm_lo    : REAL := 5.0;    // Low alarm threshold
  sub_value   : REAL := 50.0;   // Substitute value on fault
  enable      : BOOL := TRUE;
END_VAR
VAR_OUTPUT
  pv          : REAL;        // Process value (engineering units)
  pv_raw      : REAL;        // Unfiltered value
  alarm_H     : BOOL;        // High alarm active
  alarm_L     : BOOL;        // Low alarm active
  wire_break  : BOOL;        // Wire break / under-range detected
  status_ok   : BOOL;        // Channel healthy
END_VAR
VAR
  pv_filt     : REAL;        // Internal filter state
  alpha       : REAL;
  span_raw    : REAL;
  span_eng    : REAL;
  WIRE_BREAK_THRESHOLD : INT := 5000;  // Below this = wire break
END_VAR

IF NOT enable THEN
  pv := sub_value; status_ok := FALSE; RETURN;
END_IF;

// Wire break detection (4-20mA only)
wire_break := raw_count < WIRE_BREAK_THRESHOLD;
IF wire_break THEN
  pv := sub_value; status_ok := FALSE;
  alarm_H := FALSE; alarm_L := FALSE;
  RETURN;
END_IF;

// Scaling
span_raw := INT_TO_REAL(raw_hi - raw_lo);
span_eng := eng_hi - eng_lo;
pv_raw   := eng_lo + (INT_TO_REAL(raw_count - raw_lo) / span_raw) * span_eng;

// Clamp to slightly beyond range (allow ±5% over/under-range)
pv_raw := LIMIT(eng_lo - 0.05*span_eng, pv_raw, eng_hi + 0.05*span_eng);

// Exponential filter
alpha   := scan_ms / (filter_tau + scan_ms);
pv_filt := alpha * pv_raw + (1.0 - alpha) * pv_filt;
pv      := pv_filt;

// Alarms (with 1% hysteresis)
alarm_H := pv >= alarm_hi OR (alarm_H AND pv > alarm_hi * 0.99);
alarm_L := pv <= alarm_lo OR (alarm_L AND pv < alarm_lo * 1.01);

status_ok := TRUE;
END_FUNCTION_BLOCK

Analog Output: Rate Limiting, Safe State & Bumpless Transfer

Analog outputs need special care. A sudden step change in output (e.g. valve jumps from 0% to 100%) causes hydraulic hammer, mechanical shock, or process upsets. Rate limiting prevents this. On PLC fault or CPU stop, the output must go to a defined safe state (typically 4 mA = 0% or the last good value held). Bumpless transfer ensures smooth changeover between manual and automatic mode — the integrator output is pre-loaded with the current manual setpoint before switching to auto, preventing a step change.

// Analog Output Processing Function Block
FUNCTION_BLOCK AO_Process
VAR_INPUT
  setpoint_eng : REAL;       // Commanded value (engineering units)
  eng_lo       : REAL := 0.0;
  eng_hi       : REAL := 100.0;
  raw_lo       : INT  := 6554;   // DAC count at 4 mA
  raw_hi       : INT  := 32767;  // DAC count at 20 mA
  rate_limit   : REAL := 10.0;   // Max change per second (eng units/s)
  scan_ms      : REAL := 10.0;
  safe_value   : REAL := 0.0;    // Output on fault
  fault_active : BOOL := FALSE;
END_VAR
VAR_OUTPUT
  raw_out      : INT;         // Write to PLC hardware output register
  actual_eng   : REAL;        // Actual output in engineering units
END_VAR
VAR
  limited_sp   : REAL;        // Rate-limited setpoint
  max_step     : REAL;        // Max step this scan
  span_raw     : REAL;
  span_eng     : REAL;
END_VAR

// On fault: safe state
IF fault_active THEN
  actual_eng := safe_value;
  limited_sp := safe_value;  // Reset rate limiter state
  // Convert to raw and write
  span_raw  := INT_TO_REAL(raw_hi - raw_lo);
  span_eng  := eng_hi - eng_lo;
  raw_out   := raw_lo + REAL_TO_INT((actual_eng - eng_lo) / span_eng * span_raw);
  RETURN;
END_IF;

// Rate limiting
max_step   := rate_limit * scan_ms / 1000.0;  // Per scan
limited_sp := LIMIT(limited_sp - max_step,
                     setpoint_eng,
                     limited_sp + max_step);
actual_eng := limited_sp;

// Clamp to output range
actual_eng := LIMIT(eng_lo, actual_eng, eng_hi);

// Scale to DAC counts
span_raw := INT_TO_REAL(raw_hi - raw_lo);
span_eng := eng_hi - eng_lo;
raw_out  := raw_lo + REAL_TO_INT(
              (actual_eng - eng_lo) / span_eng * span_raw);
END_FUNCTION_BLOCK
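The rate limiter is the heart of AO_Process and is worth testing in isolation. A Python sketch of the same LIMIT-based clamp:

```python
def rate_limit(prev, target, rate_per_s, scan_ms):
    """Move prev toward target by at most rate_per_s per second."""
    max_step = rate_per_s * scan_ms / 1000.0
    return min(max(target, prev - max_step), prev + max_step)

out = 0.0
for _ in range(10):                       # 10 scans of 100 ms = 1 s
    out = rate_limit(out, 100.0, 20.0, 100.0)
print(out)                                # 20.0 -- the 20 %/s limit is honoured
```

Note that min(max(target, lo), hi) is exactly the IEC LIMIT(lo, in, hi) idiom used in the function block.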

PID Control Loop — Analog in, Analog out

The most common use of analog I/O is closing a control loop: measure a process variable (temperature, pressure, level), compare to setpoint, compute a correction, and drive a final control element (valve, heater, pump speed). Vendor libraries built on the IEC 61131-3 languages provide ready-made PID blocks, such as Siemens' CONT_C (continuous controller). Understanding PID tuning — Kp, Ti, Td, and their interaction with process dynamics — is the core skill of process control engineering.

// PID control loop — pressure control example
// PV: pressure transmitter 0-10 bar (4-20mA)
// MV: control valve 0-100% (4-20mA positioner)
VAR
  pressure_AI  : AI_Process;  // Analog input function block
  valve_AO     : AO_Process;  // Analog output function block
  pid          : PID;

  // PID parameters (tune for process):
  Kp           : REAL := 1.5;    // Proportional gain
  Ti           : REAL := 30.0;   // Integral time (s) — reset time
  Td           : REAL := 3.0;    // Derivative time (s) — rate time
  Tscan        : REAL := 0.1;    // PLC cycle time (s) = 100ms

  // Setpoint and mode
  pressure_sp  : REAL := 5.0;    // Bar setpoint
  auto_mode    : BOOL := TRUE;
  manual_output: REAL := 50.0;   // % valve open in manual

  // Outputs
  valve_pct_norm : REAL;         // 0.0-1.0 normalised PID output (LMN)
  valve_pct    : REAL;           // 0-100% valve demand
END_VAR

// 1. Read and process analog input
pressure_AI(raw_count:=AI_pressure_raw, eng_lo:=0.0, eng_hi:=10.0,
            alarm_hi:=9.5, alarm_lo:=0.2, filter_tau:=200.0);

// 2. PID controller
pid(
  ACTUAL     := pressure_AI.pv,    // Process variable
  SET_POINT  := pressure_sp,       // Setpoint
  KP         := Kp,
  TI         := Ti,
  TD         := Td,
  CYCLE      := Tscan,
  MANUAL     := NOT auto_mode,
  MANUAL_IN  := manual_output / 100.0,  // Bumpless transfer
  LMN        => valve_pct_norm          // 0.0-1.0 output
);
valve_pct := valve_pct_norm * 100.0;

// 3. Write analog output
valve_AO(setpoint_eng:=valve_pct, rate_limit:=20.0,  // 20%/s max
          fault_active:=NOT pressure_AI.status_ok);
AO_valve_raw := valve_AO.raw_out;

// Anti-windup: integral is automatically limited in CONT_C
// Bumpless transfer: MANUAL_IN pre-loads integrator
// Derivative filter: CONT_C applies built-in derivative filter
// Output clamping: 0-100% with back-calculation anti-windup

🎯 Calibration, Accuracy & Traceability

Calibration establishes the relationship between a measuring instrument's indication and the true value, using reference standards traceable to national metrology institutes. In process industries, calibration is a regulatory requirement. Understanding calibration procedures, error budgets, and how to maintain traceability is essential for any analog measurement system used in safety, quality, or custody transfer.

🎯 Interactive Two-Point Calibration

Walk through a real two-point calibration. The transmitter has injected zero and span errors. Apply reference signals and correct them. See error before and after calibration.

Zero error
Span error
Before accuracy
After accuracy

Error Budget Reference

Error source              | Typical value    | Correctable?     | Combination method
Transmitter ref. accuracy | ±0.05–0.2% FS    | By calibration   | RSS (random)
Temperature drift         | ±0.02–0.1%/10°C  | Partial          | Systematic (add)
Long-term drift           | ±0.05–0.2%/year  | By recalibration | Systematic
ADC quantisation          | ±0.5 LSB         | No (inherent)    | RSS
ADC gain error            | ±0.05–0.1% FS    | By calibration   | Systematic
Cable noise (EMI)         | ±0.01–0.5% FS    | By shielding     | RSS
Ground loop               | ±0.5–5% FS       | By isolation     | Systematic

Two-Point Calibration: Zero, Span & Trim

Two-point calibration corrects both offset error and gain error. Apply a known zero-point input (e.g. 4 mA from a calibrator), read the PLC value, and adjust the zero parameter until the PLC reads the correct engineering value. Then apply a known span-point input (e.g. 20 mA) and adjust the span parameter. This corrects the linear transfer function completely, leaving only non-linearity uncorrected. For HART transmitters, universal commands 45 and 46 trim the loop-current zero and gain respectively (sensor trim uses device-specific commands) — the correction is stored in the transmitter itself, not in the PLC.
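The arithmetic behind the procedure is simple enough to verify offline. A minimal Python sketch (hypothetical helper name; the counts assume the 0–32767 / 0–20 mA card used throughout this page):

```python
def two_point_scale(raw, raw_zero, raw_span, eng_lo, eng_hi):
    """Linear scaling through the two CAPTURED calibration points,
    correcting both offset (zero) and gain (span) error."""
    return eng_lo + (raw - raw_zero) / (raw_span - raw_zero) * (eng_hi - eng_lo)

# Transmitter with injected errors: reads 6700 counts at 0 bar (ideal 6553)
# and 32500 counts at 100 bar (ideal 32767).
raw_zero, raw_span = 6700, 32500
print(two_point_scale(6700, raw_zero, raw_span, 0.0, 100.0))   # 0.0   — zero corrected
print(two_point_scale(32500, raw_zero, raw_span, 0.0, 100.0))  # 100.0 — span corrected
print(two_point_scale(19600, raw_zero, raw_span, 0.0, 100.0))  # 50.0  — midpoint lands true
```

Because the captured points replace the ideal ones, any linear combination of zero and span error vanishes; a midpoint check afterwards confirms no gross non-linearity.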

Calibration, Accuracy & Traceability
// Two-point calibration procedure — structured text implementation
// For a pressure transmitter 0-100 bar on a 4-20mA loop

VAR
  calib_mode      : BOOL := FALSE;  // Operator enables calibration
  calib_step      : INT  := 0;      // 0=idle, 1=zero, 2=span, 3=done
  ref_value_zero  : REAL := 0.0;    // Known reference at zero (bar)
  ref_value_span  : REAL := 100.0;  // Known reference at span (bar)
  raw_at_zero     : INT;            // Captured raw count at zero
  raw_at_span     : INT;            // Captured raw count at span
  calibrated_lo   : INT;            // Calibrated raw_lo
  calibrated_hi   : INT;            // Calibrated raw_hi
  calib_confirm   : BOOL;
  calib_done      : BOOL;
  // Calculated correction:
  ideal_lo        : INT := 6554;    // Expected 4mA count
  ideal_hi        : INT := 32767;   // Expected 20mA count
  zero_error_mA   : REAL;           // Offset error in mA
  span_error_pct  : REAL;           // Gain error in %
END_VAR

CASE calib_step OF
  0: // IDLE
    IF calib_mode THEN calib_step := 1; calib_done := FALSE; END_IF;

  1: // ZERO POINT — operator applies 4 mA (or 0 bar reference)
    // Display instruction on HMI: "Apply zero reference signal"
    IF calib_confirm THEN
      raw_at_zero := AI_raw_count;  // Capture raw count
      calibrated_lo := raw_at_zero;
      zero_error_mA := INT_TO_REAL(raw_at_zero - ideal_lo)
                       / INT_TO_REAL(ideal_hi - ideal_lo) * 16.0;
      calib_confirm := FALSE;
      calib_step := 2;
    END_IF;

  2: // SPAN POINT — operator applies 20 mA (or 100 bar reference)
    IF calib_confirm THEN
      raw_at_span := AI_raw_count;
      calibrated_hi := raw_at_span;
      span_error_pct := (INT_TO_REAL(raw_at_span - raw_at_zero)
                        / INT_TO_REAL(ideal_hi - ideal_lo) - 1.0) * 100.0;
      calib_confirm := FALSE;
      calib_step := 3;
    END_IF;

  3: // APPLY calibration and save to retain memory
    AI_block.raw_lo := calibrated_lo;
    AI_block.raw_hi := calibrated_hi;
    calib_done := TRUE;
    calib_step := 0;
END_CASE;
// Log calibration: date, technician, before/after values, certificate number

Error Budget Analysis

An error budget quantifies every source of measurement uncertainty and combines them to predict the total system accuracy. Sources include: transmitter accuracy (% of span), transmitter temperature drift (% of span/°C), ADC gain and offset error, cable noise (common mode rejection), PLC filter lag (dynamic error during transient), and digitisation error (quantisation noise). Errors combine as root-sum-of-squares (RSS) if independent and random, or linearly if systematic and correlated. A formal error budget is required for any measurement used in custody transfer, safety shutdown systems, or regulatory compliance.
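The combination rule is easy to sanity-check outside the PLC. An illustrative Python sketch using the example figures from this section (RSS for independent random errors, direct sum for systematic ones):

```python
import math

def combine_errors(random_pct, systematic_pct):
    """Total uncertainty: RSS of independent random errors
    plus the direct (worst-case) sum of systematic errors."""
    e_random = math.sqrt(sum(e * e for e in random_pct))
    e_systematic = sum(systematic_pct)
    return e_random + e_systematic

# Figures from the pressure-measurement example (% of full scale):
total = combine_errors(
    random_pct=[0.075, 0.02, 0.024, 0.05],   # ref accuracy, noise, quantisation, ADC temp
    systematic_pct=[0.1, 0.1, 0.1, 0.05])    # temp drift, long-term, ADC gain, ADC offset
# total ≈ 0.445% FS → about ±0.45 bar on a 100 bar span
```

Note how the RSS term (≈0.095%) is dwarfed by the systematic sum (0.35%) — which is exactly why calibration, which removes systematic error, improves the budget far more than a better ADC would.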

Calibration, Accuracy & Traceability
// Error budget calculation — pressure measurement system
// Transmitter: Endress+Hauser PMC51, 0-100 bar, 4-20mA
// PLC card: Siemens SM331 AI 8×12bit

// Error source breakdown (all as % of full scale):
VAR
  // Transmitter errors:
  e_transmitter_ref   : REAL := 0.075;  // ±0.075% FS at reference conditions
  e_transmitter_temp  : REAL := 0.1;    // ±0.05%/10°C × 20°C range = 0.1%
  e_transmitter_long  : REAL := 0.1;    // Long-term drift per year
  e_transmitter_vib   : REAL := 0.02;   // Vibration effect

  // Transmission errors:
  e_cable_noise       : REAL := 0.02;   // EMI / noise on cable
  e_loop_resistance   : REAL := 0.01;   // Resistive drop effect

  // PLC card errors:
  e_adc_gain          : REAL := 0.1;    // ADC gain error
  e_adc_offset        : REAL := 0.05;   // ADC offset error
  e_adc_quantise      : REAL := 0.024;  // ±1 LSB / 2^12 = 0.024% (conservative; ±0.5 LSB = 0.012%)
  e_adc_temp          : REAL := 0.05;   // Temperature drift of ADC

  // Combined uncertainty:
  e_random            : REAL;   // RSS of random/independent errors
  e_systematic        : REAL;   // Sum of systematic errors
  e_total             : REAL;   // Total measurement uncertainty
END_VAR

// Random errors (RSS combination):
e_random := SQRT(e_transmitter_ref*e_transmitter_ref
               + e_cable_noise*e_cable_noise
               + e_adc_quantise*e_adc_quantise
               + e_adc_temp*e_adc_temp);
// = √(0.075² + 0.02² + 0.024² + 0.05²) = √(0.00910) ≈ 0.095%

// Systematic errors (direct sum — worst case):
e_systematic := e_transmitter_temp + e_transmitter_long
              + e_adc_gain + e_adc_offset;
// = 0.1 + 0.1 + 0.1 + 0.05 = 0.35%

// Total (random RSS + systematic):
e_total := e_random + e_systematic;  // = 0.095 + 0.35 ≈ 0.45% FS
// On 100 bar span: ±0.45 bar total uncertainty
// After calibration: systematic errors reduced → e_total ≈ ±0.12% FS

Metrology Traceability & Calibration Intervals

Traceability means calibration certificates form an unbroken chain back to national standards (NIST, NPL, PTB). Every reference instrument used to calibrate field instruments must itself be calibrated against a higher-grade standard, with documented certificates. Calibration intervals are determined by drift rates, process criticality, and regulatory requirements. For SIL-rated instrumentation, the proof test interval (PTI) must be short enough that the probability of undetected dangerous failure remains below the SIL requirement.
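Interval tracking itself is trivial logic, usually run in SCADA rather than the PLC. A minimal Python sketch, assuming a single fixed interval and a hypothetical 30-day "DUE" warning window (both are illustrative choices, not a standard):

```python
from datetime import date, timedelta

def calib_status(last_calib: date, interval_days: int, today: date,
                 due_soon_days: int = 30) -> str:
    """Classify an instrument as CURRENT / DUE / OVERDUE
    relative to its calibration interval."""
    next_due = last_calib + timedelta(days=interval_days)
    if today > next_due:
        return "OVERDUE"
    if today > next_due - timedelta(days=due_soon_days):
        return "DUE"
    return "CURRENT"

# An instrument calibrated 2024-01-01 on a 365-day interval:
print(calib_status(date(2024, 1, 1), 365, date(2024, 6, 1)))    # CURRENT
print(calib_status(date(2024, 1, 1), 365, date(2024, 12, 15)))  # DUE
print(calib_status(date(2024, 1, 1), 365, date(2025, 1, 15)))   # OVERDUE
```

In a real system the status would feed the alarm and, for SIL-rated loops, potentially a fallback-value strategy as sketched in the comments below.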

Calibration, Accuracy & Traceability
// Calibration management data structure
// Store in PLC retain memory or SCADA database

TYPE InstrumentCalibRecord :
  STRUCT
    tag_id          : STRING[16];   // e.g. 'PT-1234'
    description     : STRING[32];   // 'Reactor inlet pressure'
    location        : STRING[24];
    manufacturer    : STRING[16];   // 'Endress+Hauser'
    model           : STRING[16];   // 'PMC51'
    serial          : STRING[16];
    range_lo        : REAL;         // 0 bar
    range_hi        : REAL;         // 100 bar
    range_units     : STRING[8];    // 'bar'
    last_calib_date : STRING[12];   // 'YYYY-MM-DD'
    next_calib_date : STRING[12];
    calib_interval_days : INT := 365;
    last_zero_error : REAL;         // % FS at last calibration
    last_span_error : REAL;
    cal_cert_number : STRING[20];
    calibrator_id   : STRING[16];   // Reference instrument used
    calibrator_cert : STRING[20];   // Calibrator certificate number
    sil_level       : INT := 0;     // 0=no SIL, 1=SIL1, 2=SIL2
    pti_days        : INT := 365;   // Proof test interval (safety)
    status          : STRING[8];    // 'CURRENT', 'DUE', 'OVERDUE'
  END_STRUCT
END_TYPE

// ISO 9001 calibration schedule check:
// Run daily in PLC or SCADA:
// IF today > instrument.next_calib_date THEN
//   ALARM: 'Calibration overdue: ' + instrument.tag_id
//   IF instrument.sil_level >= 2 THEN
//     Consider automatic fallback to substitute value
//   END_IF
// END_IF

🔍 Diagnostics & Troubleshooting

Analog signal faults are more subtle than digital faults. A stuck digital input is obvious; an analog input that reads 2% too high is invisible unless you have reference data. Systematic diagnostics — comparing instruments against each other, monitoring noise levels, tracking drift trends — reveals problems before they cause process upsets.

🔍 Analog Fault Diagnosis Challenge

An analog measurement system has a fault. Use virtual meter probes to measure at each point. Diagnose the root cause before clicking Reveal.

Click a probe point ● to measure.
🔀 2oo3 Voting — redundant measurement

Drag three independent transmitters. Watch the voted value and fault detection. Inject drift or failure on individual channels.

5.00 bar
5.05 bar
4.98 bar
0.30 bar
Voted PV
T1 status
OK
T2 status
OK
T3 status
OK
Vote quality
FULL
Spread

Systematic Analog Fault Diagnosis

Every analog measurement fault falls into one of five categories: no signal (wire break, lost power), wrong value (mis-scaling, transmitter failure, sensor damage), noisy signal (EMI, ground loop, loose connection), drifting signal (temperature drift, sensor contamination, aging), or stuck signal (frozen transmitter, saturated amplifier). The diagnosis procedure follows the signal from sensor to PLC register, measuring at each stage.

Diagnostics & Troubleshooting
// Analog fault diagnosis tree — systematic approach
//
// FAULT: PLC reads wrong value
//
// Step 1: Check PLC raw count vs expected
//   At 4mA (zero): raw should be ~6554 on 16-bit card
//   At 20mA (span): raw should be ~32767
//   Wrong raw count → problem is in wiring or transmitter
//   Correct raw count but wrong PV → scaling error in PLC program
//
// Step 2 (wrong raw): Measure loop current with clamp meter
//   Measure mA at PLC input terminals (series in loop)
//   Correct mA (e.g. 12mA for 50% process): transmitter OK
//     → PLC card fault or scale parameter error
//   Wrong mA:
//     Measure mA at transmitter output:
//     Correct at transmitter, wrong at PLC → cable fault (resistance change)
//     Wrong at transmitter:
//       Check transmitter supply voltage: should be 18-30V across terminals
//       Low voltage: PSU fault, high loop resistance, too many devices in loop
//       Correct supply but wrong mA: transmitter fault → replace
//
// FAULT: Signal is noisy (reading oscillates ±X units)
//
// Step 1: Identify noise frequency
//   Use HMI trend at fast update rate (100ms)
//   50/60 Hz component → mains coupling (check cable routing/shielding)
//   Random high-frequency → VFD switching noise
//   Low-frequency drift → temperature, pressure variation, ground loop
//
// Step 2: Noise source isolation
//   Disconnect field wiring, short PLC input terminals
//   If noise disappears → it came from field cable
//   If noise remains → PLC card or backplane noise
//   Reconnect cable, remove shield ground:
//   If noise decreases → ground loop (multiple ground points)
//
// FAULT: Signal drifts slowly over hours
//   Log PV trend for 24 hours
//   Correlate with ambient temperature record
//   If correlation > 0.8 → temperature drift
//     Fix: move transmitter away from heat source, insulate
//   If drift is monotonic and accelerating → sensor contamination
//     Inspect sensor element, clean or replace

Cross-Validation & Redundant Measurements

In critical processes, important measurements are made two or three times and the values are compared. A 2oo3 (two-out-of-three) voting system uses three independent transmitters — if any one disagrees with the other two, it is flagged faulty and the average of the remaining two is used. The PLC detects the disagreement automatically, raises an alarm, and continues operating without process upset. Cross-validation also works between related process variables: in a heat exchanger, inlet temperature, outlet temperature, and heat duty should satisfy energy balance within a known tolerance.
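The voting logic can be prototyped and unit-tested offline before it goes into the safety PLC. A Python sketch using median select (the middle of the three sorted values, which is inherently immune to one outlier) plus pairwise deviation checks — the names are illustrative:

```python
def vote_2oo3(pv1, pv2, pv3, limit):
    """Return (voted value, per-channel fault flags).
    A channel is faulted when it deviates from BOTH other channels
    while those two still agree with each other."""
    d12, d13, d23 = abs(pv1 - pv2), abs(pv1 - pv3), abs(pv2 - pv3)
    faults = (
        d12 > limit and d13 > limit and d23 <= limit,  # channel 1
        d12 > limit and d23 > limit and d13 <= limit,  # channel 2
        d13 > limit and d23 > limit and d12 <= limit,  # channel 3
    )
    voted = sorted((pv1, pv2, pv3))[1]  # median ignores one bad channel
    return voted, faults

# Transmitter 3 has drifted high:
print(vote_2oo3(5.00, 5.05, 9.00, limit=0.30))  # (5.05, (False, False, True))
```

Median select is a common alternative to averaging the agreeing pair: it needs no branching on the fault flags and degrades gracefully even before a channel is formally declared faulty.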

Diagnostics & Troubleshooting
// 2oo3 voting and cross-validation
FUNCTION_BLOCK Vote_2oo3
VAR_INPUT
  pv1, pv2, pv3    : REAL;   // Three independent measurements
  deviation_limit  : REAL := 2.0;  // Max deviation before alarm (eng units)
END_VAR
VAR_OUTPUT
  voted_pv         : REAL;   // Voted process value (average of agreeing channels)
  ch1_fault        : BOOL;   // TRUE if channel 1 differs from others
  ch2_fault        : BOOL;
  ch3_fault        : BOOL;
  vote_degraded    : BOOL;   // Only 2 channels agreeing
  vote_failed      : BOOL;   // All 3 disagree — cannot vote
END_VAR
VAR
  d12, d13, d23       : REAL;   // Pairwise deviations
  avg12, avg13, avg23 : REAL;   // Pairwise averages
END_VAR

// Pairwise deviations and averages
d12 := ABS(pv1 - pv2);
d13 := ABS(pv1 - pv3);
d23 := ABS(pv2 - pv3);
avg12 := (pv1 + pv2) / 2.0;
avg13 := (pv1 + pv3) / 2.0;
avg23 := (pv2 + pv3) / 2.0;

// A channel is faulted only when it disagrees with BOTH other channels
// while those two still agree with each other.
// (Comparing each channel against the average of the other two is unsafe:
// one badly failed channel can trip all three fault flags at once.)
ch1_fault := (d12 > deviation_limit) AND (d13 > deviation_limit)
             AND (d23 <= deviation_limit);
ch2_fault := (d12 > deviation_limit) AND (d23 > deviation_limit)
             AND (d13 <= deviation_limit);
ch3_fault := (d13 > deviation_limit) AND (d23 > deviation_limit)
             AND (d12 <= deviation_limit);

// No pair agrees — voting is impossible
vote_failed := (d12 > deviation_limit) AND (d13 > deviation_limit)
               AND (d23 > deviation_limit);

IF NOT vote_failed THEN
  // Median value from agreeing channels
  IF ch1_fault THEN
    voted_pv := avg23;  // Use channels 2 and 3
    vote_degraded := TRUE;
  ELSIF ch2_fault THEN
    voted_pv := avg13;  // Use channels 1 and 3
    vote_degraded := TRUE;
  ELSIF ch3_fault THEN
    voted_pv := avg12;  // Use channels 1 and 2
    vote_degraded := TRUE;
  ELSE
    voted_pv := (pv1 + pv2 + pv3) / 3.0;  // All agree — use average
    vote_degraded := FALSE;
  END_IF;
ELSE
  voted_pv := (pv1 + pv2 + pv3) / 3.0;  // Fallback — no reliable vote
  vote_degraded := TRUE;                // Flag degraded operation and alarm
END_IF;
END_FUNCTION_BLOCK

✏ Knowledge Test — 10 Questions

4–20 mA maths, RTD physics, filter design, PID, calibration, and fault diagnosis — all tested with full engineering explanations.

Question 1 / 10

A 4–20 mA current loop transmitter measures 0–100 bar. The PLC reads a raw count of 11648 on a 16-bit card (0–32767 full scale, where 0 = 0 mA and 32767 = 20 mA). What is the engineering pressure value?

Step 1: convert raw counts to loop current: 11648 / 32767 × 20 mA = 7.11 mA. Step 2: remove the 4 mA live zero and scale over the 16 mA span: (7.11 − 4) / 16 = 0.194, i.e. the signal sits at 19.4% of span. Step 3: engineering value = 0 + 0.194 × 100 ≈ 19.4 bar. The classic trap is scaling the raw count directly over the full 0–32767 range with no live-zero offset: 11648 / 32767 × 100 = 35.6 bar — the wrong answer a naive linear scaling produces.
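Both scaling paths can be checked in a couple of lines (illustrative Python; the card is assumed to map 0–32767 onto 0–20 mA as stated in the question):

```python
def counts_to_bar(raw, full_counts=32767, eng_lo=0.0, eng_hi=100.0):
    """Correct live-zero scaling: counts -> loop mA -> engineering units."""
    ma = raw / full_counts * 20.0            # card spans 0-20 mA over 0-32767
    return eng_lo + (ma - 4.0) / 16.0 * (eng_hi - eng_lo)

print(counts_to_bar(11648))       # ≈ 19.43 bar (correct)
print(11648 / 32767 * 100)        # ≈ 35.55 bar (the live-zero trap)
```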
Question 2 / 10

Why is the 4–20 mA standard preferred over 0–20 mA for long-distance industrial signal transmission?

The 4 mA offset (live zero) is one of the most important concepts in industrial instrumentation. When the loop current drops to 0 mA, it is impossible with a 0–20 mA system to determine if the process is at 0% or if the transmitter has failed. With 4–20 mA: 4 mA = 0% process value (valid signal), 0–3.8 mA = fault condition (broken wire, failed transmitter, lost power). Most PLC analog input cards and DCS systems include automatic "wire break" detection by monitoring for loop current below ~3 mA. This feature alone justifies the widespread adoption of 4–20 mA.
Question 3 / 10

What is the difference between a "2-wire" and "4-wire" transmitter connection?

In a 2-wire loop-powered transmitter, the PLC supplies 24V across the loop. Current flows: PSU + → transmitter → mA sink → PSU −. The transmitter modulates the loop current between 4–20 mA. The minimum 4 mA provides the transmitter operating power (typically 1.5–3.5 mA consumed internally at minimum). 4-wire transmitters have dedicated power input (e.g. 24V DC, 2 wires) and a separate signal output (4–20 mA, 2 wires). They can produce true 0 mA output. Used when transmitter consumes more power than the loop can provide (e.g. with display, heated enclosure).
Question 4 / 10

A PT100 RTD reads 109.73 Ω at operating temperature. Using the Callendar-Van Dusen equation approximation R(T) = R0 × (1 + 3.9083×10⁻³T − 5.775×10⁻⁷T²), what is the approximate temperature?

For PT100 at 0°C: R0 = 100 Ω. Using the linear term only: R ≈ 100 × (1 + 3.9083×10⁻³T). Solving for T: 109.73 = 100 × (1 + 3.9083×10⁻³T) → 1.0973 = 1 + 3.9083×10⁻³T → 0.0973 = 3.9083×10⁻³T → T ≈ 24.9°C. The quadratic term contributes −R0 × 5.775×10⁻⁷ × T² ≈ −0.036 Ω at 25°C, worth only about 0.09°C, so the linear approximation is adequate below 100°C. This is a common calculation on commissioning to verify transmitter calibration.
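The full quadratic can be inverted with the standard formula; a quick Python check (illustrative, using the IEC 60751 coefficients quoted in the question, valid for T ≥ 0 °C):

```python
import math

R0, A, B = 100.0, 3.9083e-3, -5.775e-7  # IEC 60751 coefficients, T >= 0 degC

def pt100_temp(r):
    """Invert R = R0*(1 + A*T + B*T^2) for T, T >= 0 degC."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r / R0))) / (2.0 * B)

print(pt100_temp(100.00))  # 0.0 degC at exactly R0
print(pt100_temp(109.73))  # ≈ 25.0 degC (the linear approximation gives 24.9)
```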
Question 5 / 10

What is "common mode rejection ratio" (CMRR) in an analog input amplifier, and why does it matter?

CMRR = 20 × log10(differential gain / common-mode gain). A CMRR of 80 dB means a 1V common-mode signal appears as only 100 µV at the output — a factor of 10,000 rejection. Industrial amplifiers need CMRR > 80 dB because VFDs, motors, and switching power supplies inject common-mode noise onto cable shields. A thermocouple produces about 40 µV/°C — a 5V common-mode disturbance rejected at 80 dB still leaves 500 µV at the input, a 12.5°C temperature error; with poor CMRR the error would be far worse. Always use differential inputs, twisted pairs, and proper shielding to exploit the amplifier's CMRR.
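The dB arithmetic in a few lines of illustrative Python:

```python
def common_mode_error_uV(v_cm, cmrr_db):
    """Residual input-referred error (µV) left by a common-mode voltage
    after rejection by an amplifier with the given CMRR in dB."""
    return v_cm / 10 ** (cmrr_db / 20.0) * 1e6

print(common_mode_error_uV(1.0, 80))        # 100 µV, as quoted above
err_uV = common_mode_error_uV(5.0, 80)
print(err_uV / 40.0)                        # ≈ 12.5 °C on a 40 µV/°C thermocouple
```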
Question 6 / 10

What is "gain error" vs "offset error" vs "linearity error" in an ADC? Which is correctable by simple calibration?

Offset error: ADC reads X counts when input is 0V — shifts entire curve up/down. Fixed by single-point zero calibration. Gain error: ADC span is larger/smaller than ideal — fixed by two-point (zero + span) calibration. Linearity (INL = Integral Non-Linearity): the deviation of the actual transfer function from a straight line between zero and full scale. It cannot be removed by linear calibration — it requires a multi-point calibration with polynomial fitting or lookup table correction. A good ADC has INL < 1 LSB. For 12-bit measurement of 0–10V: 1 LSB = 10V/4096 ≈ 2.4 mV. If INL < 2.4 mV, linearity is better than the resolution.
Question 7 / 10

A PLC analog output drives a control valve positioner. The valve jitters continuously. Increasing the output filter time eliminates the jitter. What was the root cause?

PLC scan jitter means the analog output register is updated at slightly different times each scan cycle. In a fast servo positioner (bandwidth 5–20 Hz), these micro-steps appear as rapid position disturbances. The output filter introduces a first-order lag that smooths the step updates into a continuous ramp between setpoints. Typical fix: enable the output card's built-in slew rate limiter, or apply a ramp/filter function block in the PLC program before writing to the AO register. The filter time should be 2–5× the positioner bandwidth period.
Question 8 / 10

You must measure a 0–10 bar pressure with 0.1 bar resolution. What minimum ADC resolution is required?

Required counts = span / resolution = 10 bar / 0.1 bar = 100 counts minimum, so 7 bits (128 counts) technically suffices. In practice you need margin for noise and calibration drift — a common guideline is 4–10× the minimum count: 100 × 4 = 400 counts, i.e. at least 9 bits, so use 10-bit or better. 10-bit (1024 counts) gives 10 / 1024 ≈ 0.0098 bar/count — 10× finer than required. 12-bit (4096 counts) is the industrial standard, giving 0.00244 bar/count, more than 40× finer than required.
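The bit-count reasoning as a reusable helper (illustrative Python; the `margin` parameter encodes the 4–10× engineering-margin guideline):

```python
import math

def min_adc_bits(span, resolution, margin=4.0):
    """Smallest ADC bit depth whose count range covers
    (span / resolution) * margin distinct steps."""
    counts_needed = span / resolution * margin
    return math.ceil(math.log2(counts_needed))

print(min_adc_bits(10.0, 0.1, margin=1.0))  # 7 — bare minimum (128 counts)
print(min_adc_bits(10.0, 0.1, margin=4.0))  # 9 — with engineering margin
print(10.0 / 4096)                          # 12-bit step ≈ 0.00244 bar/count
```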
Question 9 / 10

What causes "ground loop" interference specifically in analog signal circuits, and how is it eliminated?

Ground loop mechanism: the transmitter housing is grounded in the field (potential VA) and the PLC chassis is grounded at the panel (potential VB). If VA ≠ VB — almost always true on a large machine — the difference drives current through the signal wiring: I = VAB / R_cable. A 50 mV ground potential difference through 100 Ω of cable gives I = 0.5 mA; on a 4–20 mA loop that is 0.5 / 16 = 3.1% of span of measurement error. Solutions: (1) use an isolated AI card — the input floats, breaking the ground loop; (2) use a loop-powered 2-wire transmitter grounded at a single point, so there is no return path for circulating ground current; (3) install a galvanic isolator (DIN-rail barrier) in the signal path.
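The numbers in that mechanism, as a one-line illustrative Python helper:

```python
def ground_loop_error_pct(v_ground_diff, r_cable_ohm, span_ma=16.0):
    """Error current driven through the loop by a ground potential
    difference, expressed as % of the 4-20 mA measurement span."""
    i_err_ma = v_ground_diff / r_cable_ohm * 1000.0   # volts/ohms -> mA
    return i_err_ma / span_ma * 100.0

print(ground_loop_error_pct(0.05, 100.0))  # ≈ 3.1% of span from just 50 mV
```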
Question 10 / 10

What is the purpose of a "sample and hold" circuit ahead of an ADC in a multiplexed analog input card?

ADC conversion takes time (successive approximation: 1–100 µs; sigma-delta: 1–100 ms). If the input signal changes during conversion, the digital result represents a mixed-time sample — an error. The sample-and-hold (S/H) circuit closes its analog switch, charges a hold capacitor to the input voltage in < 1 µs, then opens the switch. The capacitor holds the voltage stable for the entire ADC conversion period regardless of signal changes. Critical for multiplexed cards that scan multiple channels: the S/H is shared, capturing each channel in sequence. Without S/H, fast-changing signals (>100 Hz on slow ADCs) would produce significant conversion errors.
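A worst-case estimate of the error without an S/H stage — the signal's peak slew rate times the conversion time — shows why it matters (illustrative Python; assumes a sinusoidal input, which is a simplification):

```python
import math

def conversion_error_pct(f_hz, amplitude_v, t_conv_s, full_scale_v):
    """Worst-case input change during one ADC conversion, as % of full scale.
    Peak slew rate of a sine A*sin(2*pi*f*t) is 2*pi*f*A volts per second."""
    peak_slew = 2.0 * math.pi * f_hz * amplitude_v
    return peak_slew * t_conv_s / full_scale_v * 100.0

# 100 Hz sine, 5 V amplitude, into a 10 V range with a 100 µs SAR conversion:
print(conversion_error_pct(100.0, 5.0, 100e-6, 10.0))  # ≈ 3.1% FS
```

Even a modest 100 Hz signal can move several percent of full scale during a 100 µs conversion — far more than one LSB on any practical card.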

Tutorial complete

Ready for more?