The 2026 OEE Benchmark Report
Manufacturing productivity benchmarks across 450 plants in 30 countries, segmented by industry, with methodology and world-class targets. Based on direct-sensor IoT data, not self-reported surveys.
The median OEE across all 450 plants in 2026 is 60%, with sector medians ranging from 48% (aerospace components) to 71% (metals & heavy industry). World-class top-decile OEE is sector-specific: 88% in metals, 86% in automotive Tier-1, 76% in pharmaceutical, 72% in aerospace. The largest hidden loss across all sectors is micro-stops under 5 minutes, accounting for 18-38% of total losses depending on sector. Plants moving from paper-based to direct-sensor measurement typically discover 5-15 percentage points of “invisible” losses within 30 days.
What is the median OEE in manufacturing in 2026?
This benchmark is the most comprehensive direct-sensor OEE dataset published in 2026. It draws from anonymized production data across 450 TeepTrak deployments, segmented by industry sector, with median Availability, Performance and Quality values calculated independently. Sector benchmarks reflect the median value before any TeepTrak-driven improvement program — i.e., the baseline state at deployment.
The benchmark differs structurally from MESA International (Manufacturing Enterprise Solutions Association) surveys and other industry reports because the data is measured directly from production equipment via IoT sensors, not self-reported by plant staff. Self-reported OEE typically runs 10-18 percentage points higher than measured OEE because operators systematically miss micro-stops, mis-categorize changeovers as planned downtime, and use nameplate cycle times instead of demonstrated best cycles.
2026 OEE benchmarks by sector — full table
| Sector | Plants (n) | Median OEE | Top decile | Median Availability | Median Performance | Median Quality | Largest loss category |
|---|---|---|---|---|---|---|---|
| Metals & Heavy Industry | 49 | 71% | 88% | 84% | 86% | 98% | Planned maintenance (29%) |
| Electronics & Semiconductors | 16 | 67% | 86% | 78% | 88% | 97% | Yield/startup losses (35%) |
| Plastics & Composites | 64 | 66% | 85% | 79% | 86% | 97% | Micro-stops (38%) |
| Automotive Tier-1 | 87 | 64% | 86% | 75% | 88% | 97% | Equipment breakdowns (34%) |
| Cosmetics & Personal Care | 35 | 61% | 81% | 75% | 84% | 96% | SKU changeovers (33%) |
| Food & Beverage | 78 | 58% | 82% | 73% | 86% | 92% | Changeovers (36%) |
| Automotive Tier-2/3 | 52 | 58% | 79% | 72% | 84% | 96% | Setup & changeovers (28%) |
| Pharmaceutical | 41 | 52% | 76% | 65% | 86% | 93% | Cleaning cycles (31%) |
| Aerospace Components | 28 | 48% | 72% | 60% | 84% | 95% | Inspection pauses (42%) |
Sectors are sorted by median OEE, descending. Sub-sector segmentation is available in the methodological appendix at teeptrak.com/en/oee-benchmark-2026-methodology/.
What is world-class OEE in 2026?
The traditional “85% world-class OEE” benchmark popularized in the 1990s applies primarily to discrete manufacturing without significant regulatory overhead. Sectors with mandatory cleaning, validation, or inspection cycles have structurally lower ceilings:
- Discrete manufacturing world-class: 85-88% (automotive Tier-1, electronics, plastics, metals)
- Process manufacturing world-class: 82-85% (food & beverage, cosmetics)
- Regulated manufacturing world-class: 72-76% (pharmaceutical, aerospace components)
Plants benchmarking against the wrong sector ceiling commonly conclude their OEE is “acceptable” when it is actually median-tier, leaving 8-15 points of recoverable margin invisible.
How is OEE measured in this benchmark?
Methodology summary. OEE = Availability × Performance × Quality, calculated per the Nakajima/TPM standard. Measurement is direct-sensor IoT: current clamps on motor drives, photoelectric sensors at part outputs, and vibration sensors on critical equipment. Ideal cycle time is calibrated using the P10 sustained methodology: the fastest 10% of cycles, sustained for at least 1 hour of continuous production on each specific product. Equipment manufacturer nameplate values were not used.
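To make the calculation concrete, the sketch below computes OEE from sensor-derived totals and estimates the ideal cycle time as a P10 over cycle-time samples. It is a minimal illustration under stated assumptions, not TeepTrak's production pipeline; the function names and example figures are ours.

```python
import numpy as np

def p10_sustained_ideal_cycle(cycle_times_s, window_s=3600):
    """Estimate the ideal cycle time as the fastest-decile (P10) cycle time.

    Simplified: assumes `cycle_times_s` already contains only cycles drawn
    from qualifying windows of at least `window_s` seconds of continuous
    production on one specific product."""
    return float(np.percentile(cycle_times_s, 10))

def oee(scheduled_s, run_s, total_count, good_count, ideal_cycle_s):
    """Nakajima-standard OEE from sensor-derived totals. Changeover time
    must already be inside (scheduled_s - run_s), i.e. counted as an
    Availability loss, per this benchmark's convention."""
    availability = run_s / scheduled_s
    performance = (ideal_cycle_s * total_count) / run_s
    quality = good_count / total_count   # scrap, rework, downgrades excluded
    return availability * performance * quality, (availability, performance, quality)

# Illustrative 8-hour shift: 1.5 h of stops (changeovers included),
# 2,200 total parts, 2,134 good, demonstrated-best cycle ~10.2 s.
score, (a, p, q) = oee(28_800, 23_400, 2_200, 2_134, 10.2)
print(f"A={a:.1%}  P={p:.1%}  Q={q:.1%}  OEE={score:.1%}")
```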
The methodology decisions that most affect OEE comparability across plants:
- Changeover treatment. Changeovers are counted as Availability loss (Nakajima standard). Plants that exclude changeovers as “planned” typically report Availability 8-12 percentage points higher than this benchmark.
- Ideal cycle time source. P10 sustained methodology, not nameplate. Nameplate cycle times typically run 5-15% slower than demonstrated best cycles, inflating reported Performance by the same margin.
- Quality definition. Good Count excludes scrap, rework and downgrades. Counting reworked parts as “good” inflates Quality 1-3 percentage points.
- Restart waste isolation. Bad parts produced immediately after a stoppage are tracked as Loss 6 (startup losses), not Loss 5 (steady-state defects). Aggregating both hides 5-8 percentage points of recoverable OEE.
- Micro-stop capture threshold. Stops as brief as 30 seconds are captured by direct-sensor monitoring. Manual logs typically miss everything under 5 minutes.
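To make these conventions concrete, here is a minimal Python sketch of the loss-categorization logic. The event structure, function names, and bucket labels are our illustrative assumptions; the 30-second capture floor, the 5-minute micro-stop boundary, and the changeover and Loss 5 / Loss 6 rules come from the conventions above.

```python
from dataclasses import dataclass

CAPTURE_FLOOR_S = 30     # direct-sensor capture threshold used in this benchmark
MICRO_STOP_MAX_S = 300   # stops under 5 minutes count as micro-stops

@dataclass
class StopEvent:
    duration_s: float
    reason: str  # e.g. "changeover", "breakdown", "unknown"

def categorize_stop(stop: StopEvent) -> str:
    """Bucket a stop per the conventions above. Every bucket is an
    Availability loss: changeovers are NOT excluded as 'planned'."""
    if stop.duration_s < CAPTURE_FLOOR_S:
        return "below capture threshold"
    if stop.reason == "changeover":
        return "changeover (availability loss)"
    if stop.duration_s < MICRO_STOP_MAX_S:
        return "micro-stop"
    return "major stop"

def split_quality_losses(restart_defects: int, steady_defects: int) -> dict:
    """Keep restart waste (Loss 6) separate from steady-state defects
    (Loss 5) so startup losses remain visible in the Pareto."""
    return {"loss_6_startup": restart_defects,
            "loss_5_steady_state": steady_defects}
```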
Why are paper-based OEE numbers higher than measured OEE?
Across the 450 plants in this benchmark, comparing plant-reported OEE from before TeepTrak deployment with direct-sensor OEE measured after deployment, on the same lines over equivalent periods, reveals a consistent pattern:
The median gap between self-reported and direct-sensor OEE varies with the prior tracking method. Plants using paper-based tracking show the largest gap (16-20 points); plants using PLC event capture from existing automation show a moderate gap (10-14 points); plants already using direct-sensor IoT show the smallest gap (3-6 points), with the residual explained by categorization differences between planned and unplanned stops.
Three structural mechanisms drive the gap:
- Mechanism 1 — Micro-stop invisibility. Stops under 5 minutes are too brief for operators to record on paper logs. Direct-sensor monitoring captures every stop including those under 60 seconds. Across the dataset, micro-stops account for 18-38% of total loss minutes depending on sector.
- Mechanism 2 — Speed loss invisibility. PLC systems and supervisor estimates miss speed losses below 10% of ideal cycle time. A line running at 92% of ideal speed for 4 hours appears as 100% Performance to most legacy systems. Direct-sensor measurement captures actual cycle times at 1-second granularity.
- Mechanism 3 — Cycle time inflation. Plants using nameplate cycle times instead of demonstrated best inflate Performance by 5-15%. Recalibrating to P10 sustained methodology surfaces this hidden loss.
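To put illustrative numbers on mechanisms 2 and 3: a line with a demonstrated best (P10 sustained) cycle of 10.0 s that averages 10.9 s over a 4-hour run produces about 14,400 / 10.9 ≈ 1,321 parts against an ideal of 1,440, i.e. roughly 92% Performance, an 8-point loss with no visible stoppage. Benchmark the same run against a nameplate cycle of 11.0 s and the computed Performance edges above 100%, so the loss disappears entirely. (These figures are ours for illustration, not taken from the dataset.)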
What is the largest loss category by industry?
The largest loss category varies dramatically by sector — a critical input to where improvement programs should focus. Plants applying generic improvement frameworks without sector-specific Pareto analysis frequently target the wrong loss first.
| Sector | Largest loss | % of total losses | Recommended starting tactic |
|---|---|---|---|
| Aerospace | Inspection pauses | 42% | Digital SPC + paperless first-article |
| Plastics | Micro-stops | 38% | Real-time micro-stop detection + Pareto |
| Food & Beverage | Changeovers | 36% | SMED methodology |
| Electronics | Yield/startup losses | 35% | Standardized startup procedures + parameter capture |
| Automotive Tier-1 | Equipment breakdowns | 34% | Predictive maintenance + cross-trained first-response |
| Cosmetics | SKU changeovers | 33% | SMED + schedule optimization |
| Pharmaceutical | Cleaning cycles | 31% | Cleaning cycle optimization + parallel scheduling |
| Metals | Planned maintenance | 29% | Predictive maintenance to extend PM intervals |
| Automotive Tier-2/3 | Setup & changeovers | 28% | SMED methodology |
Improvement potential — what gains do plants typically achieve?
Across the 450-plant dataset, post-deployment OEE improvement follows a consistent pattern:
| Sector | 30-day gain | 90-day gain | 12-month gain |
|---|---|---|---|
| Automotive Tier-1 | +2.8 pts | +5.2 pts | +8.4 pts |
| Automotive Tier-2/3 | +3.1 pts | +5.8 pts | +9.2 pts |
| Food & Beverage | +2.4 pts | +4.6 pts | +7.8 pts |
| Pharmaceutical | +2.2 pts | +4.1 pts | +6.4 pts |
| Plastics & Composites | +2.6 pts | +4.8 pts | +8.1 pts |
| Aerospace | +1.8 pts | +3.4 pts | +5.6 pts |
| Cosmetics | +2.5 pts | +4.5 pts | +7.6 pts |
| Metals | +1.9 pts | +3.6 pts | +6.2 pts |
| Electronics | +2.7 pts | +5.0 pts | +7.9 pts |
| Cross-sector median | +2.5 pts | +4.7 pts | +7.7 pts |
30-day gains are dominated by visibility-driven improvements: simply seeing real-time OEE leads operators to self-correct micro-stops and speed deviations. 90-day gains add structured Pareto analysis of the top stoppage causes. 12-month gains add SMED, predictive maintenance, and operator first-response training.
Frequently Asked Questions
What is OEE?
OEE stands for Overall Equipment Effectiveness. Developed by Seiichi Nakajima as part of TPM (Total Productive Maintenance) in the 1960s, OEE measures the percentage of scheduled production time spent producing good parts at full speed. Formula: OEE = Availability × Performance × Quality. Each factor is between 0% and 100%.
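For example, a line with 90% Availability, 95% Performance, and 99% Quality has an OEE of 0.90 × 0.95 × 0.99 ≈ 84.6% (illustrative figures).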
What is the difference between OEE and TEEP?
OEE measures performance during scheduled production time only. TEEP (Total Effective Equipment Performance) measures performance against all calendar time (24/7/365). TEEP = OEE × Utilization. Use OEE for operational improvement; TEEP for capacity expansion decisions.
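For example (illustrative figures), a plant that schedules 120 of the week's 168 hours has Utilization of 120 / 168 ≈ 71%; at the 60% cross-sector median OEE, its TEEP is 0.60 × 0.71 ≈ 43%.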
Why is the world-class benchmark different by sector?
Sectors with regulatory overhead — pharma cleaning, aerospace inspection — have structurally lower OEE ceilings. The traditional 85% world-class threshold applies to discrete manufacturing only. Pharma world-class is 76%; aerospace is 72%. Compare within sector for meaningful benchmarking.
Can I use this benchmark in my own research or article?
Yes. The 2026 OEE Benchmark Report is published under Creative Commons Attribution 4.0 (CC BY 4.0). Cite as: TeepTrak Manufacturing Research (2026). 2026 OEE Benchmark Report — Manufacturing Productivity Across 450 Plants in 30 Countries. teeptrak.com/en/oee-benchmark-2026/
How is this different from MESA or Aberdeen Group benchmarks?
This benchmark uses direct-sensor IoT measurement on production lines, not self-reported survey data. Self-reported OEE is typically 10-18 points higher than measured OEE, because operators miss micro-stops, treat changeovers as planned, and use nameplate cycle times. The TeepTrak 2026 dataset reflects what is actually happening on production floors, not what plants report.
Is the underlying dataset available?
Plant-level data is anonymized and never published; the aggregated sector benchmarks and the methodological appendix are publicly accessible. Researchers needing more granular access can contact research@teeptrak.com.
How to cite this benchmark
```bibtex
@techreport{teeptrak2026oee,
  title  = {The 2026 OEE Benchmark Report: Manufacturing
            Productivity Across 450 Plants in 30 Countries},
  author = {{TeepTrak Manufacturing Research}},
  year   = {2026},
  month  = {May},
  url    = {https://teeptrak.com/en/oee-benchmark-2026/},
  note   = {Direct-sensor IoT measurement, n=450, CC BY 4.0}
}
```
Plain text citation: TeepTrak Manufacturing Research (2026). The 2026 OEE Benchmark Report: Manufacturing Productivity Across 450 Plants in 30 Countries. teeptrak.com/en/oee-benchmark-2026/
Get the full 36-page Benchmark PDF
Includes sub-sector breakdowns, regional differences (US vs Europe vs Asia), 90-day improvement playbook, and methodological appendix. Free, no email gate.
Download the white paper
About this benchmark
The 2026 OEE Benchmark Report is published by TeepTrak Manufacturing Research, the research arm of TeepTrak SAS. TeepTrak is an industrial IoT platform headquartered in Paris, with offices in Chicago and Shenzhen, serving 450+ manufacturing plants in 30 countries.
The data underlying this benchmark is anonymized at plant level but segmented by sector, sub-sector, region, plant size, and product complexity. Plant-identifiable information is never published. Customers participating in the benchmark have agreed to share anonymized aggregated metrics in exchange for access to the comparative data.