How Operations Consultants Evaluate MES and OEE Vendors for Client Recommendations

Written by the TEEPTRAK Team

Apr 22, 2026



Operations consultants at top-tier firms evaluate dozens of MES and OEE vendors every year. Some evaluations are formal RFPs run on behalf of enterprise clients; most are informal filters applied during diagnostic engagements to identify the two or three vendors worth recommending to a given client. The informal evaluation is where most consultant-vendor relationships actually form, and it is where vendors either make the shortlist or get filtered out before the client conversation even begins.

This article documents the informal evaluation framework that experienced operations consultants use. It covers the criteria that actually predict deployment success — as opposed to the criteria that look important on a vendor matrix but do not correlate with outcomes — and it closes with the specific failure modes that cause otherwise-good vendors to drop off shortlists. For vendors reading this, the article is also a guide to what to prepare before a consultant evaluation conversation.


François Coulloudon · Founder & CEO, TeepTrak · Ex-BCG (Paris Operations practice, 2011–2015) · INSEAD MBA · Polytechnique X2004
This article reflects perspectives built during four years at BCG working on lean programs, operational performance, and cost-reduction missions across Europe, Asia, and the Americas.

The five criteria that actually predict deployment success

Six years of TeepTrak deployments at 450+ plants, plus conversations with consultants at BCG, McKinsey, Bain, Kearney, Roland Berger, Sia Partners, and several boutique operations firms, converge on five criteria that predict whether a given vendor’s deployment at a given client will succeed.

Criterion 1: time from arrival to first reliable data. The single strongest predictor. Vendors that can produce validated OEE data within 48 hours of site arrival consistently succeed. Vendors that require two to four weeks of data collection before producing validated output consistently struggle, because the time lag erodes the client’s confidence that the deployment will actually work. The intermediate category — vendors that can produce data in five to ten days — is the hardest to evaluate; they often pass the POC test but then stall when scaling across multiple lines.
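For readers less familiar with the metric, the "validated OEE data" in criterion 1 is the standard availability × performance × quality product. A minimal computation sketch with illustrative shift numbers (not TeepTrak's implementation):

```python
def oee(planned_time_min, downtime_min, ideal_cycle_time_s, total_count, good_count):
    """Standard OEE = availability x performance x quality."""
    run_time_min = planned_time_min - downtime_min
    availability = run_time_min / planned_time_min
    # Performance: ideal production time divided by actual run time
    performance = (ideal_cycle_time_s * total_count) / (run_time_min * 60)
    quality = good_count / total_count
    return availability * performance * quality

# Illustrative shift: 480 min planned, 47 min downtime, 3 s ideal cycle,
# 7,000 parts produced, 6,650 good parts
print(round(oee(480, 47, 3.0, 7000, 6650), 3))  # → 0.693
```

Validating this number within 48 hours means the downtime and count inputs have been cross-checked against what the line actually did, which is what separates "first data" from "first reliable data."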

Criterion 2: operator interface entry rate in production. Vendors whose operator interface sustains 85%+ entry rate over a 90-day observation window consistently succeed. Vendors below 60% fail regardless of how good the rest of the system is, because the data is unreliable. This criterion is hard to evaluate from a product demo — every demo shows a clean interface — but it is observable from reference calls: ask three existing customers what the sustained entry rate has been over the last year. If the vendor cannot provide those references or the customers hedge, the vendor has a problem.
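The "sustained entry rate" in criterion 2 can be made concrete as confirmed operator entries divided by expected entries over the observation window. A minimal sketch, assuming hypothetical per-day counts:

```python
from datetime import date, timedelta

def sustained_entry_rate(daily_entries, daily_expected):
    """Share of expected operator entries actually made over the window."""
    made = sum(daily_entries.values())
    expected = sum(daily_expected.values())
    return made / expected if expected else 0.0

# Hypothetical 90-day window: 40 expected entries per day, 36 actually made
start = date(2026, 1, 1)
days = [start + timedelta(d) for d in range(90)]
expected = {d: 40 for d in days}
entries = {d: 36 for d in days}

rate = sustained_entry_rate(entries, expected)
print(f"{rate:.0%}, passes 85% bar: {rate >= 0.85}")  # → 90%, passes 85% bar: True
```

The point of measuring over the full window rather than a single week is that entry rates typically peak during the deployment honeymoon and decay afterward; a 90-day average exposes that decay.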

Criterion 3: brownfield equipment compatibility. Most manufacturing clients have equipment from the 1990s and 2000s coexisting with new equipment. Vendors that require PLC compatibility or specific OEM partnerships will fail on the legacy equipment, which is typically where the OEE losses are concentrated. External-sensor approaches pass this criterion; PLC-dependent approaches do not.

Criterion 4: coexistence with existing MES / ERP / quality systems. Clients already have a stack. A vendor whose sales narrative is “rip out your existing tools and start over” loses the evaluation. Vendors whose narrative is “we are a specialized layer that integrates with your existing stack and does the OEE piece better than anyone else” win. Consultants verify this by asking the vendor to describe specific integrations with the client’s actual ERP and MES brands.

Criterion 5: reference client quality and accessibility. Vendors with enterprise references at Fortune 1000 manufacturers pass the partner-approval filter at tier-one consulting firms. Vendors without them do not, regardless of product quality. More importantly, the references must be accessible: the consultant should be able to get a 30-minute reference call with a named operations leader at a named client, not a sales-curated testimonial. Vendors who hedge on reference access have weak references.

Free Download — Manufacturing Dashboard Design Guide
Tier-1 / Tier-2 / Tier-3 dashboard frameworks used by BCG, McKinsey and Bain operations teams.


Instant download. No email confirmation needed.

Criteria that look important but do not predict outcomes

Several criteria appear on vendor RFP matrices but do not correlate with deployment success. Consultants who weight them heavily end up with worse recommendations.

Product feature completeness. A vendor with 150 features is not better than a vendor with 30 features if the client will only use 15 of them. Feature count optimizes for evaluation-committee consensus, not for operational outcome. The vendors with the most features are usually the heaviest MES platforms, which fail on criterion 1 and criterion 3 above. Consultants should resist the feature-matrix-driven evaluation approach unless the client truly needs the full feature set.

Cloud versus on-premise architecture. Outside heavily regulated industries, most of the manufacturing world does not care, as long as the vendor supports both options and has defensible answers on data sovereignty and uptime. Cloud-only vendors lose the regulated clients (pharma, defense); on-premise-only vendors lose the clients who do not want to run their own IT. The winning answer is "both, client choice"; a vendor that cannot offer both has a scalability problem.

AI and machine learning claims. Every vendor now claims AI. Most of the claims are thin. The evaluation question is not “does the vendor have AI” but “does the AI output drive specific operational decisions that the client can execute on.” A vendor whose AI produces a weekly anomaly report that nobody reads has worse AI than a vendor with basic pattern recognition that triggers a specific improvement action. Consultants should probe beyond the AI marketing narrative to the actual workflow integration.

Vendor size and global footprint. This criterion matters for enterprise clients with global operations, but it is often weighted too heavily for mid-market clients. A 100-person vendor that deploys well is a better recommendation for a mid-market client than a 10,000-person vendor that deploys poorly because the mid-market client is not important enough to the larger vendor to get priority attention. Consultants should evaluate size relative to client need, not in absolute terms.

The specific failure modes that drop vendors off shortlists

Beyond the formal criteria, there are recurring vendor behaviors that cause experienced consultants to drop the vendor from future shortlists. Any one of these is usually disqualifying.

Inability to demonstrate real client data in the sales conversation. A vendor that cannot show the consultant a real dashboard with real client OEE data (anonymized is fine) has a transparency problem. Every serious vendor can do this within two emails of the initial contact. Vendors that delay, make excuses, or only share marketing-produced dashboards have something to hide.

Oversold timeline commitments. Vendors that commit to timelines they cannot actually hit — typically “full deployment in two weeks” when the realistic timeline is six — damage their consultant relationships permanently. Realistic timelines that slightly exceed client expectation, followed by delivery on or ahead of the committed date, build trust. The opposite destroys it.

Pricing opacity. Vendors who refuse to give a price range in the evaluation conversation, or who quote wildly different prices to different clients for similar deployments, signal that they are willing to optimize for discriminatory pricing rather than consistent value. Consultants recommend vendors whose pricing they can predict; they avoid vendors whose pricing feels like a negotiation game.

Defensive posture on limitations. Every tool has limitations. Vendors who acknowledge their limitations candidly — “we are not the right choice for clients who need X” — earn consultant trust. Vendors who claim they can do everything, including things outside their actual capability, lose consultant trust the first time a client discovers the limitation that was not disclosed. The paradox is that vendors who are honest about what they do not do receive more recommendations, not fewer.

Slow response on reference requests. The consultant asks for three reference customers. The vendor takes two weeks to provide them, or provides customers who are not comparable to the client being evaluated. This pattern almost always indicates the vendor does not actually have many references willing to talk, or that the references are mid-market when the client is enterprise. Either way, it is a disqualifying signal.

A practical evaluation sequence for a consulting engagement

When operations consultants need to recommend an OEE vendor to a client during a diagnostic, the practical sequence takes about two weeks and filters from roughly a dozen candidates to two or three shortlisted vendors.

Week one, days 1-3: initial contact with eight to twelve vendors through LinkedIn, direct email, or mutual introductions. Request a 30-minute demo and a reference customer list. Filter out vendors who do not respond within 48 hours or who demand a long discovery call before any demo.

Week one, days 4-7: demo calls with the six to eight vendors who passed the first filter. Evaluate against the five core criteria above. Filter out vendors who cannot show real client data, whose demo produces more questions than answers, or who seem to be pitching a different product than the one the client needs.

Week two, days 8-10: reference calls with the three to four remaining vendors' customers. These calls are the real evaluation. Ask about sustained entry rates, actual time-to-first-data, integration experience with the client's ERP and MES brands, and what the vendor does when things go wrong. Filter to the top two or three.

Week two, days 11-14: 48-hour POC with the top one or two vendors. This is the final filter and often produces a clear winner. By the end of week two, the consulting team has a recommendation backed by real data at the actual client site, not a vendor matrix assembled from marketing materials.
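The two-week sequence above is, in effect, a staged filter. A schematic sketch (the Vendor fields and sample candidates are illustrative assumptions; the 85% entry-rate and 48-hour thresholds come from the criteria earlier in the article):

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    responded_48h: bool     # replied within 48 hours of initial contact
    showed_real_data: bool  # demoed a real (anonymized) client dashboard
    reference_rate: float   # sustained entry rate reported by references
    poc_hours_to_data: int  # hours to first validated data in the POC

# Each stage mirrors one step of the two-week sequence
STAGES = [
    ("initial contact", lambda v: v.responded_48h),
    ("demo calls", lambda v: v.showed_real_data),
    ("reference calls", lambda v: v.reference_rate >= 0.85),
    ("48h POC", lambda v: v.poc_hours_to_data <= 48),
]

def shortlist(vendors):
    remaining = list(vendors)
    for _stage, keep in STAGES:
        remaining = [v for v in remaining if keep(v)]
    return remaining

# Hypothetical candidates
candidates = [
    Vendor("A", True, True, 0.90, 36),
    Vendor("B", True, True, 0.55, 30),   # fails the reference-call filter
    Vendor("C", False, True, 0.92, 24),  # no response within 48 hours
]
print([v.name for v in shortlist(candidates)])  # → ['A']
```

The value of modeling it this way is the ordering: the cheap filters (responsiveness, real data) run before the expensive ones (reference calls, on-site POC), which is why the sequence fits in two weeks.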

Why this framework favors certain vendor types

The framework systematically favors specialized lightweight OEE vendors over heavy MES platforms for the 80% of manufacturing clients who do not need the full MES capability set. This is not a bias; it is a consequence of the criteria that predict deployment success.

Heavy MES platforms optimize for feature completeness, enterprise pedigree, and global footprint. They underperform on time-to-first-data, operator interface quality, and brownfield compatibility. For the 20% of clients who truly need the full MES capability — pharma, aerospace, defense — the MES platforms are the right answer despite their deployment difficulty. For the remaining 80%, lightweight OEE is the right answer.

TeepTrak’s positioning in this framework is specific: it is a specialized lightweight OEE vendor with enterprise pedigree (Stellantis, Alstom, Hutchinson, Kraft Heinz, Essilor), a 48-hour time-to-first-data commitment that is actually deliverable, an operator interface with documented 90%+ sustained entry rates, and a PLC-free architecture that works on brownfield equipment. It is not the right answer for every client, but when the consulting team’s criteria align with the profile above, it usually makes the shortlist.

External references: Manufacturing Execution System — Wikipedia · MESA International · OEE — Wikipedia

Related reading: Why strategy consultants recommend lightweight OEE over MES · What 4 years at BCG taught me about OEE design · The enterprise client pattern behind TeepTrak adoption

Consulting with a Manufacturing Client?
Our team has supported BCG, McKinsey, Bain, Kearney and Roland Berger engagements across 30+ countries. Free 48h POC on any client site — no client commitment required.
Discuss a Client POC
