| Model | Accuracy | Latency | Description | ID |
|---|---|---|---|---|
| Trailer # OCR | 97.3% | 38 ms | Auto-matches appointments | `trailer-ocr` |
| Seal Verification | 99.1% | 22 ms | Photo-verify vs BOL | `seal-check` |
| Pallet Count | 94.7% | 45 ms | Cross-ref ASN | `pallet-count` |
| PPE Detection | 96.2% | 31 ms | Hard hats, vests | `ppe-detect` |
| Damage Assessment | 91.5% | 52 ms | Pre-unload classify | `damage-assess` |
| License Plate | 98.4% | 28 ms | Gate auto-ID | `plate-read` |
| Load Fullness | 93.8% | 41 ms | Door view % | `load-full` |
| Person Tracker | 97.8% | 25 ms | Safety zones | `person-track` |
| Forklift Detection | 95.1% | 34 ms | Proximity alerts | `forklift-det` |
| Door State | 99.4% | 18 ms | Feeds twin | `door-state` |
| Hazmat Label | 98.7% | 36 ms | DOT compliance | `hazmat-scan` |
| SCAC Code | 96.9% | 32 ms | Carrier alpha code | `scac-read` |
Camera reads plate. Driver checks in automatically.
Camera reads plate or trailer number at gate → matches appointment → auto check-in. No QR scan needed. Gate assignment, jockey dispatch, dock allocation all automated. Logged in vision_auto_checkins.
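The gate flow above can be sketched as plain logic: match a camera read against open appointments, then create the check-in record. All names here (`Appointment`, `matchAppointment`, `autoCheckin`) are illustrative assumptions, not the actual OMNIS API.

```typescript
// Hypothetical sketch of gate auto check-in: plate/trailer read → appointment
// match → check-in record. Shapes and names are assumptions for illustration.

interface Appointment {
  id: string;
  plate: string;    // expected tractor plate
  trailer: string;  // expected trailer number
  carrier: string;
}

interface AutoCheckin {
  appointmentId: string;
  matchedBy: "plate" | "trailer";
  door: string;
  checkedInAt: number; // epoch ms
}

// Match a gate-camera read against open appointments, plate first,
// falling back to the trailer number.
function matchAppointment(
  read: { plate?: string; trailer?: string },
  open: Appointment[],
): { appt: Appointment; matchedBy: "plate" | "trailer" } | null {
  for (const appt of open) {
    if (read.plate && appt.plate === read.plate) return { appt, matchedBy: "plate" };
  }
  for (const appt of open) {
    if (read.trailer && appt.trailer === read.trailer) return { appt, matchedBy: "trailer" };
  }
  return null;
}

// Auto check-in: no QR scan; assign a door and return the log record.
function autoCheckin(
  read: { plate?: string; trailer?: string },
  open: Appointment[],
  assignDoor: (appt: Appointment) => string,
): AutoCheckin | null {
  const m = matchAppointment(read, open);
  if (!m) return null; // no match: fall back to manual check-in
  return {
    appointmentId: m.appt.id,
    matchedBy: m.matchedBy,
    door: assignDoor(m.appt),
    checkedInAt: Date.now(),
  };
}
```

A returned record would then be persisted to `vision_auto_checkins`, with the null case routed to manual check-in.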
Every frame. Every stage.
Camera → Inference → Detection → SIGNAL → ONE → COMPLY → Twin → Alert → Learn. Every detection persisted with confidence, bbox, frame index, model version, latency.
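The nine stages above can be modeled as an ordered chain of functions over a shared frame context. The stage names mirror the text; the context shape and the pass-through stage bodies are illustrative assumptions.

```typescript
// Minimal sketch of the 9-stage frame pipeline. Each stage here only records
// that it ran; a real stage would infer, persist, broadcast, score, validate,
// bridge to the twin, evaluate alerts, or feed the learning loop.

interface Detection {
  modelVersion: string;
  confidence: number;
  bbox: [number, number, number, number]; // x, y, w, h
  frameIndex: number;
  latencyMs: number;
}

interface FrameContext {
  frameIndex: number;
  detections: Detection[];
  log: string[]; // which stages ran, in order
}

type Stage = (ctx: FrameContext) => FrameContext;

const stageNames = [
  "camera", "inference", "detection", "signal",
  "one", "comply", "twin", "alert", "learn",
] as const;

const pipeline: Stage[] = stageNames.map(
  (name): Stage => (ctx) => ({ ...ctx, log: [...ctx.log, name] }),
);

function processFrame(frameIndex: number): FrameContext {
  const initial: FrameContext = { frameIndex, detections: [], log: [] };
  return pipeline.reduce((ctx, stage) => stage(ctx), initial);
}
```

Keeping the stages as a flat ordered list makes the "every frame, every stage" guarantee easy to audit: a frame's log is the pipeline, in order.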
Collect. Train. Deploy.
Verified detections feed vision_training_samples. Training runs produce versioned models in vision_trained_models. Promoted models sync via vision_model_sync. Self-improving loop.
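The promotion step of that loop can be sketched as a simple gate: a newly trained version replaces the deployed one only when its accuracy delta clears a threshold. The record shape and `minDelta` parameter are assumptions, not the actual `vision_trained_models` schema.

```typescript
// Illustrative promotion gate for the self-improving loop: promote (and later
// sync to edge devices) only if the candidate beats the deployed model.

interface TrainedModel {
  modelId: string;  // e.g. "seal-check"
  version: number;
  accuracy: number; // validation accuracy, 0..1
  promoted: boolean;
}

function promoteIfBetter(
  current: TrainedModel,
  candidate: TrainedModel,
  minDelta = 0.001, // require a meaningful accuracy gain, not noise
): TrainedModel {
  const delta = candidate.accuracy - current.accuracy;
  return delta >= minDelta ? { ...candidate, promoted: true } : current;
}
```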
Every detection has proof.
vision_photo_evidence stores photo references for seal checks, damage, pallet counts. Links detection → photo → verification → alert. Auditable chain for disputes, insurance, compliance.
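The detection → photo → verification → alert chain can be sketched as a single linked record: a mismatch between expected and detected values flags the record and attaches an alert. Field names here are assumptions, not the actual `vision_photo_evidence` schema.

```typescript
// Sketch of one link in the auditable evidence chain. A photo reference is
// stored either way; only a mismatch raises an alert.

interface EvidenceRecord {
  detectionId: string;
  photoRef: string;   // e.g. object-store key for the photo
  expected: string;   // value from the BOL / ASN
  detected: string;   // value the model read
  verified: boolean;
  alertId?: string;   // set only when expected != detected
}

function buildEvidence(
  detectionId: string,
  photoRef: string,
  expected: string,
  detected: string,
): EvidenceRecord {
  const verified = expected === detected;
  return {
    detectionId,
    photoRef,
    expected,
    detected,
    verified,
    // Mismatches keep the photo as proof for disputes and insurance.
    ...(verified ? {} : { alertId: `alert-${detectionId}` }),
  };
}
```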
6 endpoints. Vision verifies everything.
`verify-receiving`, `verify-pick`, `verify-putaway`, `cycle-count`, `identify-sku`, `inspect-damage`. Each creates a photo evidence record with expected vs detected values.
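All six endpoints share one shape: compare an expected value against what vision detected, and always create an evidence record. The handler below is a hedged sketch of that shape; only the endpoint names come from the text.

```typescript
// Illustrative shared handler for the six verification endpoints.
// Every call creates a photo evidence record, match or not.

type VerifyEndpoint =
  | "verify-receiving" | "verify-pick" | "verify-putaway"
  | "cycle-count" | "identify-sku" | "inspect-damage";

interface VerifyResult {
  endpoint: VerifyEndpoint;
  expected: string;
  detected: string;
  match: boolean;
  photoEvidenceId: string;
}

let evidenceSeq = 0; // stand-in for a database-generated ID

function handleVerify(
  endpoint: VerifyEndpoint,
  expected: string,
  detected: string,
): VerifyResult {
  evidenceSeq += 1;
  return {
    endpoint,
    expected,
    detected,
    match: expected === detected,
    photoEvidenceId: `pe-${evidenceSeq}`,
  };
}
```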
Every model. Every version. Every metric.
14 tables. vision_models tracks 12 models with TP/FP/FN rates, avg latency, total inferences. GET /autonomy measures no-human-needed %. GET /costs tracks per-model costs.
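The TP/FP/FN rates in `vision_models` reduce to standard precision and recall, and the autonomy number is a simple ratio. The formulas are standard; the record shape and function names are assumptions.

```typescript
// Reducing per-model counters to standard metrics, plus the autonomy ratio
// that GET /autonomy reports (per the text): operations completed without
// human input, as a percentage.

interface ModelStats {
  modelId: string;
  tp: number; // true positives
  fp: number; // false positives
  fn: number; // false negatives
}

function precision(s: ModelStats): number {
  return s.tp / (s.tp + s.fp);
}

function recall(s: ModelStats): number {
  return s.tp / (s.tp + s.fn);
}

function autonomyPct(autoOps: number, totalOps: number): number {
  return totalOps === 0 ? 0 : (100 * autoOps) / totalOps;
}
```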
No hardware. No cloud. Self-improving.
12 models. 44 routes. 14 tables. Training pipeline. Photo evidence. Autonomy scoring. All on-device via TensorFlow.js.
Not another barcode scanner.
| Capability | OMNIS Vision | Cognex | Zebra | Manual |
|---|---|---|---|---|
| AI models | 12 | Custom | 1-2 | 0 |
| Auto check-in | ✓ | — | — | — |
| Training pipeline | ✓ | Custom | — | — |
| Photo evidence chain | ✓ | — | — | Manual |
| Task verification (6 endpoints) | ✓ | — | — | Manual |
| Autonomy scoring | ✓ | — | — | — |
| WMS + twin + SIGNAL | Native | API | API | — |
| Hardware | Phone | $5K+ | $2K+ | None |
| Per-scan cost | $0 | License | License | $0 |
The Full Vision Architecture
OMNIS Vision v2 is a complete computer vision architecture: 12 production AI models, 44 API routes, 14 database tables. The 9-step detection pipeline processes every camera frame through inference, persistence, SIGNAL broadcast, ONE scoring, COMPLY validation, twin bridging, alert evaluation, and learning loop feedback.
Auto check-in eliminates QR scans at facilities with gate cameras. Plate read → appointment match → driver checked in → door assigned → jockey dispatched → dock notified. Under 2 seconds from plate read to door assignment.
The training pipeline creates a self-improving system. Verified detections become training samples. Training runs produce versioned models with accuracy deltas. Promoted models sync to edge devices. Every shift makes the system more accurate.
Six verification endpoints extend vision into active task verification: receiving, picking, putaway, cycle counting, SKU identification, and damage inspection. Each creates photo evidence records linked to detections, expected values, and match results for disputes, insurance, and compliance audits.
All on-device via TensorFlow.js. Zero cloud uploads. The autonomy score tracks what percentage of operations the system handles without human input. SR&ED-eligible R&D under Canadian tax credit guidelines.
Ready to see OMNIS Vision in action?
Book a live demo and see every feature running on real warehouse data.