OMNIS VISION

Your dock has eyes now.

12 AI models. 44 API routes. Auto check-in from camera. Closed-loop verification. Training pipeline. Photo evidence chain. All on-device.

12 MODELS • 44 ROUTES • 14 TABLES
TensorFlow.js • SIGNAL • ONE • COMPLY
12
AI Models
Production-ready
44
API Routes
Full CRUD + inference
14
DB Tables
Full pipeline
0
Cloud Uploads
100% on-device

Trailer # OCR

97.3%, 38ms. Auto-matches appointments. trailer-ocr

Seal Verification

99.1%, 22ms. Photo-verify vs BOL. seal-check

Pallet Count

94.7%, 45ms. Cross-ref ASN. pallet-count

PPE Detection

96.2%, 31ms. Hard hats, vests. ppe-detect

Damage Assessment

91.5%, 52ms. Pre-unload classify. damage-assess

License Plate

98.4%, 28ms. Gate auto-ID. plate-read

Load Fullness

93.8%, 41ms. Door view %. load-full

Person Tracker

97.8%, 25ms. Safety zones. person-track

Forklift Detection

95.1%, 34ms. Proximity alerts. forklift-det

Door State

99.4%, 18ms. Feeds twin. door-state

Hazmat Label

98.7%, 36ms. DOT compliance. hazmat-scan

SCAC Code

96.9%, 32ms. Carrier alpha code. scac-read

AUTO CHECK-IN

Camera reads plate. Driver checks in automatically.

Camera reads plate or trailer number at gate → matches appointment → auto check-in. No QR scan needed. Gate assignment, jockey dispatch, dock allocation all automated. Logged in vision_auto_checkins.

plate-read → match appointment
auto check-in → assign door
dispatch jockey → notify dock
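The gate flow above can be sketched in a few lines of TypeScript. This is an illustrative sketch, not the OMNIS API: the types, the `autoCheckIn` function, and the 0.95 confidence floor are assumptions; only the plate-read → match → check-in → door-assignment sequence comes from the page.

```typescript
// Hypothetical sketch of gate auto check-in: match a plate-read (or
// trailer-ocr) detection to an open appointment, then check the driver in.
// All names and thresholds are illustrative assumptions.

interface Detection { model: "plate-read" | "trailer-ocr"; value: string; confidence: number; }
interface Appointment { id: string; plate?: string; trailerNo?: string; status: "scheduled" | "checked-in"; door?: string; }

const CONFIDENCE_FLOOR = 0.95; // below this, fall back to manual check-in

function autoCheckIn(det: Detection, appts: Appointment[]): Appointment | null {
  if (det.confidence < CONFIDENCE_FLOOR) return null;
  const match = appts.find(a =>
    a.status === "scheduled" &&
    (det.model === "plate-read" ? a.plate === det.value : a.trailerNo === det.value));
  if (!match) return null;
  match.status = "checked-in";
  match.door = assignDoor();  // door-assignment stub; real system dispatches jockey + notifies dock
  return match;
}

function assignDoor(): string { return "D-07"; }

const appts: Appointment[] = [{ id: "A1", plate: "ABC-1234", status: "scheduled" }];
const result = autoCheckIn({ model: "plate-read", value: "ABC-1234", confidence: 0.984 }, appts);
```

A low-confidence read returns `null`, which is where a real deployment would fall back to the manual QR flow.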
9-STEP PIPELINE

Every frame. Every stage.

Camera → Inference → Detection → SIGNAL → ONE → COMPLY → Twin → Alert → Learn. Every detection persisted with confidence, bbox, frame index, model version, latency.

1 Camera 2 Inference 3 Detect
4 SIGNAL 5 ONE 6 COMPLY
7 Twin 8 Alert 9 Learn
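The nine stages above compose naturally as a handler chain. A minimal sketch, assuming each stage transforms a frame record; stage names come from the page, the handler bodies are placeholders.

```typescript
// Sketch of the 9-stage per-frame pipeline as a fold over stage handlers.
// Each stage would do real work (inference, SIGNAL broadcast, COMPLY checks);
// here each just logs its name so the ordering is visible.

type Frame = { index: number; detections: string[]; log: string[] };
type Stage = (f: Frame) => Frame;

const stage = (name: string): Stage => f => ({ ...f, log: [...f.log, name] });

const pipeline: Stage[] = [
  "camera", "inference", "detect", "signal", "one",
  "comply", "twin", "alert", "learn",
].map(stage);

function processFrame(index: number): Frame {
  return pipeline.reduce((f, s) => s(f), { index, detections: [], log: [] });
}

const out = processFrame(42);
```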
TRAINING PIPELINE

Collect. Train. Deploy.

Verified detections feed vision_training_samples. Training runs produce versioned models in vision_trained_models. Promoted models sync via vision_model_sync. Self-improving loop.

COLLECT
TRAIN
PROMOTE
DEPLOY
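The promote step implies a champion/challenger comparison. A sketch of that loop, assuming promotion requires beating the current model's accuracy; the `TrainedModel` shape and `maybePromote` function are illustrative, only the table names come from the page.

```typescript
// Sketch of the collect → train → promote → deploy loop. A candidate from a
// training run (vision_trained_models) replaces the champion only if it is
// more accurate; a real promotion would also write to vision_model_sync.

interface TrainedModel { version: number; accuracy: number; promoted: boolean; }

function maybePromote(current: TrainedModel, candidate: TrainedModel): TrainedModel {
  if (candidate.accuracy > current.accuracy) {
    return { ...candidate, promoted: true };
  }
  return current;
}

let champion: TrainedModel = { version: 3, accuracy: 0.947, promoted: true };
champion = maybePromote(champion, { version: 4, accuracy: 0.951, promoted: false });
```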
PHOTO EVIDENCE

Every detection has proof.

vision_photo_evidence stores photo references for seal checks, damage, pallet counts. Links detection → photo → verification → alert. Auditable chain for disputes, insurance, compliance.

detection → photo → verification
photo → evidence → audit
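One way the evidence chain could be shaped in code. The field names are assumptions modeled on the `vision_photo_evidence` description; the expected-vs-detected comparison driving verification and alerting is from the page.

```typescript
// Illustrative record for the detection → photo → verification → alert chain.
// Photo references stay on-device; a mismatch both fails verification and
// raises an alert, giving an auditable trail for disputes and insurance.

interface PhotoEvidence {
  detectionId: string;
  photoRef: string;    // on-device photo reference, never uploaded
  expected: string;    // e.g. seal number from the BOL
  detected: string;    // e.g. seal number read from the photo
  verified: boolean;
  alertRaised: boolean;
}

function verifySeal(detectionId: string, photoRef: string, expected: string, detected: string): PhotoEvidence {
  const verified = expected === detected;
  return { detectionId, photoRef, expected, detected, verified, alertRaised: !verified };
}

const seal = verifySeal("det-881", "photo://seal/881.jpg", "SL-4471", "SL-4471");
const bad = verifySeal("det-882", "photo://seal/882.jpg", "SL-9001", "SL-9002");
```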
TASK VERIFICATION

6 endpoints. Vision verifies everything.

verify-receiving verify-pick verify-putaway cycle-count identify-sku inspect-damage. Each creates a photo evidence record with expected vs detected values.

verify-receiving
verify-pick
verify-putaway
cycle-count
identify-sku
inspect-damage
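The six endpoints share one pattern: compare an expected value against what the camera detected and record the result. A sketch as a dispatch table; the endpoint names are from the page, the shared `compare` handler is an illustrative stand-in for the real per-endpoint logic.

```typescript
// Sketch of the six verification endpoints as a dispatch table. Each real
// handler would also write a photo evidence record; here they just return
// the expected-vs-detected comparison.

type Verifier = (expected: number, detected: number) => { match: boolean; delta: number };

const compare: Verifier = (expected, detected) =>
  ({ match: expected === detected, delta: detected - expected });

const endpoints: Record<string, Verifier> = {
  "verify-receiving": compare,
  "verify-pick": compare,
  "verify-putaway": compare,
  "cycle-count": compare,
  "identify-sku": compare,
  "inspect-damage": compare,
};

const res = endpoints["cycle-count"](48, 47); // expected 48 pallets, camera counted 47
```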
MODEL REGISTRY

Every model. Every version. Every metric.

14 tables. vision_models tracks 12 models with TP/FP/FN rates, avg latency, total inferences. GET /autonomy measures no-human-needed %. GET /costs tracks per-model costs.

trailer-ocr
97.3%
door-state
99.4%
plate-read
98.4%
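The TP/FP/FN counts the registry tracks are enough to derive standard quality metrics. Precision and recall below are the textbook formulas; the autonomy ratio is an illustrative assumption about what "no-human-needed %" could mean, not the OMNIS formula, and the sample counts are made up.

```typescript
// Deriving registry metrics from per-model TP/FP/FN counts.
// precision = TP / (TP + FP), recall = TP / (TP + FN);
// autonomy here is simply auto-handled operations over total operations.

interface ModelStats { tp: number; fp: number; fn: number; autoHandled: number; total: number; }

const precision = (s: ModelStats) => s.tp / (s.tp + s.fp);
const recall = (s: ModelStats) => s.tp / (s.tp + s.fn);
const autonomy = (s: ModelStats) => s.autoHandled / s.total; // no-human-needed %

// Hypothetical counts for the door-state model over 1,000 operations.
const doorState: ModelStats = { tp: 994, fp: 4, fn: 2, autoHandled: 980, total: 1000 };
```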

No hardware. No cloud. Self-improving.

12 models. 44 routes. 14 tables. Training pipeline. Photo evidence. Autonomy scoring. All on-device via TensorFlow.js.

See Vision in Action

Not another barcode scanner.

Capability                      | OMNIS Vision | Cognex  | Zebra   | Manual
AI models                       | 12           | Custom  | 1-2     | 0
Auto check-in                   | ✓            | ✗       | ✗       | ✗
Training pipeline               | ✓            | Custom  | ✗       | ✗
Photo evidence chain            | ✓            | ✗       | ✗       | Manual
Task verification (6 endpoints) | ✓            | ✗       | ✗       | Manual
Autonomy scoring                | ✓            | ✗       | ✗       | ✗
WMS + twin + SIGNAL             | Native       | API     | API     | ✗
Hardware                        | Phone        | $5K+    | $2K+    | None
Per-scan cost                   | $0           | License | License | $0

The Full Vision Architecture

OMNIS Vision v2 is a complete computer vision architecture: 12 production AI models, 44 API routes, 14 database tables. The 9-step detection pipeline processes every camera frame through inference, persistence, SIGNAL broadcast, ONE scoring, COMPLY validation, twin bridging, alert evaluation, and learning loop feedback.

Auto check-in eliminates QR scans at facilities with gate cameras. Plate read → appointment match → driver checked in → door assigned → jockey dispatched → dock notified. Under 2 seconds from plate read to door assignment.

The training pipeline creates a self-improving system. Verified detections become training samples. Training runs produce versioned models with accuracy deltas. Promoted models sync to edge devices. Every shift makes the system more accurate.

Six verification endpoints extend vision into active task verification: receiving, picking, putaway, cycle counting, SKU identification, and damage inspection. Each creates photo evidence records linked to detections, expected values, and match results for disputes, insurance, and compliance audits.

All on-device via TensorFlow.js. Zero cloud uploads. The autonomy score tracks what percentage of operations the system handles without human input. SR&ED-eligible R&D under Canadian tax credit guidelines.

Ready to see OMNIS Vision in action?

Book a live demo and see every feature running on real warehouse data.

Book a Demo View Pricing →