
WorkrCore
The intelligence behind every pick
WorkrCore is a Physical AI platform that enables robotics to understand and interact with the world within reach.
WorkrCore is made up of a series of specialized models, each built to master one dimension of industrial manipulation.
Together they form MOSAIC - a complete perception and planning system that lives on your floor, not in the cloud.
We don't just automate; we deploy an adaptable, intelligent workforce.


Before a robot can act, it needs to see in three dimensions.
The Mesh Generation model takes raw depth and RGB camera data and builds a continuous 3D reconstruction of the workspace.
Every surface, edge, and gap is updated in real time.
It's the spatial foundation that every other model depends on.
Mesh Generation
Input Depth + RGB camera streams
Output Dense point cloud and surface mesh
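As a minimal sketch of this stage, here is how a depth image is back-projected into a camera-frame point cloud with a standard pinhole model. The function name and toy intrinsics are illustrative, not WorkrCore's actual API:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a camera-frame point cloud
    using pinhole intrinsics. Returns an (N, 3) array, skipping invalid pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                    # zero depth = no return from sensor
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy 2x2 depth image: the one invalid (zero) pixel is dropped.
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (3, 3)
```

In production the resulting point cloud would then be fused across frames and meshed; this sketch covers only the geometric back-projection step.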

Working from the reconstructed 3D mesh, this model matches each detected object against its reference geometry and estimates its full six-degree-of-freedom pose: position and orientation in the workspace. It's what tells every downstream model exactly where each part sits.
Object Pose Estimation
Input 3D mesh + object reference geometry
Output 6-DoF pose per detected object
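One classical way to recover a rigid 6-DoF pose from matched reference and observed points is the Kabsch algorithm, shown below as an illustrative sketch. It assumes known point correspondences, which a real pose estimator would establish itself; the function name is hypothetical:

```python
import numpy as np

def estimate_pose(ref_points, obs_points):
    """Recover the rigid transform (R, t) mapping reference geometry onto
    observed points with known correspondences (Kabsch algorithm)."""
    ref_c = ref_points.mean(axis=0)
    obs_c = obs_points.mean(axis=0)
    H = (ref_points - ref_c).T @ (obs_points - obs_c)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = obs_c - R @ ref_c
    return R, t

# Rotate a reference tetrahedron 90 degrees about z, shift it, then recover.
ref = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
obs = ref @ Rz.T + np.array([0.5, 0.2, 0.0])
R, t = estimate_pose(ref, obs)
```

The recovered `R` and `t` reproduce the applied rotation and translation exactly, since the correspondence is noise-free.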

Given a part's exact pose, this model calculates the optimal grip, approach angle, gripper width, contact points, and force distribution. It reasons about geometry and material properties, ruling out unstable configurations before the arm moves.
New parts are trained and production-ready in under three minutes.
Strategic Grasp Planning
Input Object pose + gripper geometry
Output Ranked grasp candidates with confidence
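To make "ranked grasp candidates with confidence" concrete, here is a simplified heuristic for a parallel-jaw gripper: reject candidates the gripper can't span, and score the rest by how antipodal their contact normals are. This is an illustrative stand-in, not the model's actual scoring:

```python
import numpy as np

def rank_grasps(candidates, max_width):
    """Score parallel-jaw grasp candidates: contacts must fit the gripper,
    and opposing surface normals (antipodal contacts) score highest."""
    ranked = []
    for p1, n1, p2, n2 in candidates:
        width = np.linalg.norm(np.asarray(p2) - np.asarray(p1))
        if width > max_width:
            continue  # gripper cannot span these contacts
        # Antipodal quality: 1.0 when the normals point exactly at each other.
        score = float(-np.dot(n1, n2))
        ranked.append({"contacts": (p1, p2), "confidence": max(score, 0.0)})
    return sorted(ranked, key=lambda g: g["confidence"], reverse=True)

candidates = [
    ((0, 0, 0), np.array([1.0, 0, 0]), (0.04, 0, 0), np.array([-1.0, 0, 0])),
    ((0, 0, 0), np.array([1.0, 0, 0]), (0.03, 0.01, 0), np.array([-0.7, 0.7, 0])),
    ((0, 0, 0), np.array([1.0, 0, 0]), (0.20, 0, 0), np.array([-1.0, 0, 0])),
]
grasps = rank_grasps(candidates, max_width=0.08)
```

The third candidate is filtered out for exceeding the 8 cm jaw opening; the perfectly antipodal pair ranks first.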

The real world isn't static.
Parts shift, workers cross the cell, conveyors vary. This model maintains a continuous lock on every target object across frames, through brief occlusion or lighting changes, so the robot always acts on current data, not a stale snapshot.
It re-acquires targets in milliseconds.
Adaptive Object Tracking
Input Sequential frames + prior pose estimates
Output Continuous object trajectories
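The "coast through occlusion, correct on re-acquisition" behavior can be sketched with a constant-velocity alpha-beta filter. The class and gains below are illustrative assumptions, far simpler than a production tracker:

```python
import numpy as np

class AlphaBetaTracker:
    """Constant-velocity alpha-beta filter: predicts through missed
    detections (occlusion) and corrects when a measurement returns."""
    def __init__(self, x0, alpha=0.5, beta=0.3, dt=1.0):
        self.x = np.asarray(x0, float)   # position estimate
        self.v = np.zeros_like(self.x)   # velocity estimate
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def step(self, z=None):
        pred = self.x + self.v * self.dt          # predict forward one frame
        if z is None:                              # occluded: coast on prediction
            self.x = pred
        else:                                      # correct toward measurement
            r = np.asarray(z, float) - pred
            self.x = pred + self.alpha * r
            self.v = self.v + (self.beta / self.dt) * r
        return self.x

trk = AlphaBetaTracker([0.0, 0.0])
# Two detections, two occluded frames (None), then re-acquisition.
for z in ([1.0, 0.0], [2.0, 0.0], None, None, [5.0, 0.0]):
    est = trk.step(z)
```

During the `None` frames the estimate keeps moving at the last inferred velocity, so the re-acquired measurement lands close to the prediction instead of forcing a restart.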

Shared workspaces demand real-time awareness of everything that isn't the target.
This model continuously maps the robot's body, fixtures, tooling, and dynamic obstacles, human or otherwise, and flags conflicts before they happen.
It works alongside hardware safety systems so the robot slows, stops, or re-routes rather than making contact.
Intelligent Collision Avoidance
Input Full scene mesh + robot kinematic state
Output Collision-free configuration space
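A common first approximation for this kind of check is to wrap robot links and obstacles in bounding spheres and reject any configuration where they overlap. This sketch is purely illustrative of the idea; real collision checking uses far richer geometry:

```python
import numpy as np

def is_collision_free(link_centers, link_radius, obstacles):
    """Approximate robot links and scene obstacles as spheres and
    reject any configuration where bounding spheres overlap."""
    for c in link_centers:
        for oc, orad in obstacles:
            if np.linalg.norm(np.asarray(c, float) - np.asarray(oc, float)) < link_radius + orad:
                return False  # spheres intersect: possible contact
    return True

obstacles = [((1.0, 0.0, 0.5), 0.2)]            # one obstacle sphere
safe_cfg   = [(0.0, 0.0, 0.5), (0.3, 0.0, 0.5)]  # link spheres, well clear
unsafe_cfg = [(0.0, 0.0, 0.5), (0.9, 0.0, 0.5)]  # second link intrudes

ok_safe = is_collision_free(safe_cfg, 0.1, obstacles)      # True
ok_unsafe = is_collision_free(unsafe_cfg, 0.1, obstacles)  # False
```

Sweeping this predicate over candidate joint configurations is what carves out the collision-free configuration space the path planner searches.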

With a clear target, a validated grasp, and a collision-free map, this model generates the smoothest, fastest trajectory from the robot's current position to the goal and back.
It optimizes for cycle time, joint limits, and energy use simultaneously, and replans in real time when conditions change mid-motion.
Coordinated Path Planning
Input Grasp target + collision map + joint state
Output Optimized joint trajectory
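As a toy version of trajectory generation, here is a minimum-jerk interpolation between two joint configurations with hard clamping to joint limits. It shows the shape of the output (a timed sequence of joint waypoints), not the optimizer itself:

```python
import numpy as np

def min_jerk_trajectory(q_start, q_goal, steps, q_min, q_max):
    """Interpolate joints along a minimum-jerk profile (smooth start/stop),
    clamping every waypoint to the joint limits."""
    q_start, q_goal = np.asarray(q_start, float), np.asarray(q_goal, float)
    s = np.linspace(0.0, 1.0, steps)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5   # minimum-jerk time scaling
    traj = q_start + np.outer(blend, q_goal - q_start)
    return np.clip(traj, q_min, q_max)

# Two joints; the second joint's goal of -2.0 rad exceeds its -1.5 rad limit.
traj = min_jerk_trajectory([0.0, 0.0], [1.0, -2.0], steps=5,
                           q_min=[-1.5, -1.5], q_max=[1.5, 1.5])
```

The blend polynomial gives zero velocity and acceleration at both ends, which is why minimum-jerk profiles are a standard baseline for smooth arm motion; a real planner would additionally optimize cycle time and energy and replan mid-motion, as described above.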
Multiple Models Running Concurrently With One Shared View Of The World.
MOSAIC isn't a sequential pipeline that stalls if one stage is slow. Each model runs simultaneously on WorkrCore, sharing a unified spatial model of the work cell and updating together in real time.
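The shared-world-model idea can be sketched with threads publishing into one lock-protected scene state, each model writing its latest output independently rather than waiting on an upstream stage. Names and structure here are illustrative, not MOSAIC's actual architecture:

```python
import threading
import time

class SharedWorldModel:
    """One lock-protected scene state that every model reads and updates,
    instead of a pipeline where each stage waits on the previous one."""
    def __init__(self):
        self._lock = threading.Lock()
        self._state = {}

    def update(self, key, value):
        with self._lock:
            self._state[key] = value

    def snapshot(self):
        with self._lock:
            return dict(self._state)

world = SharedWorldModel()
models = ["mesh", "pose", "grasp", "tracking", "collision", "path"]

def run_model(name):
    for i in range(10):               # each model publishes its latest output
        world.update(name, i)
        time.sleep(0.001)

threads = [threading.Thread(target=run_model, args=(m,)) for m in models]
for t in threads:
    t.start()
for t in threads:
    t.join()
snap = world.snapshot()
```

A slow writer never blocks the others for longer than one dictionary update, so every model always reads the freshest view the rest of the system has produced.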
All Of This Runs On Your Floor, No Cloud Required.
WorkrCore handles both training and inference at the edge, on hardware that ships with every deployment. There's no latency penalty for cloud round-trips, no dependency on your network, and no production data leaving your facility.
When you onboard a new part, all six MOSAIC models update simultaneously, in under three minutes, from a standard CAD file or a quick teach-in on the iPad.