The next generation vision system for robotics and machines
Ocellus simplifies vision and robot programming, letting you prototype quickly with a wide range of advanced modules to achieve the desired result. It is a programming tool for wiring together hardware devices, APIs, robots and services in new ways.
- Easy-to-use GUI
- Many Image Processing Modules
- External robot controller for any robot in real time
- Camera Agnostic
- Realtime Parameter Changes
- Multiple APIs
- Plugin Framework for Custom Modules
- Visual Programming
- AR Programming
- Artificial Intelligence
- Tracker support (HTC Vive, Intel T265, ArUco)
- Intrinsics calibration
- Physics engine
- Pose estimation of any object
Ocellus can reconstruct the real world in the virtual world, making it possible for robots to perceive and interact with their environments in novel ways and to take on tasks that would otherwise be out of reach.
Ocellus keeps the virtual and real worlds in sync using sensors, vision and a physics engine. Thanks to augmented reality, Ocellus can describe how it sees the world, and by knowing the user's location it can render the scene from the observer's perspective.
What makes Ocellus unique compared to other robot and vision solutions is that it works externally to the robot controller. Work that is usually done by the internal robot controller, such as path planning, obstacle avoidance, conveyor tracking and inverse kinematics, is handled by Ocellus instead.
Only when robot, user and camera/sensor are in sync does this new approach to programming become possible. The user can see and interact with objects that Ocellus understands, and can call actions on them such as “Pick and place at location x”.
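As a rough illustration of the idea of calling actions on objects the vision system understands, here is a minimal Python sketch. Every name in it (`DetectedObject`, `pick_and_place`, the action-dispatch shape) is invented for illustration and is not the Ocellus API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

# Hypothetical sketch: calling "Pick and place at location x" on a
# detected object. None of these names come from Ocellus itself.

@dataclass
class DetectedObject:
    name: str
    position: Tuple[float, float, float]          # (x, y, z), metres
    actions: Dict[str, Callable] = field(default_factory=dict)

    def call(self, action: str, **kwargs):
        """Dispatch a named action registered for this object."""
        return self.actions[action](self, **kwargs)

def pick_and_place(obj, target):
    # A real implementation would plan a path and stream it to the
    # robot; here we only report the intent.
    return f"pick {obj.name} at {obj.position}, place at {target}"

box = DetectedObject("box", (0.4, 0.1, 0.02),
                     {"pick_and_place": pick_and_place})
print(box.call("pick_and_place", target=(0.0, 0.5, 0.02)))
```

The point of the sketch is only the shape of the interaction: the user addresses an object the system already understands, not raw robot coordinates.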
Blob: Find objects based on color. Supported color spaces: LAB, HSV, HLS and RGB. Erode and dilate masks. Orientation detection using Principal Component Analysis or the smallest enclosing rectangle.
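Orientation detection with Principal Component Analysis, as the Blob module describes, can be sketched in a few lines of numpy. This is an illustrative stand-in, not the module's implementation (which presumably operates on OpenCV masks/contours):

```python
import numpy as np

def pca_orientation(mask):
    """Return the angle (radians) of the major axis of a binary mask."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack((xs, ys)).astype(float)
    pts -= pts.mean(axis=0)                  # centre the points
    cov = np.cov(pts, rowvar=False)          # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    major = eigvecs[:, np.argmax(eigvals)]   # principal axis
    if major[0] < 0:                         # fix the sign ambiguity
        major = -major
    return np.arctan2(major[1], major[0])

# An elongated horizontal blob: the angle should be close to 0 rad.
mask = np.zeros((20, 40), dtype=np.uint8)
mask[8:12, 5:35] = 1
print(pca_orientation(mask))
```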
Colorized Depth: Convert any 3D point cloud to a 2D depth image
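The core of a point-cloud-to-depth-image conversion is a pinhole projection with a z-buffer. The sketch below is a generic version with made-up intrinsics (`fx`, `fy`, `cx`, `cy`), not Ocellus parameters:

```python
import numpy as np

def depth_image(points, fx=100.0, fy=100.0, cx=32.0, cy=24.0, shape=(48, 64)):
    """Project (x, y, z) points into a depth image; 0 means no data."""
    img = np.zeros(shape, dtype=float)
    for x, y, z in points:
        if z <= 0:
            continue                        # behind the camera
        u = int(round(fx * x / z + cx))     # pixel column
        v = int(round(fy * y / z + cy))     # pixel row
        if 0 <= v < shape[0] and 0 <= u < shape[1]:
            # z-buffer: keep the nearest point per pixel
            if img[v, u] == 0 or z < img[v, u]:
                img[v, u] = z
    return img

cloud = [(0.0, 0.0, 1.0), (0.1, 0.0, 2.0)]
img = depth_image(cloud)
print(img[24, 32], img[24, 37])  # depths of the two projected points
```

Colorizing is then just mapping the depth values through a color map for display.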
Neural Network: Multi-class instance segmentation using Mask R-CNN, UNet V2 and DeepLab v3+. Class filtering, dimension restriction, and orientation detection using Principal Component Analysis or the smallest enclosing rectangle. Automatically takes screenshots on object detections for easy data collection. Object tracking support and a configurable DNN score threshold.
Point Cloud: Module for generating static point clouds in order to locate objects, e.g. on a conveyor, using 2D cameras
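How can a 2D camera yield 3D conveyor points? If the intrinsics and the camera's mounting height above the belt are known, each pixel back-projects onto the conveyor plane. A minimal sketch, with all numbers illustrative rather than taken from Ocellus:

```python
def pixel_to_conveyor(u, v, height=1.2, fx=800.0, fy=800.0,
                      cx=320.0, cy=240.0):
    """Map a pixel (u, v) to (x, y, z) metres on the belt plane,
    assuming the camera looks straight down from a known height."""
    x = (u - cx) * height / fx
    y = (v - cy) * height / fy
    return (x, y, 0.0)      # the belt defines z = 0

print(pixel_to_conveyor(320, 240))   # optical centre -> belt origin
print(pixel_to_conveyor(400, 240))   # 80 px off-centre in x
```

A tilted camera needs the full extrinsic pose (a ray-plane intersection instead of this scaled offset), but the principle is the same.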
Stack: Measure stacks of boxes, detect overhang and perform quality inspection
ArUco: Detect position and orientation of ArUco markers
Shape Match: Match a contour template with another detected contour
Hough: Detect straight lines or circles
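To make the Hough module concrete, here is a minimal line-detecting Hough transform in plain numpy. It is a teaching sketch of the algorithm, not the module itself (in practice one would use OpenCV's `HoughLines`):

```python
import numpy as np

def hough_peak(binary):
    """Return (rho, theta) of the strongest straight line in a binary image."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        # each edge pixel votes for every (rho, theta) line through it
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(len(thetas))] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]

img = np.zeros((50, 50), dtype=np.uint8)
img[:, 10] = 1                     # vertical line at x = 10
rho, theta = hough_peak(img)
print(rho, theta)                  # expect rho = 10, theta = 0
```

Circle detection works the same way with a 3-parameter accumulator (centre x, centre y, radius) instead of (rho, theta).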
RealSense Tracker: RealSense T265 tracking camera
HTC Vive Tracker: Creates a wireless and seamless connection between your attached tools and the VIVE system
Source modules provide data to Ocellus for further processing
Web Camera: Use any web camera as image source
GigE Camera: Use any CVB-compatible GigE camera as image source. Control exposure time, line rate, gain and FPS. Supports image undistortion (requires lens calibration).
RealSense: Intel RealSense camera support
Surface Scanner: Laser camera support (requires Gocator SDK)
Video Source: Use a video as data source
Image Source: Use a folder of images as data source
Media Manager: Record videos and screenshots from any supported source. View and organize collected media.
Modules: Add and remove modules. Measure CPU usage per module. Export and import module configurations or persist the current state.
Monitors: Control center for viewing cameras and other output from modules.
Neural Networks: Manage and deploy neural networks to the Neural Network module
Intrinsics: Manage intrinsics and calibrate cameras
Responsible for simulating the motions and reactions of objects as if they were under real-world constraints. By using sensors such as cameras and laser scanners in combination with AI and other vision algorithms, real-world objects can be brought into the physics engine in order to solve various problems.
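A toy example of what "simulating objects under real-world constraints" means: take a pose reported by vision and let gravity settle the object onto a support plane. This is purely illustrative and says nothing about how Ocellus's physics engine actually works.

```python
def settle(z, dt=0.01, g=-9.81, floor=0.0, steps=1000):
    """Integrate free fall until the object rests on the floor plane."""
    v = 0.0
    for _ in range(steps):
        v += g * dt              # gravity accelerates the object
        z += v * dt
        if z <= floor:           # simple contact constraint
            z, v = floor, 0.0
    return z

# A box detected 0.3 m above the table ends up resting on it.
print(settle(0.3))
```

Even this crude step shows why a physics engine helps: it can correct physically impossible poses (e.g. an object floating above the table) that noisy vision alone might report.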
Ocellus sees and interprets the world around it using sensors and cameras in order to create virtual twins of surrounding objects. Using VR, we can enter the virtual world and see how Ocellus understands the real world.
- Web Service support
- TCP/IP ASCII Support
- gRPC Support
- ABB RAPID Support
- ABB Pick Master support
Supported cameras and sensors
- Intel RealSense D435/D415/D435i/D455/L515
- Azure Kinect DK
- Common Vision Blox: see the CVB Supported Hardware list of standard and special hardware to find out what is CVB compatible. In addition to the listed hardware, CVB supports all GigE Vision and USB3 Vision GenICam-compliant cameras from different manufacturers.
- Gocator laser 3D profilers
- Line Scanners via CVB
- Any web camera