
Brain Tech

The next generation of artificial brains for self-driving vehicles

Artificial Brains

Brain Corp is applying expertise in machine learning and computer vision to create A.I. systems capable of functioning autonomously in complex human environments.

EMMA (Enabling Mobile Machine Automation) is our navigation-focused A.I. powered by BrainOS. It provides a human-centered user experience and represents a milestone in collaborative robotics.



BrainOS is a commercial-grade, proprietary operating system that enables robots to perceive their environments, control motion, and navigate using visual cues and landmarks, while seeing and avoiding people and obstacles.

BrainOS integrates with our cloud-based Robotic Operating Center to handle complex tasks via remote control while providing enhanced functionality. The BrainOS libraries contain a range of machine learning algorithms that form the core of EMMA's functionality.

BrainOS Libraries:

  • Full navigation stack
  • Static and dynamic obstacle avoidance
  • Single-pass mapping & routing technology
  • Elastic path planning with adaptive behavior
  • Adaptive learning by demonstration
  • Object and surface anomaly detection
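To make the navigation and obstacle-avoidance concepts above concrete, here is a minimal toy sketch of path planning on an occupancy grid. It is purely illustrative and not Brain Corp's actual stack: a breadth-first search that routes around cells marked as obstacles.

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first path search on an occupancy grid.

    Toy stand-in for a navigation stack: cells marked 1 are
    obstacles; returns the shortest list of free cells from
    start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:     # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],   # a wall with a single gap at column 2
    [0, 0, 0, 0],
]
path = plan(grid, (0, 0), (2, 0))
print(path)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

A production system like BrainOS operates on continuous sensor data rather than a hand-built grid, but the core idea of routing around mapped obstacles is the same.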

Brain Module

BrainOS runs on Brain Corp’s proprietary Brain Module, a ruggedized high-speed computer that controls an integrated sensor array with multi-layer redundancy for optimized safety.

Brain Corp provides Brain Modules to its manufacturing partners to augment their standard, manual machines. To activate robotic control, customers subscribe to Brain Corp’s autonomy service.

Advanced A.I. Research:

Brain Corp works with DARPA to develop novel machine learning algorithms that focus on visual perception and motor control. Our most recent research utilizes concepts from neuroscience to overcome limitations of existing deep-learning algorithms.

This research looks at a brain-inspired visual system that utilizes temporal input and contextual feedback. The presented concepts can be put into practice using even very simple but well-known machine learning components called Multi-Layer Perceptrons. These components are assembled into a crystalline structure so that each component provides "background context" for the others. Tests of the system show that it is able to visually track objects across the challenging visual conditions often encountered by robots, such as backlighting, shadows, and variable viewing angles.
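The contextual-feedback idea can be sketched with a toy example. Below, two small Multi-Layer Perceptrons process a sequence of frames, and each one receives the other's previous output as "background context" alongside the current frame. This is a hypothetical illustration of the structure described above, not the published model; all dimensions and names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, hidden, out_dim):
    """Parameters for a one-hidden-layer perceptron."""
    return {
        "W1": rng.normal(0, 0.1, (hidden, in_dim)),
        "W2": rng.normal(0, 0.1, (out_dim, hidden)),
    }

def forward(params, x):
    h = np.tanh(params["W1"] @ x)
    return np.tanh(params["W2"] @ h)

# Two MLPs wired so each receives the other's previous output
# as background context alongside the current frame features.
FEAT, CTX = 8, 4
a = mlp(FEAT + CTX, 16, CTX)
b = mlp(FEAT + CTX, 16, CTX)

ctx_a = np.zeros(CTX)   # context for A: output of B at the previous frame
ctx_b = np.zeros(CTX)   # context for B: output of A at the previous frame

for t in range(5):                      # temporal input: a frame sequence
    frame = rng.normal(size=FEAT)
    out_a = forward(a, np.concatenate([frame, ctx_a]))
    out_b = forward(b, np.concatenate([frame, ctx_b]))
    ctx_a, ctx_b = out_b, out_a         # exchange context for the next frame

print(out_a.shape)  # (4,)
```

In the research system many such components are tiled into a lattice, so every unit's context comes from its neighbors; training the weights is omitted here for brevity.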

This research describes a brain-inspired visual system that learns by watching videos. The model uses several core ideas from biological brains, such as sparse coding, a phenomenon observed by neuroscientists and related to the brain's efficiency. The model also gains efficiency by including cutting-edge features adopted by deep learning researchers, such as weight sharing within a layer. Tests of the system show that it develops sensitivity to visual patterns similar to what is seen in real brains. The system was able to recognize and track objects with excellent accuracy.
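Sparse coding itself can be demonstrated in a few lines. The sketch below uses the standard ISTA algorithm (not necessarily the method used in this research) to infer a sparse code: a signal is represented using only a handful of active dictionary atoms, mirroring the sparse neural activity observed in visual cortex. The dictionary and signal here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(x, lam):
    """Shrink values toward zero: the step that induces sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(D, x, lam=0.1, steps=100):
    """ISTA: find a sparse code z such that D @ z approximates x."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ z - x)        # gradient of the reconstruction error
        z = soft_threshold(z - grad / L, lam / L)
    return z

# Overcomplete dictionary: 16-dimensional signals, 64 atoms
D = rng.normal(size=(16, 64))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

x = 2.0 * D[:, 3] - 1.5 * D[:, 40]      # signal built from just two atoms
z = sparse_code(D, x)
print(np.count_nonzero(np.abs(z) > 1e-3))  # far fewer than 64 active atoms
```

A learning system would also adapt the dictionary D from video data; here it is fixed so the sparse inference step stands alone.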

Join us in Vegas for ISSA 2017 and stop by booth #917 to experience a live demo of EMMA, Brain Corp's autonomous floor care solution.