Groundlight Unveils Open-Source ROS Package, Revolutionizing Embodied AI for Robotics

New tool empowers robot builders to easily integrate visual intelligence into robotic systems, combining machine learning efficiency with human-level reliability.

SEATTLE--Groundlight, a pioneer in visual AI solutions, today announced the release of its open-source ROS package, accelerating the development of embodied AI in robotics. This innovative tool enables ROS2 developers to effortlessly incorporate advanced computer vision capabilities into their projects. By merging machine learning with real-time human oversight, Groundlight's ROS package makes robots more perceptive and adaptable to real-world environments. The open-source package is available here.

The classic computer vision (CV) workflow has long been a bottleneck in developing robust robotic systems. The conventional method is time-consuming and labor-intensive: gather a comprehensive dataset, meticulously label each image, train a model, evaluate its performance, and then iteratively refine the dataset and model to handle edge cases. This can take months for each use case. Even after all that, robots that encounter situations outside their training set can behave unpredictably, even dangerously, and fixing the problem requires developers to redo much of the model-development process.

Groundlight's open-source ROS package revolutionizes this approach by offering fast, customized edge models that run locally, tailored to each robot's specific needs. Because the system is backed by automatic cloud training and 24/7 human oversight, a robot that encounters an unfamiliar situation can simply pause and await human guidance, enabling real-time adaptation to unexpected scenarios. Human-verified responses typically arrive in under a minute and are immediately trained back into the model and pushed down to the edge, improving safety and reliability while dramatically speeding up the development process.
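
The pattern is straightforward to sketch. The example below uses Groundlight's Python SDK rather than the ROS interface itself, and the detector name, query text, and confidence threshold are illustrative assumptions, not details taken from this release:

```python
from groundlight import Groundlight

gl = Groundlight()  # reads GROUNDLIGHT_API_TOKEN from the environment

# Hypothetical detector for illustration; the name, query, and
# threshold are assumptions, not taken from the ROS package.
detector = gl.get_or_create_detector(
    name="path-clear",
    query="Is the path ahead of the robot clear of obstacles?",
    confidence_threshold=0.9,
)

def safe_to_proceed(frame_path: str) -> bool:
    # ask_confident blocks until the answer meets the detector's
    # confidence threshold, escalating to a human reviewer when the
    # edge model is unsure -- the "pause and await guidance" behavior.
    iq = gl.ask_confident(detector, image=frame_path)
    return iq.result.label == "YES"

if not safe_to_proceed("frame.jpg"):
    print("Holding position until the scene is confirmed clear.")
```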

"Our ROS package gives reliable eyes to embodied AI systems," said Leo Dirac, CTO of Groundlight. "Modern LLMs are too slow and expensive for direct robotic control, and often fail at simple visual tasks. We combine fast edge models with human oversight, enabling robots to see and understand their environment efficiently and reliably."

The Groundlight ROS package allows developers to ask binary questions about images in natural language. Queries are first processed by the current ML model, with high-confidence answers provided immediately. Low-confidence cases are escalated to human reviewers for real-time responses. This human-in-the-loop approach ensures reliability while continuously improving the underlying ML model without manual retraining cycles.
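
In SDK terms, a minimal binary query might look like the following sketch; the detector name and question below are invented for illustration and do not come from the release:

```python
from groundlight import Groundlight

gl = Groundlight()

# Illustrative detector; the name and question are assumptions.
detector = gl.get_or_create_detector(
    name="door-open",
    query="Is the door fully open?",  # natural-language yes/no question
)

# ask_ml returns the model's first answer without waiting for review.
iq = gl.ask_ml(detector, image="door.jpg")
print(iq.result.label, iq.result.confidence)

# Low-confidence queries are escalated automatically; fetching the same
# image query later returns the human-verified answer once it arrives.
iq = gl.get_image_query(iq.id)
print(iq.result.label, iq.result.confidence)
```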

Robotics pioneer Sarah Osentoski, Ph.D., commented, "Groundlight's ROS package is a game changer for teams building robotic systems for unstructured environments. It makes human fallback simple, and automatically incorporates exception handling into ML models, improving efficiency over time."

This release marks a significant milestone in robotics and computer vision. By combining machine-learning speed with the reliability of human oversight, Groundlight enables developers to easily create more intelligent, adaptive robotic systems. Whether for industrial automation, research, or innovative applications, this package paves the way for the next generation of visually aware robots.

Groundlight is a leading innovator in visual AI solutions, dedicated to making computer vision more accessible and reliable for robotics and automation applications. By combining cutting-edge machine learning with human intelligence, Groundlight empowers developers to create smarter, more adaptable systems that thrive in real-world environments.

Source: Business Wire

