The Intelligence Layer: EmbodiedAgents Roadmap
Q1 2026 (Happening Now): Data Collection & State Aggregation
Universal Data Collection: Add standardized “Collect Data” hooks to all components to facilitate dataset generation for future model fine-tuning.
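As an illustration only, such a hook could wrap a component's inference call and append each input/output pair to a JSONL dataset; the class and function names below are assumptions for the sketch, not the EmbodiedAgents API.

```python
# Hypothetical sketch of a "Collect Data" hook; none of these names are
# taken from the EmbodiedAgents API.
import json
import time
from pathlib import Path
from typing import Any, Callable


class DataCollector:
    """Appends one JSON line per component invocation to a dataset file."""

    def __init__(self, dataset_path: str) -> None:
        self.path = Path(dataset_path)
        self.path.parent.mkdir(parents=True, exist_ok=True)

    def record(self, component: str, inputs: Any, outputs: Any) -> None:
        entry = {
            "timestamp": time.time(),
            "component": component,
            "inputs": inputs,
            "outputs": outputs,
        }
        with self.path.open("a") as f:
            f.write(json.dumps(entry) + "\n")


def with_data_collection(
    component_name: str,
    infer_fn: Callable[[Any], Any],
    collector: DataCollector,
) -> Callable[[Any], Any]:
    """Wrap a component's inference function so every call is logged."""

    def wrapped(inputs: Any) -> Any:
        outputs = infer_fn(inputs)
        collector.record(component_name, inputs, outputs)
        return outputs

    return wrapped


# Usage (hypothetical): transcribe = with_data_collection(
#     "SpeechToText", model.transcribe, DataCollector("datasets/stt.jsonl"))
```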
Extend Compute Architecture Support: Add a deployment harness for auto-compiling models to NPUs for components that utilize local models (Vision, STT).
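A minimal sketch of what the harness configuration might capture, assuming a per-target compile step; the backend identifiers and the stubbed compile call are placeholders, not a real vendor toolchain.

```python
# Illustrative sketch of a deployment-target config; the compile step is
# stubbed because it depends on the specific NPU vendor toolchain.
from dataclasses import dataclass


@dataclass
class DeploymentTarget:
    backend: str                 # e.g. "cpu", "gpu", or an NPU runtime id
    precision: str = "fp16"      # precision to compile/quantize for
    cache_dir: str = ".compiled_models"


def compile_for_target(model_path: str, target: DeploymentTarget) -> str:
    """Dispatch to a backend-specific compiler and return the artifact path."""
    artifact = f"{target.cache_dir}/{target.backend}_{target.precision}.bin"
    # vendor_toolchain.compile(model_path, artifact, precision=target.precision)
    return artifact
```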
Natural Language Based Task Composition: Enhance the LLM component to handle composite tasks specified in natural language through action lookup and chaining.
State Aggregation and Memory: Add a specialized component that leverages LLMs and Vector DBs to synthesize, store, and recall the Global Robot State (from Sugarcoat’s Roadmap) into semantically meaningful representations for use by the agent.
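A rough sketch of such a state-aggregation component, assuming a placeholder embed() call in place of a real encoder and a plain in-memory list in place of a Vector DB; all names are illustrative.

```python
# Hedged sketch of a state-aggregation component; the embed() stub and the
# store layout are assumptions for illustration only.
import math
from dataclasses import dataclass, field


def embed(text: str) -> list[float]:
    """Placeholder for an embedding model call (e.g. a local encoder)."""
    return [float(ord(c) % 7) for c in text[:8]]


def cosine(a: list[float], b: list[float]) -> float:
    n = min(len(a), len(b))
    dot = sum(a[i] * b[i] for i in range(n))
    na = math.sqrt(sum(x * x for x in a[:n])) or 1.0
    nb = math.sqrt(sum(x * x for x in b[:n])) or 1.0
    return dot / (na * nb)


@dataclass
class StateMemory:
    """Stores synthesized summaries of the global robot state."""
    entries: list[tuple[str, list[float]]] = field(default_factory=list)

    def store(self, raw_state: dict) -> None:
        # An LLM call would normally turn the raw state dict into a short,
        # semantically meaningful summary; here we just stringify it.
        summary = f"robot state snapshot: {raw_state}"
        self.entries.append((summary, embed(summary)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [summary for summary, _ in ranked[:k]]
```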
Q2 & Q3 2026: Simulation Extension, Multi-Agent Orchestration & Memory Architecture
Isaac Sim Extension: Launching a dedicated extension for NVIDIA Isaac Sim to allow developers to build, test, and validate EmbodiedAgents logic in high-fidelity virtual worlds.
Structured Decomposition and Verification: Add structured decomposition features to LLM/VLM output post-processing for verifiable reasoning traces and formal verification guarantees.
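As a hedged illustration, post-processing could parse model output into a typed plan and check each step against an allowed action set; the JSON schema and action names below are assumptions, not the library's output format.

```python
# Illustrative sketch of turning an LLM plan into a structured, checkable form.
import json
from dataclasses import dataclass

ALLOWED_ACTIONS = {"navigate_to", "pick", "place", "inspect"}


@dataclass
class PlanStep:
    action: str
    target: str


def parse_plan(llm_output: str) -> list[PlanStep]:
    """Expects a JSON list like [{"action": "pick", "target": "cup"}, ...]."""
    raw = json.loads(llm_output)
    return [PlanStep(step["action"], step["target"]) for step in raw]


def verify_plan(steps: list[PlanStep]) -> list[str]:
    """Return a list of violations; an empty list means the trace checks out."""
    violations = []
    for i, step in enumerate(steps):
        if step.action not in ALLOWED_ACTIONS:
            violations.append(f"step {i}: unknown action '{step.action}'")
        if not step.target:
            violations.append(f"step {i}: missing target")
    return violations


if __name__ == "__main__":
    plan = parse_plan('[{"action": "navigate_to", "target": "kitchen"},'
                      ' {"action": "pick", "target": "cup"}]')
    assert verify_plan(plan) == []
```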
New Memory Primitive: Add a new long-term memory primitive that goes beyond storing semantic vectors (inefficient) and leverages learned graph structures for hierarchical spatio-temporal organization. No comparable abstraction currently exists, and it would become significant for long-running tasks in more generalized environments.
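A toy sketch of the shape such a primitive might take, with nodes carrying a place, a timestamp, and a parent for hierarchical roll-up; the learned aspect of the graph structure is not modeled here and all names are illustrative.

```python
# Minimal sketch of a graph-structured long-term memory; assumptions only.
import time
from dataclasses import dataclass, field


@dataclass
class MemoryNode:
    node_id: str
    description: str           # e.g. "placed the cup on the kitchen table"
    place: str                 # spatial anchor, e.g. "kitchen"
    timestamp: float = field(default_factory=time.time)
    parent: str | None = None  # coarser node, e.g. an episode or a room


class GraphMemory:
    def __init__(self) -> None:
        self.nodes: dict[str, MemoryNode] = {}
        self.edges: dict[str, set[str]] = {}  # adjacency between node ids

    def add(self, node: MemoryNode) -> None:
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, set())
        if node.parent:
            self.link(node.parent, node.node_id)

    def link(self, a: str, b: str) -> None:
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def neighbours(self, node_id: str, place: str | None = None) -> list[MemoryNode]:
        """Recall nodes connected to node_id, optionally filtered by place."""
        found = [self.nodes[n] for n in self.edges.get(node_id, set()) if n in self.nodes]
        return [n for n in found if place is None or n.place == place]
```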
Collaborative Multi-Agent Discourse: Enabling multiple robots to share mission goals, exchange semantic memories, and coordinate complex multi-agent tasks through new communication protocols.
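One possible message shape for such a protocol, sketched with the transport left abstract; the field names and serialization are assumptions, not a defined wire format.

```python
# Hedged sketch of an inter-robot message schema; transport-agnostic.
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class AgentMessage:
    sender: str                # robot id
    kind: str                  # "goal", "memory", or "status"
    payload: dict              # e.g. {"goal": "map the second floor"}
    timestamp: float = 0.0

    def to_wire(self) -> bytes:
        """Serialize for whatever transport the deployment uses (DDS, MQTT, ...)."""
        data = asdict(self)
        data["timestamp"] = data["timestamp"] or time.time()
        return json.dumps(data).encode("utf-8")

    @staticmethod
    def from_wire(raw: bytes) -> "AgentMessage":
        return AgentMessage(**json.loads(raw.decode("utf-8")))


# Example: robot_a shares a mission goal that robot_b can merge into its planner.
msg = AgentMessage(sender="robot_a", kind="goal",
                   payload={"goal": "inspect loading dock", "priority": 2})
decoded = AgentMessage.from_wire(msg.to_wire())
assert decoded.payload["goal"] == "inspect loading dock"
```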