<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Robot Intelligence on EAI² — Embodied AI Intelligence</title>
    <link>https://hub.eai2.cloud/categories/robot-intelligence/</link>
    <description>Recent content in Robot Intelligence on EAI² — Embodied AI Intelligence</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Thu, 16 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://hub.eai2.cloud/categories/robot-intelligence/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>VLA Models Compared: GR00T N1 vs pi0 vs OpenVLA in 2026</title>
      <link>https://hub.eai2.cloud/posts/vla-models-compared-2026/</link>
      <pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://hub.eai2.cloud/posts/vla-models-compared-2026/</guid>
      <description>A detailed comparison of the three leading Vision-Language-Action foundation models for robotics in 2026 — NVIDIA GR00T N1, Physical Intelligence pi0, and the open-source OpenVLA.</description>
    </item>
    <item>
      <title>Sim-to-Real Transfer in 2026: Why Your Robot Policy Breaks in the Real World (And How to Fix It)</title>
      <link>https://hub.eai2.cloud/posts/sim-to-real-transfer-guide/</link>
      <pubDate>Mon, 13 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://hub.eai2.cloud/posts/sim-to-real-transfer-guide/</guid>
      <description>A practical guide to bridging the sim-to-real gap — why policies trained in simulation fail on real robots, and the proven techniques to fix it.</description>
    </item>
    <item>
      <title>Neurosymbolic VLA: Why Smaller Models Are Beating Giant Neural Networks at Robot Control</title>
      <link>https://hub.eai2.cloud/posts/neurosymbolic-vla-explained/</link>
      <pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://hub.eai2.cloud/posts/neurosymbolic-vla-explained/</guid>
      <description>A deep dive into the neurosymbolic VLA paradigm — where symbolic planning meets neural control, achieving 95% success rates with 2B parameters while 7B pure VLA models struggle at 34%.</description>
    </item>
    <item>
      <title>Diffusion Policy Explained: How Image Generation Tech Powers Robot Control</title>
      <link>https://hub.eai2.cloud/posts/diffusion-policy-explained/</link>
      <pubDate>Sat, 04 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://hub.eai2.cloud/posts/diffusion-policy-explained/</guid>
      <description>How diffusion models — the same technology behind Stable Diffusion and DALL-E — are being used to generate robot actions, and why they outperform traditional approaches.</description>
    </item>
  </channel>
</rss>
