Visual-Language-Guided Task Planning for Horticultural Robots

Jose Cuaran*

Aditya Potnis

Naveen Kumar Uppalapati

University of Illinois Urbana-Champaign (UIUC)

In Submission





Abstract

Crop monitoring is essential for precision agriculture, but current systems lack high-level reasoning. We introduce a novel, modular framework that uses a Visual Language Model (VLM) to guide robotic task planning, interleaving input queries with action primitives. We contribute a comprehensive benchmark of short- and long-horizon crop monitoring tasks in monoculture and polyculture environments. Our main results show that VLMs perform robustly on short-horizon tasks, with success rates comparable to humans, but their performance degrades significantly on challenging long-horizon tasks. Critically, the system fails when it must rely on noisy semantic maps, exposing a key limitation in how current VLMs ground context for sustained robotic operations. This work offers a deployable framework and critical insights into the capabilities and shortcomings of VLMs for complex agricultural robotics.
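To make the planning loop concrete, the sketch below illustrates one way a VLM could be interleaved with action primitives, as described in the abstract. This is a hypothetical illustration, not the authors' implementation: the names (query_vlm, ACTION_PRIMITIVES, run_monitoring_task), the prompt/response structure, and the stop condition are all assumed for the example, and the VLM call is stubbed out.

```python
# Minimal sketch of a VLM-guided planning loop that interleaves perception
# queries with action primitives. All names and the response format are
# hypothetical placeholders; a real system would call an actual VLM API.

ACTION_PRIMITIVES = {
    "move_to_plant": lambda target: print(f"Navigating to {target}"),
    "capture_image": lambda target: print(f"Imaging {target}"),
    "report_status": lambda target: print(f"Reporting status of {target}"),
}


def query_vlm(observation, task, history):
    """Placeholder for a VLM query: given the current observation, the
    monitoring task, and the action history, return the next primitive
    and its target. Stubbed so the sketch runs end to end."""
    if not history:
        return "move_to_plant", "plant_row_3"
    return "report_status", "plant_row_3"


def run_monitoring_task(task, get_observation, max_steps=10):
    """Alternate VLM queries with primitive execution until the task ends."""
    history = []
    for _ in range(max_steps):
        action, target = query_vlm(get_observation(), task, history)
        ACTION_PRIMITIVES[action](target)
        history.append((action, target))
        if action == "report_status":  # hypothetical terminal primitive
            break
    return history


if __name__ == "__main__":
    run_monitoring_task("Check plant_row_3 for pest damage",
                        get_observation=lambda: None)
```

In a deployed system, the stubbed query would carry the robot's camera image and a structured prompt, and the returned primitive would be validated against the available action set before execution.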



Figure 1. Simulation environments used to evaluate the proposed system.



Demo Video