arxiv:2303.14347

Vision-based Vineyard Navigation Solution with Automatic Annotation

Published on Mar 25, 2023
Authors:

Abstract

Autonomous navigation is the key to achieving the full automation of agricultural research and production management (e.g., disease management and yield prediction) using agricultural robots. In this paper, we introduced a vision-based autonomous navigation framework for agricultural robots in trellised cropping systems such as vineyards. To achieve this, we proposed a novel learning-based method to estimate the path traversability heatmap directly from an RGB-D image and subsequently convert the heatmap into a preferred traversal path. An automatic annotation pipeline was developed to form the training dataset by projecting RTK GPS paths, collected during the first setup in a vineyard, onto the corresponding RGB-D images as ground-truth path annotations, allowing fast model training and fine-tuning without costly human annotation. The trained path detection model was used to develop a full navigation framework consisting of row tracking and row switching modules, enabling a robot to traverse within a crop row and transition between crop rows to cover an entire vineyard autonomously. Extensive field trials were conducted in three different vineyards to demonstrate that the developed path detection model and navigation framework provided a cost-effective, accurate, and robust autonomous navigation solution in the vineyard and could generalize to unseen vineyards with stable performance.

AI-generated summary

A vision-based autonomous navigation framework for agricultural robots in vineyards uses a learning-based method to estimate path traversability, which drives a navigation system for row tracking and row switching, demonstrated through field trials.
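
The following is a minimal sketch of the automatic annotation idea described in the abstract, assuming a pinhole camera model and that the RTK GPS path points have already been transformed into the camera frame; the function names, the Gaussian label format, and all parameters below are illustrative assumptions, not details taken from the paper.

import numpy as np

def project_path_to_image(path_cam, fx, fy, cx, cy, width, height):
    """Project 3-D path points (camera frame, Z forward) to pixel coordinates."""
    pts = np.asarray(path_cam, dtype=float)
    pts = pts[pts[:, 2] > 0.0]                      # keep points in front of the camera
    u = fx * pts[:, 0] / pts[:, 2] + cx             # standard pinhole projection
    v = fy * pts[:, 1] / pts[:, 2] + cy
    uv = np.stack([u, v], axis=1)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return uv[inside].round().astype(int)           # (M, 2) integer pixel coordinates

def path_pixels_to_heatmap(uv, width, height, sigma=8.0):
    """Rasterize projected path pixels into a soft ground-truth heatmap by
    placing a Gaussian blob at each path pixel (an assumed label format)."""
    heat = np.zeros((height, width), dtype=float)
    ys, xs = np.mgrid[0:height, 0:width]
    for u, v in uv:
        heat = np.maximum(heat, np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2)))
    return heat

Under these assumptions, a model could then be trained to regress such a heatmap from the RGB-D input, and the predicted heatmap could be traced (e.g., by a per-row argmax or similar extraction) into the preferred traversal path that the abstract mentions.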
