LLM-TALE: LLM-Guided Task- and Affordance-Level Exploration in Reinforcement Learning

arXiv · Project page · ICRA 2026 · Python 3.10

Official code for the paper:

LLM-Guided Task- and Affordance-Level Exploration in Reinforcement Learning
Jelle Luijkx, Runyu Ma, Zlatan Ajanović, Jens Kober — ICRA 2026
[arXiv] [Project page]

Overview

LLM-TALE is a framework that uses an LLM's planning to directly steer RL exploration at two levels: a task level (which subgoal to pursue) and an affordance level (how to interact with the relevant object). Unlike prior approaches that assume optimal LLM-generated plans or rewards, LLM-TALE corrects suboptimality online and explores multimodal affordance-level plans without human supervision. This improves sample efficiency and success rates on robotic manipulation benchmarks and transfers zero-shot to a real robot.

Installation

Prerequisites: install uv

We recommend installing the llm-tale package with uv. If you don't have uv yet, install it by following its installation instructions.

Prerequisites: set up RLBench (not required for ManiSkill tasks)

Install CoppeliaSim:

# set env variables
export COPPELIASIM_ROOT=${HOME}/CoppeliaSim
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COPPELIASIM_ROOT
export QT_QPA_PLATFORM_PLUGIN_PATH=$COPPELIASIM_ROOT

wget https://downloads.coppeliarobotics.com/V4_1_0/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz
mkdir -p $COPPELIASIM_ROOT && tar -xf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz -C $COPPELIASIM_ROOT --strip-components 1
rm -rf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz

Make sure the environment variables above are set whenever you run RLBench experiments, and verify you can run RLBench headless by following these instructions.
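RLBench experiments will fail to start if these variables are missing, so it can be worth checking before a long run. The snippet below is a hypothetical helper (not part of the repository) that reports which of the three variables are unset in the current shell:

```shell
# Hypothetical helper (not part of the repo): report which CoppeliaSim
# variables are missing from the current shell before launching RLBench.
check_coppeliasim_env() {
  local var missing=0
  for var in COPPELIASIM_ROOT LD_LIBRARY_PATH QT_QPA_PLATFORM_PLUGIN_PATH; do
    # ${!var} is bash indirect expansion: the value of the variable named by $var
    if [ -z "${!var:-}" ]; then
      echo "missing: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_coppeliasim_env \
  && echo "CoppeliaSim environment looks set" \
  || echo "Re-run the exports above before starting RLBench experiments"
```

If any variable is reported as missing, re-run the exports above (or add them to your shell profile so they persist across sessions).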

Install llm-tale

Clone the repository:

git clone git@github.com:llm-tale/llm_tale.git
cd llm_tale

Create and activate the virtual environment:

uv venv --python 3.10
source .venv/bin/activate

Install the package:

uv pip install -e .

Verify the installation

pytest tests/test_tasks.py

Tasks

ManiSkill: PickCube, StackCube, PegInsert

RLBench: TakeLid, OpenDrawer, PutBox

Additional videos, including real-robot rollouts, are available on the project page.

Reproducing the results

Run the LLM-BC baseline:

bash scripts/run_llm_bc.sh

Run LLM-TALE:

bash scripts/run_llm_tale.sh
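To collect statistics over multiple runs, you may want to launch the script several times. A hypothetical wrapper like the one below can queue a small sweep; note that passing the seed as a script argument is an assumption — check the scripts for the options they actually accept:

```shell
# Hypothetical wrapper (not part of the repo): queue several runs of one of
# the scripts above. Passing the seed as an argument is an assumption --
# check the scripts for the options they actually accept.
run_sweep() {
  local script=$1; shift
  local seed
  for seed in "$@"; do
    echo "would launch: bash $script (seed $seed)"
    # bash "$script" "$seed"   # uncomment to actually launch each run
  done
}

run_sweep scripts/run_llm_tale.sh 0 1 2
```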

LLM code generation

To generate or inspect the code produced by the LLM, see the code_generation notebook under notebooks/.

Citation

If you find this work useful, please consider citing:

@inproceedings{luijkx2026llmtale,
  title     = {{LLM}-Guided Task- and Affordance-Level Exploration in Reinforcement Learning},
  author    = {Luijkx, Jelle and Ma, Runyu and Ajanovi{\'c}, Zlatan and Kober, Jens},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2026}
}

Acknowledgements

For the prompt structure, we took inspiration from DROC. Our TD3 and PPO agents are based on the SKRL examples.
