policy

hs25-rl-ppo

oberpierre · PyTorch


Overview

Name
hs25-rl-ppo
Author
oberpierre
Framework
PyTorch
License
unknown
Skill type
other
Evidence level
untested
Task description
A reinforcement learning framework that uses Proximal Policy Optimization (PPO) to train large language models on the Atari Breakout environment via text observations
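The card names PPO as the training algorithm but does not show the objective. As a minimal, dependency-free sketch of PPO's clipped surrogate objective (the standard formulation, not code from the hs25-rl-ppo repository, which presumably uses PyTorch tensors instead of Python floats):

```python
def ppo_clipped_objective(ratio, advantage, eps=0.2):
    # Per-sample clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A).
    # ratio = pi_new(a|s) / pi_old(a|s); advantage = estimated advantage A(s, a).
    clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)
    return min(ratio * advantage, clipped * advantage)

def ppo_loss(ratios, advantages, eps=0.2):
    # Negated batch mean, since optimizers minimize while PPO maximizes the objective.
    terms = [ppo_clipped_objective(r, a, eps) for r, a in zip(ratios, advantages)]
    return -sum(terms) / len(terms)
```

The clipping keeps the policy update close to the behavior policy: when the probability ratio leaves the `[1 - eps, 1 + eps]` band in the direction the advantage favors, the gradient through that sample is cut off.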

Spaces

Action space
other · 0-dim · 0Hz
Observation space
  • type: other
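The observation type is listed only as "other", and the actual text format hs25-rl-ppo feeds the language model is not documented here. Purely as a hypothetical illustration of what "text observations" of Breakout could look like, a game state might be serialized as an ASCII grid (all names and the layout below are assumptions, not the repository's format):

```python
def render_text_observation(ball_x, ball_y, paddle_x, width=10, height=6):
    # Hypothetical sketch: serialize a Breakout-like state as an ASCII grid
    # an LLM policy could read. "o" marks the ball, "=" the paddle.
    rows = []
    for y in range(height):
        row = ["."] * width
        if y == ball_y:
            row[ball_x] = "o"
        rows.append("".join(row))
    paddle_row = ["."] * width
    paddle_row[paddle_x] = "="
    rows.append("".join(paddle_row))
    return "\n".join(rows)
```

A call like `render_text_observation(3, 2, 5)` yields a 7-line grid with one ball marker and one paddle marker; the string would then be placed into the model's prompt at each step.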

Links

HuggingFace repo
none
Paper (arXiv)
none

Compatible robots

3 (+17 mentioned but not in catalog yet)

Compatible environments

0

No environments list hs25-rl-ppo yet.

Datasets that reference this policy

0

No datasets reference hs25-rl-ppo yet.