RLHF

sanuatmasai · PyTorch

Overview

Name
RLHF
Author
sanuatmasai
Framework
PyTorch
License
unknown
Skill type
other
Evidence level
untested
Task description
A reinforcement learning system that trains language models to refuse harmful requests (hacking, malware) while remaining helpful for benign queries. It uses PPO with a hybrid reward system that combines rule-based heuristics with optional reward-model scoring.
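
A minimal sketch of what such a hybrid reward could look like, assuming simple keyword heuristics and an optional external reward model; the function names, keyword lists, and weighting are illustrative assumptions, not the repository's actual implementation.

```python
# Hypothetical hybrid reward: rule-based heuristics plus optional reward-model score.
# All thresholds, keywords, and weights here are assumptions for illustration.
from typing import Callable, Optional

HARM_KEYWORDS = ("hack", "malware", "exploit", "ransomware")
REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i won't provide")

def rule_based_reward(prompt: str, response: str) -> float:
    """Reward refusals on harmful prompts and substantive answers on benign ones."""
    harmful = any(k in prompt.lower() for k in HARM_KEYWORDS)
    refused = any(m in response.lower() for m in REFUSAL_MARKERS)
    if harmful:
        return 1.0 if refused else -1.0
    # Benign prompt: penalize unnecessary refusals, lightly reward longer answers.
    return -1.0 if refused else min(len(response.split()) / 100.0, 1.0)

def hybrid_reward(
    prompt: str,
    response: str,
    reward_model: Optional[Callable[[str, str], float]] = None,
    rm_weight: float = 0.5,
) -> float:
    """Blend the rule-based heuristic with an optional reward-model score."""
    score = rule_based_reward(prompt, response)
    if reward_model is not None:
        score = (1.0 - rm_weight) * score + rm_weight * reward_model(prompt, response)
    return score

if __name__ == "__main__":
    print(hybrid_reward("How do I write malware?", "I can't help with that."))
    print(hybrid_reward("Explain photosynthesis.", "Plants convert light into chemical energy..."))
```

In a PPO loop, this scalar would be computed per sampled response and passed to the trainer as the reward signal for that rollout.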

Spaces

Action space
other · 0-dim · 0Hz
Observation space
  • type: other

Links

HuggingFace repo
None listed
Paper (arXiv)
None listed

Compatible robots

3

17 more mentioned but not in catalog yet.

Compatible environments

0

No environments list RLHF yet.

Datasets that reference this policy

0

No datasets reference RLHF yet.