PolicyCliff
SafeAGI-01 · PyTorch
Overview
Name
PolicyCliff
Author
SafeAGI-01
Framework
PyTorch
License
unknown
Skill type
navigation
Evidence level
untested
Task description
The Policy Cliff: A Theoretical Analysis of Reward-Policy Maps in Large Language Models. A rigorous mathematical framework analyzing the stability of the reward–policy mapping in RL-trained LLMs.
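The "cliff" the description alludes to can be illustrated with a minimal sketch (this is an assumption for illustration, not code from the paper): the unregularized reward-to-policy map (argmax) is discontinuous, so an arbitrarily small reward perturbation can flip the policy entirely, whereas the entropy-regularized (softmax) map changes smoothly.

```python
import torch

def softmax_policy(rewards: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    # Entropy-regularized optimal policy: pi(a) proportional to exp(r(a) / beta).
    return torch.softmax(rewards / beta, dim=-1)

def greedy_policy(rewards: torch.Tensor) -> torch.Tensor:
    # Unregularized optimal policy: all probability mass on the argmax action.
    return torch.nn.functional.one_hot(rewards.argmax(-1), rewards.shape[-1]).float()

# Two reward vectors that differ only by a tiny swap at the top.
r1 = torch.tensor([1.00, 0.99, 0.0])
r2 = torch.tensor([0.99, 1.00, 0.0])

# Total variation distance between the induced policies.
tv_greedy = 0.5 * (greedy_policy(r1) - greedy_policy(r2)).abs().sum()
tv_soft = 0.5 * (softmax_policy(r1) - softmax_policy(r2)).abs().sum()

# The greedy policy jumps discontinuously (TV = 1.0); the softmax policy barely moves.
print(tv_greedy.item(), tv_soft.item())
```

Here a 0.01 reward change moves the greedy policy by the maximum possible distance while the regularized policy shifts by well under 1%, which is the kind of (in)stability a reward-policy map analysis formalizes.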
Spaces
Action space
other · 0-dim · 0Hz
Observation space
- type: other
Links
HuggingFace repo
null
Paper (arXiv)
null
Compatible robots
20 robots listed, none in seed:
- anybotics-anymal-c
- aloha
- google-barkour-vb
- boston-dynamics-spot
- franka-fr3
- google-barkour-v0
- agilex-piper
- berkeley-humanoid
- bitcraze-crazyflie-2
- anybotics-anymal-b
- agility-cassie
- arx-l5
- booster-t1
- franka-emika-panda
- franka-fr3-v2
- dynamixel-2r
- flexiv-rizon4
- assets
- apptronik-apollo
- fourier-n1
Compatible environments
No environments list PolicyCliff yet.
Datasets that reference this policy
No datasets reference PolicyCliff yet.