policy
Fine-Tuning-T5-Flan-with-Reinforcement-Learning-and-Toxicity-Aware-Reward-Models
ShivamTarte · PyTorch
Overview
Name
Fine-Tuning-T5-Flan-with-Reinforcement-Learning-and-Toxicity-Aware-Reward-Models
Author
ShivamTarte
Framework
PyTorch
License
unknown
Skill type
other
Evidence level
untested
Task description
Fine-tuned FLAN-T5 using TRL's PPOTrainer with a RoBERTa-based toxicity classifier as the reward model. A custom dataset of toxic prompts guided the model toward less harmful outputs. GPU training kept the run efficient, yielding a model that generates contextually relevant, low-toxicity text by combining RL with toxicity-aware reward modeling.
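The core idea in the description above is turning a toxicity classifier's output into a scalar reward that PPO can maximize. A minimal sketch of that reward shaping, assuming the classifier emits logits over [non-toxic, toxic] classes (the index ordering and function names here are illustrative, not taken from the actual repo):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def toxicity_reward(logits, not_toxic_index=0):
    """Reward = probability mass the toxicity classifier assigns to the
    non-toxic class, so PPO pushes generations toward low toxicity."""
    probs = softmax(logits)
    return probs[not_toxic_index]

# Classifier strongly favors the non-toxic class -> reward near 1
print(round(toxicity_reward([3.0, -2.0]), 3))
```

In the actual training loop, each generated response would be scored this way and the per-sample rewards passed to `PPOTrainer.step` alongside the query and response tensors.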
Spaces
Action space
other · 0-dim · 0 Hz
Observation space
- type: other
Links
HuggingFace repo
null
Paper (arXiv)
null
Compatible robots
20 robots, each flagged "not in seed":
- anybotics-anymal-c
- aloha
- google-barkour-vb
- boston-dynamics-spot
- franka-fr3
- google-barkour-v0
- agilex-piper
- berkeley-humanoid
- bitcraze-crazyflie-2
- anybotics-anymal-b
- agility-cassie
- arx-l5
- booster-t1
- franka-emika-panda
- franka-fr3-v2
- dynamixel-2r
- flexiv-rizon4
- assets
- apptronik-apollo
- fourier-n1
Compatible environments
No environments list Fine-Tuning-T5-Flan-with-Reinforcement-Learning-and-Toxicity-Aware-Reward-Models yet.
Datasets that reference this policy
No datasets reference Fine-Tuning-T5-Flan-with-Reinforcement-Learning-and-Toxicity-Aware-Reward-Models yet.