
Fine-Tuning-T5-Flan-with-Reinforcement-Learning-and-Toxicity-Aware-Reward-Models

ShivamTarte · PyTorch


Overview

Name
Fine-Tuning-T5-Flan-with-Reinforcement-Learning-and-Toxicity-Aware-Reward-Models
Author
ShivamTarte
Framework
PyTorch
License
unknown
Skill type
other
Evidence level
untested
Task description
Fine-tuned T5-Flan using TRL's PPOTrainer with a RoBERTa-based toxicity classifier as the reward model. A custom toxic-prompt dataset guided the model toward reducing harmful outputs. Training on GPU kept fine-tuning efficient, yielding a model that generates contextually relevant, low-toxicity text by combining RL with a toxicity-aware reward signal.
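The reward signal described above can be sketched in plain Python. This assumes the toxicity classifier yields a probability `p_toxic` in [0, 1] per generated response; the function name and the linear scaling are illustrative assumptions, not the author's exact code.

```python
def toxicity_reward(p_toxic: float) -> float:
    """Map a classifier's toxicity probability to a scalar PPO reward.

    Higher reward for less toxic text: r = 1 - 2*p_toxic gives
    +1 for clearly safe text (p_toxic = 0) and -1 for clearly toxic
    text (p_toxic = 1). The linear mapping is an assumption; TRL's
    PPO training loop only needs one scalar reward per response.
    """
    if not 0.0 <= p_toxic <= 1.0:
        raise ValueError("p_toxic must be a probability in [0, 1]")
    return 1.0 - 2.0 * p_toxic


# One reward per generated response in a batch of classifier outputs.
rewards = [toxicity_reward(p) for p in (0.0, 0.5, 0.9)]
```

In a full pipeline, each reward would be passed (as a tensor) to `PPOTrainer.step` alongside the prompt and response token IDs; the exact call signature depends on the TRL version used.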

Spaces

Action space
other · 0-dim · 0Hz
Observation space
  • type: other

Links

HuggingFace repo
null
Paper (arXiv)
null

Compatible robots

20

Compatible environments

0

No environments are listed for Fine-Tuning-T5-Flan-with-Reinforcement-Learning-and-Toxicity-Aware-Reward-Models yet.

Datasets that reference this policy

0

No datasets reference Fine-Tuning-T5-Flan-with-Reinforcement-Learning-and-Toxicity-Aware-Reward-Models yet.