policy

Gemma-Alignment-Paradigms

sanjay-ks23 · PyTorch


Overview

Name
Gemma-Alignment-Paradigms
Author
sanjay-ks23
Framework
PyTorch
License
MIT
Skill type
other
Evidence level
untested
Task description
A comparative study of aligning Google's open-source Gemma models under three paradigms: (1) RL-only fine-tuning with GRPO/PPO/DPO and a custom reward model, (2) SFT+PEFT fine-tuning based on LoRA/QLoRA, and (3) a staged pipeline of SFT+PEFT (LoRA/QLoRA) followed by reinforcement learning.
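The SFT+PEFT paradigm hinges on LoRA: the pretrained weights stay frozen while a small low-rank update is trained. As a rough illustration of that idea (this is a generic PyTorch sketch, not code from this repository; the `LoRALinear` name and the rank/alpha values are assumptions for the example):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # freeze the pretrained weights
        # Low-rank factors: in_features -> r -> out_features
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)       # zero delta at init: output = base output
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

layer = LoRALinear(nn.Linear(64, 64))
x = torch.randn(2, 64)
# At initialization the LoRA delta is zero, so the wrapped layer matches the base.
assert torch.allclose(layer(x), layer.base(x))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable}/{total}")
```

Only the two low-rank factors are trainable, which is why LoRA/QLoRA makes fine-tuning Gemma-scale models tractable; in practice this wrapping is handled by a PEFT library rather than written by hand.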

Spaces

Action space
other · 0-dim · 0Hz
Observation space
  • type: other

Links

HuggingFace repo
null
Paper (arXiv)
null

Compatible robots

20

Compatible environments

0

No environments list Gemma-Alignment-Paradigms yet.

Datasets that reference this policy

0

No datasets reference Gemma-Alignment-Paradigms yet.