policy
Gemma-Alignment-Paradigms
sanjay-ks23 · PyTorch
Overview
Name
Gemma-Alignment-Paradigms
Author
sanjay-ks23
Framework
PyTorch
License
MIT
Skill type
other
Evidence level
untested
Task description
A comparative study on aligning Google's open-source Gemma models using three paradigms: (1) RL-only fine-tuning with GRPO/PPO/DPO and a custom reward model, (2) SFT with PEFT-based fine-tuning (LoRA/QLoRA), and (3) a staged SFT+PEFT (LoRA/QLoRA) pipeline followed by reinforcement learning.
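The LoRA adaptation at the core of paradigms (2) and (3) can be sketched in plain PyTorch (the card's listed framework). This is a minimal illustration of the low-rank-adapter idea, not the repository's actual implementation; the layer sizes, rank, and scaling below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update (LoRA).

    Illustrative sketch: rank and alpha are assumed values, not taken
    from the Gemma-Alignment-Paradigms repository.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the adapter trains.
        for p in self.base.parameters():
            p.requires_grad = False
        # Low-rank factors: A is small-random, B starts at zero so the
        # adapted layer initially reproduces the base layer exactly.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scaling * B A x
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Usage: wrap a projection layer and check that only the adapter is trainable.
layer = LoRALinear(nn.Linear(64, 64), r=8)
x = torch.randn(2, 64)
out = layer(x)
```

Because `lora_B` is initialized to zero, the wrapped layer's output matches the frozen base layer at the start of fine-tuning, which is what makes staged pipelines (SFT first, RL afterwards) start from an unperturbed model.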
Spaces
Action space
other · 0-dim · 0Hz
Observation space
- type: other
Links
HuggingFace repo
null
Paper (arXiv)
null
Compatible robots
20 robots, all flagged "not in seed":
- anybotics-anymal-c
- aloha
- google-barkour-vb
- boston-dynamics-spot
- franka-fr3
- google-barkour-v0
- agilex-piper
- berkeley-humanoid
- bitcraze-crazyflie-2
- anybotics-anymal-b
- agility-cassie
- arx-l5
- booster-t1
- franka-emika-panda
- franka-fr3-v2
- dynamixel-2r
- flexiv-rizon4
- assets
- apptronik-apollo
- fourier-n1
Compatible environments
No environments list Gemma-Alignment-Paradigms yet.
Datasets that reference this policy
No datasets reference Gemma-Alignment-Paradigms yet.