Overview
Name
hh-rlhf
Source
Anthropic
Episodes
0
Robot count
0
Format
json
Description
Dataset Card for HH-RLHF
Dataset Summary
This repository provides access to two different kinds of data:
Human preference data about helpfulness and harmlessness from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. These data are meant to train preference (or reward) models for subsequent RLHF training. These data are not meant for supervised training of dialogue agents. Training dialogue agents on these data is likely to lead… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/hh-rlhf.
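The summary notes these (chosen, rejected) pairs are meant for training preference (reward) models, not for supervised dialogue training. As a rough illustration only (not part of this dataset card), the pairwise Bradley–Terry-style loss such reward-model training typically minimizes can be sketched in plain Python, using hypothetical reward scores for one pair:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Pairwise preference loss commonly used for reward models:
    #   -log sigmoid(r_chosen - r_rejected)
    # It is small when the model scores the chosen response above
    # the rejected one, and large otherwise.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical reward scores for one (chosen, rejected) pair;
# real scores would come from a trained reward model.
loss = preference_loss(1.2, -0.3)
```

In practice each record's "chosen" and "rejected" conversation texts are scored by the reward model, and this loss is averaged over a batch of pairs; the scores above are made up for illustration.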
Robots used
None
Links
HuggingFace dataset