policy
CryptoOBPretraining
qhawkins · PyTorch
Overview
Name
CryptoOBPretraining
Author
qhawkins
Framework
PyTorch
License
unknown
Skill type
other
Evidence level
untested
Task description
A high-performance framework for pre-training transformer models on cryptocurrency order book data using NVIDIA's Transformer Engine with FP8 precision. This project implements an end-to-end pipeline from raw market data processing to distributed transformer training. The pre-trained transformer serves as a foundation for downstream fine-tuning.
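The project's own training code is not reproduced on this card. As a rough illustration of FP8 pre-training in PyTorch with NVIDIA's Transformer Engine, here is a minimal sketch; the layer sizes, batch shapes, and recipe settings are illustrative assumptions, not values taken from the repository:

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# Illustrative dimensions for flattened order-book snapshots (not from the repo).
seq_len, batch, hidden = 128, 32, 512

# One Transformer Engine encoder layer; all sizes are placeholders.
layer = te.TransformerLayer(
    hidden_size=hidden,
    ffn_hidden_size=4 * hidden,
    num_attention_heads=8,
).cuda()

# Delayed-scaling FP8 recipe (HYBRID = E4M3 forward, E5M2 backward).
fp8_recipe = DelayedScaling(
    fp8_format=Format.HYBRID,
    amax_history_len=16,
    amax_compute_algo="max",
)

# Dummy input in (seq, batch, hidden) layout, the layer's default format.
x = torch.randn(seq_len, batch, hidden, device="cuda")

# Matmuls inside the layer run in FP8 on supported GPUs (Hopper or newer).
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(x)

out.float().mean().backward()
```

For the distributed part of the pipeline, such a layer would typically be wrapped in torch.nn.parallel.DistributedDataParallel and launched with torchrun; the exact setup used by the project is not specified on this card.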
Spaces
Action space
other · 0-dim · 0Hz
Observation space
- type: other
Links
HuggingFace repo
null
Paper (arXiv)
null
Compatible robots
3 (+17 mentioned but not in catalog yet)
Compatible environments
0 (no environments list CryptoOBPretraining yet)
Datasets that reference this policy
0 (no datasets reference CryptoOBPretraining yet)