policy
pulse-rl
jam5991 · PyTorch
Overview
Name
pulse-rl
Author
jam5991
Framework
PyTorch
License
MIT
Skill type
other
Evidence level
untested
Task description
An offline reinforcement learning (RL) framework for dynamic fan engagement. It optimizes the timing and type of micro-betting incentives during live sports broadcasts by modeling user fatigue and emotional state, with the goal of maximizing long-term user lifetime value (LTV).
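The description above amounts to a sequential decision problem: the observation encodes the user's current state (fatigue, emotional state), the action is which incentive, if any, to surface, and the reward is an LTV proxy. Since the card lists no repo, the sketch below is purely illustrative; the feature layout, action set, network shape, and all names (`QNetwork`, `select_incentive`) are assumptions, not the pulse-rl API.

```python
import torch
import torch.nn as nn

# Hypothetical framing of the problem stated in the task description.
# Feature and action meanings are assumptions for illustration only.
N_OBS = 4      # e.g. fatigue score, emotional-state signal, session length, recency
N_ACTIONS = 3  # e.g. no incentive, small micro-bet offer, large micro-bet offer


class QNetwork(nn.Module):
    """Maps a user-state observation to one Q-value per incentive action."""

    def __init__(self, n_obs: int = N_OBS, n_actions: int = N_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def select_incentive(q_net: nn.Module, obs: torch.Tensor) -> int:
    """Greedy deployment-time choice: which incentive (if any) to show now."""
    with torch.no_grad():
        return int(q_net(obs).argmax(dim=-1).item())


q_net = QNetwork()
obs = torch.tensor([[0.7, 0.2, 0.5, 0.1]])  # one synthetic user state
action = select_incentive(q_net, obs)
```

In a genuinely offline setting, `QNetwork` would be trained from a fixed log of past interactions (e.g. with a conservative objective such as CQL) rather than by interacting with live users; only the greedy selection step runs at serving time.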
Spaces
Action space
other · 0-dim · 0Hz
Observation space
- type: other
Links
HuggingFace repo
null
Paper (arXiv)
null
Compatible robots
20 robots, none in seed:
- anybotics-anymal-c
- aloha
- google-barkour-vb
- boston-dynamics-spot
- franka-fr3
- google-barkour-v0
- agilex-piper
- berkeley-humanoid
- bitcraze-crazyflie-2
- anybotics-anymal-b
- agility-cassie
- arx-l5
- booster-t1
- franka-emika-panda
- franka-fr3-v2
- dynamixel-2r
- flexiv-rizon4
- assets
- apptronik-apollo
- fourier-n1
Compatible environments
No environments list pulse-rl yet.
Datasets that reference this policy
No datasets reference pulse-rl yet.