Sim Medium Grasping · metric varies

Clebsch-Gordan Transformer: Fast and Global Equivariant Attention

Description

The global attention mechanism is one of the keys to the success of the transformer architecture, but it incurs quadratic computational cost in the number of tokens. Equivariant models, by contrast, leverage the underlying geometric structure of problem instances and often achieve superior accuracy in physical, biochemical, computer-vision, and robotics tasks, at the cost of additional compute. As a result, existing equivariant transformers only support low-order eq
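As a rough illustration of the quadratic cost mentioned above (not the paper's method), a plain single-head global self-attention layer over n tokens materializes an n × n score matrix. The sketch below uses NumPy with arbitrary illustrative dimensions:

```python
import numpy as np

def global_attention(x, Wq, Wk, Wv):
    """Plain single-head global self-attention.

    x: (n, d) token features; Wq, Wk, Wv: (d, d) projection weights.
    The score matrix is (n, n), hence O(n^2) time and memory in tokens.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[1])          # (n, n): quadratic in n
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # (n, d) output features

rng = np.random.default_rng(0)
n, d = 128, 16                                      # illustrative sizes
x = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = global_attention(x, Wq, Wk, Wv)
print(out.shape)  # (128, 16)
```

Doubling n quadruples the score-matrix size, which is the scaling bottleneck that sub-quadratic equivariant attention designs aim to avoid.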

Source

http://arxiv.org/abs/2509.24093v1