Optimizes the FlashAttention algorithm with fused exponential-multiplication hardware operators (ExpMul), achieving a 28.8% reduction in area and a 17.6% reduction in power consumption in 28nm ASIC technology, significantly improving the efficiency of Transformer models.
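
For context, the exp-then-multiply pattern that a fused ExpMul operator targets shows up in FlashAttention's online-softmax inner loop, where each exponential is immediately followed by a multiplication. The sketch below is a minimal NumPy illustration of that loop (tile size, variable names, and structure are illustrative assumptions, not taken from the paper's hardware design):

```python
import numpy as np

def flash_attention_single_head(Q, K, V, tile=64):
    """Tiled attention with an online softmax (FlashAttention-style).

    The np.exp(...) calls followed immediately by a multiply are the
    exp+mul pairs that a fused ExpMul unit could execute as one operation.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros((n, d))
    for i in range(0, n, tile):
        q = Q[i:i + tile]                      # query tile
        m = np.full(q.shape[0], -np.inf)       # running row maximum
        l = np.zeros(q.shape[0])               # running softmax denominator
        acc = np.zeros((q.shape[0], d))        # running weighted sum of V
        for j in range(0, n, tile):
            s = (q @ K[j:j + tile].T) * scale  # score tile
            m_new = np.maximum(m, s.max(axis=1))
            # exp followed by multiply: rescale the previous accumulator ...
            alpha = np.exp(m - m_new)
            acc = acc * alpha[:, None]
            l = l * alpha
            # ... and weight the new value tile (another exp+mul pair)
            p = np.exp(s - m_new[:, None])
            acc = acc + p @ V[j:j + tile]
            l = l + p.sum(axis=1)
            m = m_new
        out[i:i + tile] = acc / l[:, None]
    return out

# Quick check against plain softmax attention.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 64)) for _ in range(3))
s = Q @ K.T / np.sqrt(64)
p = np.exp(s - s.max(axis=-1, keepdims=True))
ref = (p / p.sum(axis=-1, keepdims=True)) @ V
assert np.allclose(flash_attention_single_head(Q, K, V), ref, atol=1e-6)
```

Because the exponential's output feeds straight into a multiply in both places, fusing the two into one hardware operator removes an intermediate result and the datapath stage that carried it, which is the source of the reported area and power savings.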