A detailed guide to normalization methods in deep learning, covering the principles, implementations, and PyTorch code examples for four techniques: Batch Norm, Layer Norm, Instance Norm, and Group Norm, to clarify which normalization strategy suits which scenario.
Read more »

A survey of quantization techniques for large language models, covering SmoothQuant, AWQ, LLM.int8, GPTQ, ZeroQuant, LUT-GEMM, SparseGPT, and other state-of-the-art methods, as well as the role of weight-only quantization in inference optimization.
Read more »

NVIDIA GPU sparse computing technology overview, including efficient GPU kernel implementations for N:M sparse weights, Apex N:M sparse support, structured sparsity optimization on Tensor Cores, and related papers and open source project resources.
Read more »

A deep dive into Stable Diffusion inference optimization, covering Flash Attention, fused normalization, mixed-layout computation, and inference memory optimization, achieving 0.76 s generation for a 512×512 image and outperforming TensorRT by 7.9%.
Read more »

Async Copy Concepts

  • Conventional path for loading data from global memory into shared memory: [global mem] -> [L2 cache] -> [L1 cache] -> [register] -> [shared mem]
  • cp.async.cg (cache global, bypasses L1): [global mem] -> [L2 cache] -> [shared mem]
  • cp.async.ca (cache all levels): [global mem] -> [L2 cache] -> [L1 cache] -> [shared mem]
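
The .ca/.cg distinction can be made concrete with a small device-side sketch. This is an illustrative fragment (assumes an sm_80+ target; the function name and the 8-byte offsets are hypothetical, not from the original post), showing that .ca permits 4-, 8-, or 16-byte copies while .cg is restricted to 16 bytes:

```cuda
#include <cstdint>

// Illustrative only: issues one cp.async of each cache variant.
// cp.async.ca fills L1 as well as L2; cp.async.cg bypasses L1 and
// caches in L2 only, keeping streamed data out of the L1 working set.
__device__ void async_load_example(void* smem_dst, const char* gmem_src) {
    // PTX wants a shared-state-space address, not a generic pointer.
    uint32_t s = static_cast<uint32_t>(__cvta_generic_to_shared(smem_dst));

    asm volatile("cp.async.ca.shared.global [%0], [%1], 8;\n"
                 :: "r"(s), "l"(gmem_src));            // 4/8/16-byte copy allowed
    asm volatile("cp.async.cg.shared.global [%0], [%1], 16;\n"
                 :: "r"(s + 8), "l"(gmem_src + 8));    // 16-byte copy only
}
```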

Memory Copies Across Architectures

Ampere(SM80)

  • cp.async

    cp.async.ca.shared{::cta}.global{.level::cache_hint}{.level::prefetch_size}
                             [dst], [src], cp-size{, src-size}{, cache-policy} ;
    cp.async.cg.shared{::cta}.global{.level::cache_hint}{.level::prefetch_size}
                             [dst], [src], 16{, src-size}{, cache-policy} ;
    cp.async.ca.shared{::cta}.global{.level::cache_hint}{.level::prefetch_size}
                             [dst], [src], cp-size{, ignore-src}{, cache-policy} ;
    cp.async.cg.shared{::cta}.global{.level::cache_hint}{.level::prefetch_size}
                             [dst], [src], 16{, ignore-src}{, cache-policy} ;

    .level::cache_hint =    { .L2::cache_hint }
    .level::prefetch_size = { .L2::64B, .L2::128B, .L2::256B }
    cp-size =               { 4, 8, 16 }

    TS const* gmem_ptr    = &gmem_src;
    uint32_t smem_int_ptr = cast_smem_ptr_to_uint(&smem_dst);
    asm volatile("cp.async.ca.shared.global.L2::128B [%0], [%1], %2;\n"
        :: "r"(smem_int_ptr),
           "l"(gmem_ptr),
           "n"(sizeof(TS)));
  • Synchronization
    There are two synchronization mechanisms: async groups and mbarrier.

    // Establishes an ordering w.r.t previously issued cp.async instructions. Does not block.
    CUTE_HOST_DEVICE
    void
    cp_async_fence()
    {
    #if defined(CUTE_ARCH_CP_ASYNC_SM80_ENABLED)
      asm volatile("cp.async.commit_group;\n" ::);
    #endif
    }

    // Blocks until all but N previous cp.async.commit_group operations have committed.
    template <int N>
    CUTE_HOST_DEVICE
    void
    cp_async_wait()
    {
      if constexpr (N == 0) {
        asm volatile("cp.async.wait_all;\n" ::);
      } else {
        asm volatile("cp.async.wait_group %0;\n" :: "n"(N));
      }
    }

    cp.async.wait_all = cp.async.commit_group + cp.async.wait_group 0

Async groups are per-thread. Within a single async group, the asynchronous operations may complete in any order; across groups, completion follows the order in which the groups were committed.

https://www.zhihu.com/column/c_1669290006261825536

A complete guide to setting up a VPN proxy service on a VPS, with detailed steps and reference resources covering free domain registration, DNS configuration, V2ray/Trojan server setup, CDN traffic relaying, and client configuration.
Read more »

A guide to exporting PaddlePaddle models, using PaddleClas as an example: downloading pretrained models, exporting them with the export_model.py script, and related model deployment techniques.
Read more »

A guide to cuDNN configuration, covering deterministic-algorithm settings, nondeterministic algorithm selection, and other tuning strategies for improving deep learning performance on NVIDIA GPUs.
Read more »