Cross-platform FlashAttention-2 Triton implementation for Turing+ GPUs with custom configuration mode
FlashAttention for sliding window attention in Triton (fwd + bwd pass)
This repository contains multiple implementations of Flash Attention optimized with Triton kernels, showcasing progressive performance improvements through hardware-aware optimizations. The implementations range from basic block-wise processing to advanced techniques such as FP8 quantization and prefetching.
CUDA 12-first backend inference for Unsloth on Kaggle — Optimized for small GGUF models (1B-5B) on dual Tesla T4 GPUs (15GB each, SM 7.5)
This repo is my Nano-GPT speedrun playground: it started as a code-along of "Let's reproduce GPT-2 (124M)" and then moved on to further improvements.
HRM-sMoE LLM training toolkit.
A simple, naive Flash Attention implementation without optimizations, based on the original paper.
PyTorch implementation of YOLOv12 with Scaled Dot-Product Attention (SDPA) optimized by FlashAttention for fast and efficient object detection.
A 66M parameter decoder-only transformer language model implemented from scratch in PyTorch. Features a custom SentencePiece tokenizer, RoPE positional embeddings, SwiGLU feed-forward network, per-layer KV cache for efficient autoregressive inference, and a Svelte-based streaming chat interface.
Flash Attention (forward pass only) in 200 lines of CUDA.
16-step CUDA optimization of FlashAttention-2 achieving 99.2% of official performance on A100 — Ampere architecture
CUDA kernels for LLM inference: FlashAttention forward, Tensor Core GEMM, PyTorch bindings, and benchmarkable reference implementations.
White paper & reproducible benchmark suite for LLM inference optimization on AMD MI300X using ROCm 6.1
CUDA kernel optimization lab: GEMM, FlashAttention, quantization, and GPU performance learning.
ViT-L/16 inference optimization - 4-bit NF4 quantization, FlashAttention-2 vs SDPA benchmarking, 40.5% latency reduction
FlashAttention2 Analysis in Triton
A minimal CUDA implementation of FlashAttention v1 and v2.
From-scratch CUDA implementation of memory-efficient transformer attention with up to 9.5x speedup over a naive baseline, deployed end-to-end to Raspberry Pi 4.
A high-performance kernel implementation of multi-head attention using Triton. Focused on minimizing memory overhead and maximizing throughput for large-scale transformer layers. Includes clean tensor layouts, head-grouping optimisations, and ready-to-benchmark code you can plug into custom models.
A minimal, educational implementation of Ring Attention logic using custom OpenAI Triton kernels. Supports blockwise computation and online softmax merging.
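The blockwise computation and online softmax merging mentioned in the last entry is the core trick shared by the FlashAttention-style projects listed above: keys and values are processed in tiles while running softmax statistics are rescaled, so the full score matrix is never materialized. The following is a minimal NumPy sketch of that forward pass for illustration only; it is not taken from any of the listed repositories, and the function name and block size are hypothetical.

```python
# Illustrative online-softmax blockwise attention (forward pass only).
# Not from any repository above; a plain-NumPy sketch of the idea.
import numpy as np

def blockwise_attention(q, k, v, block=64):
    """q: (n, d), k: (m, d), v: (m, d_v). Processes K/V in blocks while
    keeping running softmax statistics per query row."""
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros((n, v.shape[1]))
    m_run = np.full(n, -np.inf)   # running row-wise max of the scores
    l_run = np.zeros(n)           # running softmax denominator

    for start in range(0, k.shape[0], block):
        kb = k[start:start + block]
        vb = v[start:start + block]
        s = (q @ kb.T) * scale                  # scores for this K/V block
        m_new = np.maximum(m_run, s.max(axis=1))
        alpha = np.exp(m_run - m_new)           # rescale previous partial results
        p = np.exp(s - m_new[:, None])
        l_run = alpha * l_run + p.sum(axis=1)
        out = out * alpha[:, None] + p @ vb
        m_run = m_new

    return out / l_run[:, None]
```

The result matches a reference `softmax(q @ k.T / sqrt(d)) @ v` up to floating-point tolerance; the Triton and CUDA kernels above apply the same recurrence per tile in on-chip memory.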