Red Hat and Tesla engineers tackled a real production problem together.
3x higher output tokens/sec and 2x faster time to first token (TTFT) on Llama 3.1 70B with KServe + llm-d + vLLM. Fixes were pushed upstream to KServe along the way.
This is what open source looks like.

Production-Grade LLM Inference at Scale with KServe, llm-d, and vLLM | llm-d
How migrating from a simple vLLM deployment to a robust MLOps platform built on KServe, llm-d's intelligent routing, and vLLM solved significant scaling and operational challenges in LLM deployment, using deep customization and prefix-cache-aware routing to maximize GPU utilization.
llm-d (llm-d.ai)
#RedHat #Tesla #RedHatAI #vLLM #Pytorch #Kubernetes #OpenShift #KServe #llmd #Llama #OpenSource
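For readers new to the idea, here is a minimal sketch of what prefix-cache-aware routing means in principle: route a request to the replica whose cached prompts share the longest prefix with it, so the engine can reuse already-computed KV-cache blocks. This is an illustration only; all names (`route`, `shared_prefix_len`, the pod caches) are hypothetical, and llm-d's actual scheduler scores cache locality at the KV-block level rather than comparing raw strings.

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Length of the common prefix of two strings (stand-in for token overlap)."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(prompt: str, replica_caches: dict[str, list[str]]) -> str:
    """Illustrative prefix-cache-aware routing: pick the replica whose cached
    prompts overlap most with this one, maximizing KV-cache reuse."""
    best_replica, best_len = None, -1
    for replica, cached in replica_caches.items():
        overlap = max((shared_prefix_len(prompt, c) for c in cached), default=0)
        if overlap > best_len:
            best_replica, best_len = replica, overlap
    return best_replica

# Hypothetical cache state for two vLLM pods:
caches = {
    "pod-a": ["You are a helpful assistant. Summarize:"],
    "pod-b": ["Translate to French:"],
}
print(route("You are a helpful assistant. Summarize: the news", caches))  # pod-a
```

A plain round-robin router would ignore this overlap and spread related requests across pods, forgoing the cache hits that drive the TTFT gains described above.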