Seattle Skeptics on AI

Tag: cost per token

How to Choose Batch Sizes to Minimize Cost per Token in LLM Serving

Tamara Weed, Nov 24, 2025

Learn how to choose batch sizes for LLM serving to cut cost per token by up to 87%, with real-world examples, optimal batch sizes, GPU limits, and proven cost-saving techniques.
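As a rough, self-contained illustration of the teaser's premise, the sketch below estimates cost per token under a simple assumed model in which throughput grows with batch size until the GPU saturates. The GPU price, peak throughput, and saturation batch size are hypothetical placeholders, not figures from the article.

```python
# A minimal sketch (assumed numbers, not the article's measurements):
# cost per token falls as batching raises GPU throughput, until the
# GPU saturates and further batching stops helping.

GPU_PRICE_PER_HOUR = 2.00       # assumed USD/hour for one GPU
PEAK_TOKENS_PER_SEC = 10_000    # assumed throughput once the GPU is saturated
SATURATION_BATCH = 64           # assumed batch size where throughput plateaus


def tokens_per_sec(batch_size: int) -> float:
    """Simplified throughput model: roughly linear in batch size up to
    saturation, flat afterwards (memory/compute limits cap further gains)."""
    return PEAK_TOKENS_PER_SEC * min(batch_size, SATURATION_BATCH) / SATURATION_BATCH


def cost_per_million_tokens(batch_size: int) -> float:
    """USD per one million generated tokens at a given batch size."""
    tokens_per_hour = tokens_per_sec(batch_size) * 3600
    return GPU_PRICE_PER_HOUR / tokens_per_hour * 1_000_000


if __name__ == "__main__":
    for b in (1, 8, 32, 64, 128):
        print(f"batch={b:>3}  ${cost_per_million_tokens(b):.2f} per 1M tokens")
```

Under these toy numbers, moving from batch 1 to batch 64 cuts the cost from about $3.56 to about $0.06 per million tokens; in practice the plateau point depends on model size, sequence lengths, and GPU memory, which is what the full article works through.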

Categories: Science & Research

Tags: batch size, LLM serving, cost per token, GPU utilization, LLM optimization

© 2026. All rights reserved.