Seattle Skeptics on AI

Tag: batch size

How to Choose Batch Sizes to Minimize Cost per Token in LLM Serving

Tamara Weed, Nov 24, 2025

Learn how to choose batch sizes for LLM serving to cut cost per token by up to 87%. Covers real-world examples, optimal batch sizes, GPU limits, and proven cost-saving techniques.
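
As a quick intuition for how batching drives that saving, here is a minimal sketch in Python. All prices and timings below are illustrative assumptions rather than benchmarks, and cost_per_million_tokens is a hypothetical helper invented for this example. The core idea: each decode step carries a large fixed cost, so running more sequences per step amortizes it across more tokens.

# A minimal sketch of why larger batches cut cost per token.
# All numbers are illustrative assumptions, not measurements.

GPU_COST_PER_HOUR = 2.50  # assumed hourly GPU price, USD
BASE_STEP_TIME_S = 0.05   # assumed fixed time per decode step, seconds
PER_SEQ_TIME_S = 0.002    # assumed marginal time per extra sequence in the batch

def cost_per_million_tokens(batch_size: int) -> float:
    """Cost per 1M generated tokens at a given decode batch size.

    Step time grows only mildly with batch size until the GPU
    saturates, so batching amortizes the fixed per-step cost.
    """
    step_time_s = BASE_STEP_TIME_S + PER_SEQ_TIME_S * batch_size
    tokens_per_second = batch_size / step_time_s  # one token per sequence per step
    cost_per_second = GPU_COST_PER_HOUR / 3600
    return cost_per_second / tokens_per_second * 1_000_000

for bs in (1, 8, 32, 128):
    print(f"batch={bs:>3}  ${cost_per_million_tokens(bs):.2f} per 1M tokens")

Under these toy numbers, moving from batch 1 to batch 128 cuts cost per token by roughly 95%; real savings depend on model size, sequence lengths, and the GPU's memory and compute ceilings.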

Categories:

Science & Research

Tags:

batch size, LLM serving, cost per token, GPU utilization, LLM optimization

Recent posts

  • Prompt Chaining vs Agentic Planning: Which LLM Pattern Fits Your Task?
  • Structured Reasoning Modules in Large Language Models: How Planning and Tool Use Boost Accuracy
  • Monitoring Bias Drift in Production LLMs: What You Need to Know in 2026
  • Parameter Counts in Large Language Models: Why Size and Scale Matter for Capability
  • Sparse Attention and Performer Variants: Efficient Transformer Ideas for LLMs

Categories

  • Science & Research

Archives

  • March 2026
  • February 2026
  • January 2026
  • December 2025
  • November 2025
  • October 2025
  • September 2025
  • August 2025
  • July 2025

Tags

vibe coding, large language models, AI coding tools, prompt engineering, generative AI, LLM security, AI compliance, AI governance, AI coding, transformer models, AI code security, GitHub Copilot, AI development, LLM deployment, AI coding assistants, prompt injection, AI code vulnerabilities, GPU utilization, LLM optimization, AI agents

© 2026. All rights reserved.