Seattle Skeptics on AI

Tag: GPU utilization

How to Choose Batch Sizes to Minimize Cost per Token in LLM Serving

Tamara Weed, Nov 24, 2025

Learn how to choose batch sizes for LLM serving to cut cost per token by up to 87%. Real-world examples, optimal batch sizes, GPU limits, and proven cost-saving techniques.

Categories:

Science & Research

Tags:

batch size, LLM serving, cost per token, GPU utilization, LLM optimization

Recent posts

  • State-Level Generative AI Laws in the United States: California, Colorado, Illinois, and Utah
  • ROI Modeling for Vibe Coding: How AI-Powered Development Cuts Costs, Speeds Up Delivery, and Boosts Quality
  • Shadow AI Remediation: How to Bring Unapproved AI Tools into Compliance
  • Multi-Agent Systems with LLMs: How Specialized AI Agents Collaborate to Solve Complex Problems
  • Budgeting for Generative AI Programs: How to Plan Costs and Measure Real Value

Categories

  • Science & Research

Archives

  • February 2026
  • January 2026
  • December 2025
  • November 2025
  • October 2025
  • September 2025
  • August 2025
  • July 2025

Tags

vibe coding, large language models, generative AI, AI coding tools, LLM security, AI governance, prompt engineering, AI coding, AI compliance, transformer models, AI agents, AI code security, AI implementation, GitHub Copilot, data privacy, AI development, LLM architecture, GPU optimization, AI in healthcare, Parapsychological Association

© 2026. All rights reserved.