Seattle Skeptics on AI

Tag: LLM serving

How to Choose Batch Sizes to Minimize Cost per Token in LLM Serving

Tamara Weed, Nov 24, 2025

Learn how to choose batch sizes for LLM serving to cut cost per token by up to 87%, with real-world examples, optimal batch sizes, GPU limits, and proven cost-saving techniques.
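The intuition behind a headline number like that: with a fixed hourly GPU cost, cost per token is the hourly price divided by token throughput, and throughput usually climbs steeply with batch size before flattening at the hardware's limits. Below is a minimal Python sketch of that relationship; the GPU price, peak throughput, and saturation point are made-up assumptions for illustration, not measurements from the post.

```python
# Toy model of cost per token vs. batch size.
# All numbers here are illustrative assumptions, not figures from the article.

GPU_COST_PER_HOUR = 2.00  # hypothetical GPU rental price, USD/hour

def tokens_per_second(batch_size: int) -> float:
    """Assumed saturating throughput curve: near-linear gains at small
    batches, flattening as the GPU's compute/memory limits are reached."""
    peak_tps = 10_000.0   # hypothetical saturated throughput
    half_saturation = 8   # hypothetical batch size that reaches half of peak
    return peak_tps * batch_size / (batch_size + half_saturation)

def cost_per_million_tokens(batch_size: int) -> float:
    """Fixed hourly GPU cost amortized over the tokens served in that hour."""
    return GPU_COST_PER_HOUR / (tokens_per_second(batch_size) * 3600) * 1e6

for b in (1, 4, 16, 64):
    print(f"batch={b:>3}: ${cost_per_million_tokens(b):.3f} per 1M tokens")
```

With these assumed numbers, moving from batch size 1 to 64 drops the cost from about $0.50 to about $0.06 per million tokens, roughly the ballpark of the 87% figure; real curves depend on the model, GPU, and latency budget.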

Categories: Science & Research

Tags: batch size, LLM serving, cost per token, GPU utilization, LLM optimization

Recent posts

  • HR Automation with Generative AI: Streamline Job Descriptions, Interviews, and Onboarding
  • Beyond CRUD: Vibe Coding Complex Distributed Systems
  • How Large Language Models Learn: Self-Supervised Training at Internet Scale
  • Vibe Coding for Knowledge Workers: Tools That Save Hours Every Week
  • Supply Chain Optimization with Generative AI: Demand Forecast Narratives and Exceptions

© 2025. All rights reserved.