Seattle Skeptics on AI

Tag: LLM size

Parameter Counts in Large Language Models: Why Size and Scale Matter for Capability

Tamara Weed, Dec 20, 2025

Parameter count in large language models shapes their reasoning power, knowledge retention, and task performance. But bigger isn't always better: architecture, quantization, and efficiency matter just as much as raw size.
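As a rough illustration of why quantization matters alongside raw size, here is a minimal back-of-the-envelope sketch in Python. The model sizes and bit widths below are illustrative assumptions, not figures from the post; the point is only how parameter count and bits per parameter combine to set the weight-memory footprint.

    # Sketch: approximate weight-storage footprint of an LLM.
    # Model sizes (7B, 70B) and bit widths are assumed examples.

    def model_memory_gb(num_params: float, bits_per_param: int) -> float:
        """Approximate weight storage in gigabytes."""
        bytes_total = num_params * bits_per_param / 8
        return bytes_total / 1e9

    for params, label in [(7e9, "7B"), (70e9, "70B")]:
        for bits in (16, 8, 4):  # fp16, int8, int4 quantization
            print(f"{label} model at {bits}-bit: ~{model_memory_gb(params, bits):.0f} GB")

    # Quantizing from 16-bit to 4-bit cuts weight memory roughly 4x
    # (e.g. ~140 GB -> ~35 GB for a 70B model), which is why efficiency
    # techniques can matter as much as raw parameter count.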

Categories:

Science & Research

Tags:

large language models, parameter count, LLM size, model scaling, AI capabilities

Recent post

  • Fine-Tuning for Faithfulness in Generative AI: Supervised vs. Preference Methods to Reduce Hallucinations
  • Secure Prompting for Vibe Coding: How to Ask for Safer Implementations
  • Navigating the Generative AI Landscape: Practical Strategies for Leaders
  • SLAs and Support: What Enterprises Really Need from LLM Providers in 2025
  • Ethical Review Boards for Generative AI Projects: How They Work and What They Decide

Categories

  • Science & Research

Archives

  • December 2025
  • November 2025
  • October 2025
  • September 2025
  • August 2025
  • July 2025

Tags

vibe coding, generative AI, large language models, AI compliance, AI governance, AI coding tools, LLM security, prompt engineering, AI coding, transformer models, AI implementation, Parapsychological Association, psi research, paranormal studies, psychic phenomena, parapsychology, no-code apps, knowledge worker productivity, AI app builder, anti-pattern prompts

© 2025. All rights reserved.