AI, Bit by Bit — Byte-Sized Insights for Professionals, Creatives, Builders & Enthusiasts
AightBits
Category: Inference
Introducing AightBot, a WordPress LLM Chatbot Plugin
Building a Budget LLM Inference Box in Late 2025
How Iterative Prompting Can Elevate Lightweight LLMs to the Heavyweight Class
LLM Limitations, Weak Points & Blind Spots: Math
Introduction to Tree of Thought (ToT) Prompting
Pattern Priming in Prompting: How to Shape LLM Output with Statistical Cues
How Reasoning Models Like DeepSeek-R1 Work
Understanding Sampler Settings in AI Text Generation
Smart Assistants and the Evolution of Rules-Based AI to Transformer-based LLMs
User-Induced Bias in Language Models: A Simple Demonstration