# llm

6 articles tagged with this topic.

WillWin: predicting five championships nightly with one local LLM
AI Engineering

WillWin is a personal technical experiment: a single open-weights 32B model running on a local RTX 5090 reads public data sources and publishes nightly probability estimates for the 2026 World Cup, Eurovision, F1, the Tour de France, and the Oscars. It is not a prediction service and not betting advice.

I built a voice chat with two talking dogs
AI Engineering

A real-time voice chat on thedoodlecast.com lets you speak to Rusty and Oreo — and hear them talk back in character. Here's how it works, the stack behind it, and how to apply for early access.

Introducing Bulb — Your AI Startup Idea Lab
AI Engineering

Got a startup idea? Bulb puts it through the gauntlet: four AI evaluators with wildly different perspectives tear it apart, debate each other, then a final verdict tells you whether to build it or bin it. Free to use, no signup required.

How We Built the TheDoodleCast AI Chatbot
AI Engineering

How we built a real-time AI chatbot for TheDoodleCast using Ollama, a local RTX 5090, Cloudflare Tunnel, and a custom streaming UI — without paying per token.