FAQ

Does Billy need an internet connection?

No. Billy runs entirely offline against your local Ollama server. The only time it needs internet is to download models (/pull) or if you configure a remote backend.

Which model should I use?

For coding tasks:

  • codellama — Meta’s code-focused model
  • deepseek-coder — Excellent for code completion
  • mistral — Great all-rounder, good for chat + code

For general chat:

  • llama3.2 — Fast 3B model, good for most tasks
  • llama3.1 — Larger, more capable
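Any of these models only needs to be downloaded once while online; after that it runs fully offline. A typical download using the standard Ollama CLI (llama3.2 shown as an example) looks like:

```shell
# One-time download; requires internet, after which the model runs locally.
# Inside Billy, the equivalent is the /pull slash command.
ollama pull llama3.2
```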

How is this different from GitHub Copilot?

                  Billy.sh                             GitHub Copilot
Cost              Free (open source) or one-time fee   $10/month
Data              Stays on your machine                Sent to GitHub/OpenAI
Works offline     Yes                                  No
Model choice      Any Ollama model                     Fixed models
IDE integration   Coming soon                          Built-in

Can I use a backend other than Ollama?

Not yet, but Groq and custom HTTP backends are on the roadmap.

Is Billy production-ready?

Billy is alpha. It works, but expect rough edges.

Where does Billy store its data?

Everything is in ~/.localai/:

  • config.toml — your settings
  • history.db — conversations and memories (SQLite)