Effortless AI Prompting, Completely Private & Offline


We don't even know how to spell Munthly Subskripshun ☉ ‿ ⚆

Take control of your AI usage.

Before you sign up for another AI subscription, investigate local and offline AI solutions like ShadowQuill and Ollama.

Your computer has the power to free you from unnecessary proprietary AI fees.

Your Private AI

ShadowQuill brings state-of-the-art AI to your desktop or laptop without compromising your privacy or wallet ;)

100% Private & Offline

Your prompts and data never leave your computer. All processing happens locally, giving you complete control and confidentiality.

Powered by Google Gemma 3

Gemma 3 is Google's family of open-weight models. Because the weights are freely available, the model runs entirely on your own hardware rather than in the cloud, giving you high-performance AI locally, provided your machine meets the specs.

Simple & Accessible

Runs on any modern computer with at least 8GB of RAM, making it easily accessible to users worldwide. Through a simple initial setup UI, ShadowQuill gets you connected to your local Ollama Gemma 3 instance in seconds.
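Under the hood, ShadowQuill simply talks to the Ollama server already running on your machine (port 11434 by default). As a rough illustration rather than ShadowQuill's actual code, you can check that Ollama is up and see which models it has installed with a single terminal command:

curl http://localhost:11434/api/tags

If that returns a JSON list of your local models, and nothing ever left localhost, you've just seen the whole offline story in one command.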

Your Data (and Wallet) are Safe

ShadowQuill was founded on the principle of absolute privacy and accessible education.


A Truly Offline-First Experience

This project was built to provide a powerful AI tool that respects you. There are no servers, no cloud processing, and no data collection. When you combine two open-source projects, ShadowQuill and Ollama, with Google's open-weight Gemma 3 model, your prompts and ideas never leave your computer.

Power Without the Paywall

We want this to be a learning experience. Most AI tools hide behind expensive subscriptions, but ShadowQuill exists to show you the opposite: you don't need a monthly fee to access powerful tools. You can own the AI technology you use.

What you need to run it effectively:

  • A computer with at least 8GB of RAM
  • Approximately 4GB of free storage

Local Models & Hardware

Choose the right Gemma 3 model for you and learn what it takes to run ShadowQuill offline and locally with Ollama.

Hardware Recommendations

1B Parameter Model

Recommended RAM: 8 GB+

Ideal for basic tasks and older hardware. Runs well on most modern CPUs.

4B Parameter Model (ShadowQuill favorite!)

Recommended RAM: 16 GB+

A great balance of performance and resource usage. A dedicated GPU is recommended.

12B Parameter Model

Recommended RAM: 24 GB+

For more complex tasks. Requires a powerful CPU and a modern dedicated GPU.

27B Parameter Model

Recommended RAM: 48 GB+

The most powerful model. A high-end GPU (NVIDIA RTX) is strongly recommended. (Still cheaper than most yearly AI 'pro' plans...)
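Each of these sizes maps to a tag in the Ollama model library. Assuming the standard Gemma 3 tags, pulling the size you settled on looks like this:

ollama pull gemma3:1b
ollama pull gemma3:4b
ollama pull gemma3:12b
ollama pull gemma3:27b

Pick one; there's no need to download all four.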

About the Gemma 3 Models

  • Built with the same research and technology as Google's powerful Gemini models.
  • The 4B, 12B, and 27B models are multimodal, capable of understanding both text and images.
  • Features a massive 128,000 token context window (for 4B+ models).
  • Trained with support for over 140 languages, making it a truly global model family.
  • Includes official Quantization-Aware Trained (QAT) versions for high accuracy.
  • Beyond powering ShadowQuill's offline prompt builder, it also works well as an offline AI chat via Ollama (see the example below).

You can see why it was chosen ԅ(≖‿≖ԅ)
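As noted in the list above, the same local model doubles as a plain offline chat. With the 4B model installed, for example, you can start an interactive session straight from your terminal:

ollama run gemma3:4b

Type a message, get a reply, no network connection required.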

Getting started with ShadowQuill

Step 1 — Install Ollama

Download and install Ollama for your system from the official Ollama website.


After installing and opening Ollama, choose and install your preferred Gemma 3 model.

For example, to download the Gemma 3 4B model, run the following in your CLI:

ollama pull gemma3:4b

... Or download Gemma 3 directly from Ollama's AI chat interface.


For a deeper dive into the ollama CLI and additional usage beyond these basics, check out the official open-source Ollama GitHub repository.
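Before moving on, you can confirm the model finished downloading by listing everything Ollama has installed locally:

ollama list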

Step 2 — Install ShadowQuill

Download and install ShadowQuill for your platform:

  • Windows
  • macOS

Once Ollama is running locally and your chosen Gemma 3 model is installed, ShadowQuill will automatically detect it and let you generate prompts from natural language inputs, offline and privately.
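If you ever want to sanity-check the same local endpoint ShadowQuill relies on (a manual illustration, not ShadowQuill's own code), you can send a prompt straight to Ollama's API on your machine:

curl http://localhost:11434/api/generate -d '{"model": "gemma3:4b", "prompt": "Say hello in one sentence.", "stream": false}'

A response here means the full offline pipeline, Ollama, Gemma 3, and your hardware, is working end to end.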

When will ShadowQuill be released?

Official release target is sometime in December 2025. Stay tuned!

If you would like to check the current status, become a contributor, or download the current pre-release/beta version of ShadowQuill, visit the open-source GitHub repository!


Proudly Open Source

ShadowQuill is a passion project from AI enthusiasts, built for users who believe in local-first privacy. Check out the source code, report issues, or contribute to the project on GitHub.

View on GitHub

Slide in our very offline DMs... jk hehe

But seriously: have a question, a suggestion, or just want to say hi? Get in contact!