Share your GPU with the Neurlap network and earn credits. Spend those credits to access any model — powered by a global community of providers.
The credit cycle
Run any open-source LLM on your hardware. Neurlap connects you to the network automatically.
Every inference request your GPU serves earns credits. Bigger models earn more.
Spend your earned credits to query models hosted by other providers across the network.
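To make the cycle concrete, here is a toy ledger in Python. The rates are invented purely for illustration; this page doesn't specify Neurlap's actual per-token rates.

```python
# Toy credit ledger -- all rates below are made up for illustration;
# Neurlap's real per-token rates are not specified on this page.
EARN_PER_TOKEN = 1.0   # credits earned per token your GPU generates (hypothetical)
SPEND_PER_TOKEN = 1.0  # credits spent per token you consume (hypothetical)

balance = 0.0

# Your GPU serves 50,000 tokens of other people's requests...
balance += 50_000 * EARN_PER_TOKEN

# ...which funds 50,000 tokens of your own queries on the network.
balance -= 50_000 * SPEND_PER_TOKEN

print(f"credits remaining: {balance}")  # 0.0 -- no dollars moved, only credits
```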
Zero dollars spent — only credits exchanged
Why Neurlap
Your GPU sits idle most of the day. With Neurlap, you pick a model from our catalog, download it in one click, and start serving real inference requests — every token earns you credits to use any model in the network.
Traditional approach vs. with Neurlap
Get started
Download the lightweight Neurlap client. Runs in your system tray on macOS, Windows, or Linux.
Browse the model catalog and download any GGUF model. The built-in inference engine handles loading and serving (see the sketch after these steps).
Credits accrue as your GPU serves requests. Spend them on any model in the network.
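Neurlap's engine does this for you, but if you're curious what serving a GGUF model involves under the hood, here is a rough sketch using the open-source llama-cpp-python library. The model path and prompt are placeholders, and this is not Neurlap's actual code.

```python
# Rough sketch of serving a GGUF model with the open-source
# llama-cpp-python library (illustrative -- not Neurlap's engine).
from llama_cpp import Llama

# Placeholder path: any GGUF model downloaded from the catalog.
llm = Llama(model_path="./models/example-model.gguf")

# Serve one completion request; a real node would loop over
# inference requests arriving from the network.
result = llm("Explain GPU sharing in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```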
Supports any GPU
Built for the community
Change one line of code. Your existing app just works.
Requests route to the nearest provider with the best latency.
Powered by real people sharing their GPUs. The more providers join, the faster and more reliable the network becomes.
The coordinator never stores prompts. Providers see payloads only during inference.
Earn per token generated. Spend per token consumed. No subscriptions.
Connect when you want, disconnect when you want. No penalty for going offline. Your node earns when it's on, and costs nothing when it's off.
Works everywhere
Swap one line and your entire stack runs on community GPUs. Compatible with every major AI framework.
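As an illustration, assuming Neurlap exposes an OpenAI-compatible endpoint (the URL, key, and model name below are placeholders, not documented values), the one-line swap with the official openai Python SDK would look like this:

```python
# The one-line change: point an OpenAI-compatible client at the network.
# base_url and api_key are placeholders -- this assumes Neurlap exposes
# an OpenAI-compatible endpoint; check the real docs for both values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.neurlap.example/v1",  # hypothetical endpoint: the one line you change
    api_key="YOUR_NEURLAP_KEY",                 # placeholder credential
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # any model name from the catalog (illustrative)
    messages=[{"role": "user", "content": "Hello from the Neurlap network!"}],
)
print(response.choices[0].message.content)
```

Everything else in the application stays the same; only the client constructor changes.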
FAQ
Ready to start?
Free for contributors. Premium plans coming soon.