Overview
Configure Hugging Face as the LLM provider for KamoCRM's AI assistant. Access hundreds of open-source models via Hugging Face's Inference Endpoints — useful for teams that want to experiment with specialized models (domain-tuned, multilingual, custom fine-tunes) without hosting infrastructure.
Common use cases
Specialized open-source models
Route to domain-specialized models (legal, medical, finance-tuned) hosted on Hugging Face.
Custom fine-tuned model deployment
If you've fine-tuned a model on your data and deployed it to HF Inference Endpoints, point KamoCRM at it.
Model experimentation
Try emerging models (often appearing on HF before commercial providers) before deciding on a primary backend.
Setup
1. Sign up at huggingface.co
2. Deploy your chosen model to Inference Endpoints, or use serverless inference
3. In KamoCRM, navigate to Settings → AI → Providers
4. Select Hugging Face as a provider
5. Enter your HF API token and endpoint URL
6. Test with a sample query
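For the final step, it can help to confirm your endpoint and token work outside KamoCRM first. The sketch below builds the kind of authenticated request a text-generation Inference Endpoint expects; the endpoint URL and token are placeholders, and the actual network call is left commented out so you can substitute your own values from the HF dashboard.

```python
# Minimal sketch: building an authenticated request for a Hugging Face
# text-generation Inference Endpoint. Endpoint URL and token below are
# placeholders, not real credentials.
import json
import urllib.request

def build_hf_request(endpoint_url: str, token: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated POST request with a text-generation payload."""
    payload = json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 64}})
    return urllib.request.Request(
        endpoint_url,
        data=payload.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hf_request(
    "https://example.endpoints.huggingface.cloud",  # placeholder endpoint URL
    "hf_xxx",                                       # placeholder API token
    "Summarize this support ticket in one sentence.",
)
# To actually send the query: urllib.request.urlopen(req)
# (omitted here so the sketch runs without network access).
print(req.get_method())  # → POST
```

If this request succeeds against your endpoint, the same URL and token should work when entered into KamoCRM's provider settings.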
What data syncs
AI request/response routing
Token/compute usage (billed by Hugging Face)
Model metadata
Ready to connect Hugging Face?
Start your free KamoCRM account and connect in minutes.