LocalLab provides comprehensive model management through the `locallab models` command group, which lets you download, manage, and organize AI models locally before starting the server.
The model management system helps you:
- Download models locally for offline use and faster startup
- List cached models with detailed information
- Remove models to free up disk space
- Discover available models from registries and HuggingFace Hub
- Get detailed model information including system compatibility
- Clean up cache to remove orphaned files
## Quick Start

```bash
# Discover available models
locallab models discover

# Download a model
locallab models download microsoft/DialoGPT-medium

# List your cached models
locallab models list

# Get detailed information about a model
locallab models info microsoft/DialoGPT-medium

# Remove a model to free space
locallab models remove microsoft/DialoGPT-medium
```

## locallab models list

List all locally cached models with detailed information.
```bash
locallab models list [OPTIONS]
```

Options:

- `--format [table|json]` - Output format (default: `table`)
- `--registry-only` - Show only registry models
- `--custom-only` - Show only custom models
Examples:
```bash
# List all cached models in table format
locallab models list

# List only registry models
locallab models list --registry-only

# Export model list as JSON
locallab models list --format json
```

Output includes:
- Model ID and name
- Size on disk
- Cache status and date
- Model type (Registry/Custom)
- Brief description
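The JSON format is handy for scripting. As a sketch (assuming the JSON output is an array of objects with a numeric `size` field in bytes — the exact schema isn't documented here), you could total your cache usage like this:

```bash
# Sketch: total cache usage from the JSON listing.
# Assumption: output is a JSON array of objects with a numeric "size" in bytes.
locallab models list --format json | python3 -c '
import json, sys
models = json.load(sys.stdin)
total = sum(m.get("size", 0) for m in models)
print(f"{len(models)} models, {total / 1e9:.2f} GB total")
'
```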
## locallab models download

Download a model locally for offline use and faster server startup.

```bash
locallab models download <model_id> [OPTIONS]
```

Options:

- `--force` - Force re-download even if the model exists
- `--no-cache-update` - Skip updating cache metadata
Examples:
```bash
# Download a registry model
locallab models download microsoft/DialoGPT-medium

# Download a custom HuggingFace model
locallab models download huggingface/CodeBERTa-small-v1

# Force re-download an existing model
locallab models download microsoft/DialoGPT-medium --force
```

Features:
- Progress bar with download speed and ETA
- Automatic validation of downloaded files
- Integration with HuggingFace Hub authentication
- Graceful error handling and retry logic
- Cache metadata tracking
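In scripts it can be useful to skip models that are already cached rather than relying on the downloader's own existence check. A minimal sketch, assuming cached model IDs appear verbatim in the `locallab models list` output:

```bash
# Sketch: download only if the model is not already cached.
# Assumption: cached model IDs appear verbatim in the list output.
model="microsoft/DialoGPT-medium"
if locallab models list | grep -qF "$model"; then
  echo "already cached: $model"
else
  locallab models download "$model"
fi
```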
## locallab models remove

Remove locally cached models to free up disk space.

```bash
locallab models remove <model_id> [OPTIONS]
```

Options:

- `--force` - Skip the confirmation prompt
Examples:
```bash
# Remove a model with confirmation
locallab models remove microsoft/DialoGPT-medium

# Remove without confirmation
locallab models remove microsoft/DialoGPT-medium --force
```

Safety features:
- Confirmation prompt by default
- Shows model size before removal
- Cleans up all associated files
- Updates cache metadata
## locallab models discover

Discover available models from the LocalLab registry and HuggingFace Hub.

```bash
locallab models discover [OPTIONS]
```

Options:

- `--search <keywords>` - Search models by keywords, tags, or description
- `--limit <number>` - Maximum number of models to show (default: 20)
- `--format [table|json]` - Output format (default: `table`)
- `--registry-only` - Show only LocalLab registry models
- `--hub-only` - Show only HuggingFace Hub models
- `--sort [downloads|likes|recent]` - Sort HuggingFace models by popularity or recency
- `--tags <tags>` - Filter by comma-separated tags (e.g., "conversational,chat")
Examples:
```bash
# Discover all available models (registry + HuggingFace Hub)
locallab models discover

# Search for specific models across all sources
locallab models discover --search "code generation"

# Search only in the LocalLab registry
locallab models discover --search "phi" --registry-only

# Find models by tags
locallab models discover --tags "conversational,chat" --limit 10

# Get popular models sorted by downloads
locallab models discover --sort downloads --limit 15

# Export results as JSON for processing
locallab models discover --format json --limit 5
```

Information shown:
- Model ID and name
- Model size and type (Registry/HuggingFace)
- Download count and popularity metrics
- Cache status (cached/available)
- Brief description
- Author information
Network Requirements:
- Registry models: Always available (offline)
- HuggingFace Hub models: Requires internet connection
- Graceful fallback when HuggingFace Hub is unavailable
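If you want an explicit fallback in your own scripts (in addition to the built-in one), a sketch like this retries against the offline registry when the combined search fails:

```bash
# Sketch: retry against the offline registry if the Hub search fails.
locallab models discover --search "chat" \
  || locallab models discover --search "chat" --registry-only
```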
## locallab models info

Get detailed information about a specific model.

```bash
locallab models info <model_id>
```

Examples:

```bash
# Get detailed info about a model
locallab models info microsoft/DialoGPT-medium
```

Information includes:
- Model name and description
- Size and requirements
- Local cache status and location
- System compatibility check
- Available actions (download/remove)
- Fallback model information
## locallab models clean

Clean up orphaned cache files and free disk space.

```bash
locallab models clean
```

Features:
- Finds empty model directories
- Identifies temporary and lock files
- Shows what will be cleaned before removal
- Confirmation prompt for safety
- Reports space freed after cleanup
## Cache Location

Models are cached in the standard HuggingFace Hub cache directory:

- Windows: `%USERPROFILE%\.cache\huggingface\hub`
- macOS/Linux: `~/.cache/huggingface/hub`

You can override this location by setting the `HF_HOME` environment variable.
```
~/.cache/huggingface/hub/
├── models--microsoft--DialoGPT-medium/
│   ├── refs/
│   ├── snapshots/
│   └── blobs/
└── models--huggingface--CodeBERTa-small-v1/
    ├── refs/
    ├── snapshots/
    └── blobs/
```
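The Hub cache names each directory `models--<org>--<name>`, so a model ID maps to its on-disk path mechanically. A sketch for locating and sizing a cached model:

```bash
# Map a model ID to its HF Hub cache directory and report its size.
model_id="microsoft/DialoGPT-medium"
cache_dir="${HF_HOME:-$HOME/.cache/huggingface}/hub/models--${model_id//\//--}"
echo "$cache_dir"
du -sh "$cache_dir" 2>/dev/null || echo "not cached yet"
```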
LocalLab maintains additional metadata in `~/.locallab/model_cache.json`:
- Download timestamps
- Access counts
- Download methods
- Custom model information
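The exact schema of `model_cache.json` isn't documented here, so treat the field names below as assumptions; this sketch just lists the file's top-level entries if it exists:

```bash
# Sketch: list entries in LocalLab's metadata file.
# Assumptions: top-level JSON object keyed by model ID; "downloaded_at" field.
python3 - <<'EOF'
import json, pathlib

path = pathlib.Path.home() / ".locallab" / "model_cache.json"
if path.exists():
    data = json.loads(path.read_text())
    for model_id, meta in data.items():
        print(model_id, meta.get("downloaded_at", "?"))
else:
    print("no metadata file yet")
EOF
```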
## Environment Variables

- `HF_HOME` - Override the HuggingFace cache directory
- `HF_TOKEN` - HuggingFace authentication token for private models
Downloaded models are automatically available when starting the LocalLab server:

```bash
# Download the model first
locallab models download microsoft/DialoGPT-medium

# Start the server - the model loads faster from cache
locallab start --model microsoft/DialoGPT-medium
```

Use shell scripting for batch operations:
```bash
# Download multiple models
for model in "microsoft/DialoGPT-medium" "microsoft/DialoGPT-large"; do
  locallab models download "$model"
done

# Remove cached models larger than 1 GB (assumes "size" is reported in bytes)
locallab models list --format json | jq -r '.[] | select(.size > 1000000000) | .id' | \
  xargs -I {} locallab models remove {} --force
```

## Troubleshooting

Download fails with an authentication error:
```bash
# Set your HuggingFace token
export HF_TOKEN="your_token_here"
locallab models download private/model
```

Model not found:
```bash
# Check if the model exists in the registry
locallab models discover --search "model_name"

# Try the full HuggingFace model ID
locallab models download organization/model-name
```

Cache corruption:
```bash
# Clean up corrupted cache entries
locallab models clean

# Force re-download
locallab models download model_id --force
```

Disk space issues:
```bash
# Check cache size
locallab models list

# Remove large unused models
locallab models remove large_model_id

# Clean orphaned files
locallab models clean
```

## Getting Help

```bash
# Get help for any command
locallab models --help
locallab models download --help
locallab models list --help
```

## Best Practices

- Download models before first use for faster server startup
- Regularly clean cache to free up disk space
- Use registry models when possible for better support
- Check system compatibility before downloading large models
- Set HF_TOKEN for accessing private models
- Monitor disk usage with `locallab models list`
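For a quick disk-usage check outside of LocalLab, a sketch that sizes the Hub cache directly (falls back to a message if the cache doesn't exist yet):

```bash
# Sketch: report HuggingFace cache size and remaining disk space.
cache="${HF_HOME:-$HOME/.cache/huggingface}/hub"
du -sh "$cache" 2>/dev/null || echo "cache directory not found: $cache"
df -h "$HOME" | tail -n 1
```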
For more information, see the CLI Reference or Configuration Guide.