feat: Jan can load large model with multiple gguf files #2898
Labels
P1: important
Important feature / fix
roadmap: Cortex
Cortex, Cortex llama cpp, core extensions
type: feature request
A new feature
Milestone
Problem
Jan only supports loading a single GGUF model file at a time; models split across multiple GGUF files cannot be loaded.
Success Criteria
We can merge the split GGUF files into a single file on behalf of users and then load the resulting model for them.
Additional context
Approach
https://www.reddit.com/r/LocalLLaMA/comments/1cf6n18/how_to_use_merge_70b_split_model_ggufpart1of2/
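The linked thread covers merging raw byte-split files (named like `model.gguf.part1of2`), which can simply be concatenated in order. A minimal sketch of that approach in Python (the function name and file layout are hypothetical; note this only applies to raw byte splits, while llama.cpp-style shards named `*-00001-of-00002.gguf` are structured GGUF files and should be merged with llama.cpp's `gguf-split --merge` tool instead):

```python
import re
from pathlib import Path

def merge_gguf_parts(first_part: Path) -> Path:
    """Concatenate raw byte-split files like model.gguf.part1of2 into model.gguf.

    Only valid for raw splits created by byte-level splitting (e.g. `split`);
    llama.cpp shard files (*-00001-of-00002.gguf) must be merged with
    `gguf-split --merge` instead.
    """
    m = re.fullmatch(r"(.+\.gguf)\.part(\d+)of(\d+)", first_part.name)
    if not m:
        raise ValueError(f"not a .partXofY file: {first_part.name}")
    base, total = m.group(1), int(m.group(3))
    out = first_part.with_name(base)
    with out.open("wb") as dst:
        for i in range(1, total + 1):
            # Append each part in order to rebuild the original file.
            part = first_part.with_name(f"{base}.part{i}of{total}")
            dst.write(part.read_bytes())
    return out
```

In-app, Jan could detect the `.partXofY` naming pattern at import time, run a merge like this with a progress indicator (parts can be tens of GB), and then load the merged file through the existing single-file path.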