LISA: Train 32B-120B language models on limited RAM (8GB). Layer-by-layer processing + LoRA adapters = 97% memory reduction.
pytorch lora low-memory distributed-training fine-tuning privacy-preserving federated-learning llm cpu-only layer-wise-training
Updated Apr 3, 2026 - Python
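The recipe the description names — layer-by-layer processing plus LoRA adapters — can be sketched in a few lines. This is a minimal, pure-Python illustration, not LISA's actual API; the helper names (`lora_forward`, `layerwise_forward`, `layer_loader`) and the rank/shape choices are assumptions for the example. The base weight matrix `W` stays frozen and only the small rank-`r` matrices `A` and `B` would be trained, while layers are materialized one at a time so peak RAM holds a single layer instead of the whole model.

```python
def matvec(W, x):
    # Dense matrix-vector product over plain lists (rows of W times x).
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def lora_forward(x, W, A, B, scale=1.0):
    # LoRA: y = W x + scale * B (A x).
    # W (d x d) is frozen; only A (r x d) and B (d x r) are trainable,
    # so trainable parameters drop from d*d to 2*r*d per layer.
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    return [b + scale * lr for b, lr in zip(base, low_rank)]

def layerwise_forward(x, layer_loader, n_layers):
    # Layer-by-layer processing: load one layer's weights at a time
    # (e.g. streamed or mmapped from disk -- hypothetical loader here),
    # run it, then free it before touching the next layer.
    h = x
    for i in range(n_layers):
        W, A, B = layer_loader(i)
        h = lora_forward(h, W, A, B)
        del W, A, B  # release this layer before loading the next
    return h
```

With a 2x2 identity `W` and zero-initialized `B` (the standard LoRA init), the adapter contributes nothing at first, so `layerwise_forward` reproduces the frozen model's output; training then updates only `A` and `B`.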