## System Requirements

| Requirement | Details |
|---|---|
| Python Version | Python 3.10 or higher |
| GPU (Recommended) | NVIDIA GPU with CUDA 11.8+ |
| VRAM | Minimum 8GB, recommended 16GB+ |
| Operating System | Linux, macOS, or Windows |
## Installation Methods

### Install via pip (Recommended)

The simplest way to install HyperGen:
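A minimal sketch, assuming HyperGen is published on PyPI under the name `hypergen`:

```bash
# Assumes the PyPI distribution is named "hypergen"
pip install hypergen
```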
### Install from Source

For the latest development version or to contribute:

1. Clone the repository
2. Install in editable mode

The `-e` flag installs in editable mode, so changes to the source code are reflected immediately.
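A sketch of both steps; the repository URL is a placeholder, so substitute the project's actual location:

```bash
# 1. Clone the repository (URL is a placeholder)
git clone https://github.com/hypergen/hypergen.git
cd hypergen

# 2. Install in editable mode
pip install -e .
```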
### Install with uv (Faster)

For faster installation using uv:
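A minimal sketch, again assuming the `hypergen` package name:

```bash
# uv's pip interface is a drop-in, faster replacement for pip here
uv pip install hypergen
```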
## GPU Setup

### CUDA Installation

HyperGen requires PyTorch with CUDA support for GPU acceleration. Pick the build that matches your setup:

- CUDA 12.1
- CUDA 11.8
- CPU Only

The corresponding install commands are sketched below.
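These use PyTorch's official wheel indexes:

```bash
# CUDA 12.1
pip install torch --index-url https://download.pytorch.org/whl/cu121

# CUDA 11.8
pip install torch --index-url https://download.pytorch.org/whl/cu118

# CPU only
pip install torch --index-url https://download.pytorch.org/whl/cpu
```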
### Verify GPU Setup

Check that PyTorch can access your GPU:
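A quick check using PyTorch's standard API:

```python
import torch

print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # your GPU model
print(torch.version.cuda)             # CUDA version PyTorch was built against
```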
## Dependencies

HyperGen automatically installs the following core dependencies:

| Package | Version | Purpose |
|---|---|---|
| torch | >=2.0.0 | Deep learning framework |
| diffusers | >=0.30.0 | Diffusion model pipelines |
| transformers | >=4.40.0 | Text encoders and tokenizers |
| peft | >=0.8.0 | LoRA and parameter-efficient fine-tuning |
| accelerate | >=0.30.0 | Training acceleration |
| safetensors | >=0.4.0 | Safe model serialization |
| pillow | >=10.0.0 | Image processing |
| numpy | >=1.24.0 | Numerical operations |
| fastapi | >=0.104.0 | API server framework |
| uvicorn | >=0.24.0 | ASGI server |
| pydantic | >=2.0.0 | Data validation |
## Optional Dependencies

### Flash Attention (Recommended)

For faster training and inference with attention optimization, install Flash Attention. It requires CUDA and may take several minutes to compile on first installation.
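A typical command, following the upstream flash-attn project's instructions:

```bash
# --no-build-isolation lets the build see your installed torch,
# as recommended by the flash-attn project
pip install flash-attn --no-build-isolation
```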
### xFormers (Alternative Optimization)

An alternative memory-efficient attention implementation:
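xFormers is available on PyPI:

```bash
pip install xformers
```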
### DeepSpeed (Multi-GPU Training)

For distributed training across multiple GPUs:
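DeepSpeed is also a standard PyPI install (it may compile CUDA ops at runtime):

```bash
pip install deepspeed
```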
### Video Models Support

For CogVideoX and other video diffusion models:
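A sketch assuming the project defines a `video` extra; the extra name is an assumption, so verify it against the project's `pyproject.toml`:

```bash
# "[video]" extra is an assumption
pip install "hypergen[video]"
```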
### All Optional Dependencies

Install everything at once:
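A sketch assuming an `all` extra is defined (again an assumption):

```bash
# "[all]" extra is an assumption
pip install "hypergen[all]"
```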
## Development Installation

For contributing to HyperGen, install the development tooling:

- `pytest` - Testing framework
- `pytest-cov` - Coverage reporting
- `ruff` - Linting and formatting
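A sketch assuming a `dev` extra bundles these tools; run it from a source checkout:

```bash
# "[dev]" extra is an assumption; the tools can also be installed directly:
#   pip install pytest pytest-cov ruff
pip install -e ".[dev]"
```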
## Troubleshooting

### CUDA Out of Memory

If you encounter CUDA out of memory errors:

1. Reduce the batch size
2. Enable gradient checkpointing (coming in Phase 2)
3. Use a smaller model or lower precision, as sketched below
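A minimal sketch of point 3 using diffusers (one of the core dependencies); the model ID is a placeholder, and HyperGen's own loading API may differ:

```python
import torch
from diffusers import DiffusionPipeline

# Load weights in float16 to roughly halve VRAM usage.
# The model ID is a placeholder -- substitute the model you actually use.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.to("cuda")
```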
### Installation Fails on Flash Attention

Flash Attention requires specific CUDA versions and compilation:

1. Ensure you have CUDA 11.8 or 12.1 installed
2. Install build tools
3. Skip Flash Attention for now and add it later

Steps 2 and 3 are sketched below.
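Assuming a Debian/Ubuntu system and the `hypergen` package name:

```bash
# Step 2: compiler toolchain (Debian/Ubuntu; adapt for your distro)
sudo apt-get install build-essential
pip install ninja  # speeds up the flash-attn build

# Step 3: install without Flash Attention; it can be added later
pip install hypergen
```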
### Import Errors

If you get import errors after installation:

1. Verify the installation
2. Reinstall in a clean environment

Both steps are sketched below.
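Assuming the distribution and module are both named `hypergen`:

```bash
# 1. Verify the installation
pip show hypergen
python -c "import hypergen; print(hypergen.__version__)"  # __version__ is an assumption

# 2. Reinstall in a clean virtual environment
python -m venv .venv
source .venv/bin/activate
pip install hypergen
```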
## macOS Metal (MPS) Support

For Apple Silicon Macs, use PyTorch's MPS backend. MPS support is experimental; some features may not work as expected.
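A quick availability check using PyTorch's standard API:

```python
import torch

# Select the Metal Performance Shaders device when available
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(device)
```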