Running local AI workloads on AMD MI60 GPUs in a dedicated AI workstation.

Hardware

  • AMD MI60 - datacenter GPU with 32 GB HBM2 (gfx906)
  • AI Workstation - GPU workstation for inference and training

Software Stack

  • vLLM - High-performance LLM inference server
  • ComfyUI - Image generation workflows
  • ROCm - AMD’s GPU compute platform
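The pieces above plug together by running vLLM's OpenAI-compatible server on top of ROCm. A minimal launch sketch, assuming a ROCm-enabled vLLM build; the model name and port are placeholders, not part of this setup:

```shell
# Start vLLM's OpenAI-compatible HTTP server on the MI60.
# Model name and port are placeholders. float16 is the safe dtype on
# gfx906, which lacks native bfloat16 support.
python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen2.5-7B-Instruct \
    --dtype float16 \
    --port 8000
```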

Current Focus

Optimizing inference performance and experimenting with local model deployment for various tasks.
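Because vLLM speaks the OpenAI chat-completions wire format, local deployments can be exercised from the standard library alone. A minimal client sketch, assuming a server on localhost:8000; the model name is a placeholder:

```python
import json
from urllib import request

def chat_payload(prompt: str, model: str = "Qwen/Qwen2.5-7B-Instruct") -> dict:
    # Request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint.
    # The model name is a placeholder; use whatever the server loaded.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.2,
    }

def ask(prompt: str, base_url: str = "http://localhost:8000") -> str:
    # POST the request to a locally running vLLM server and return the reply.
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Using the stock OpenAI request shape means any OpenAI-compatible client library works against the workstation unchanged.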
