Running Stable Diffusion on Apple M1 Chip in Just 15 Seconds


The MLNLP (Machine Learning and Natural Language Processing) community is a well-known natural language processing community at home and abroad, covering NLP master's and doctoral students, university professors, and industry researchers.
The vision of the community is to promote communication and progress between academia, industry, and enthusiasts in natural language processing and machine learning, especially beginners.
Reprinted from | Machine Heart
The recently released text-to-image model Stable Diffusion, developed by LMU Munich and other institutions, has become a sensation: the images it generates are truly cinematic.
[Generated image: "Evolution of Biology"]
Stable Diffusion can run on consumer-grade GPUs with as little as 10 GB of VRAM and generates 512×512 pixel images in a matter of seconds, with no preprocessing or postprocessing required.
Most importantly, Stable Diffusion is open-source, allowing anyone to run and modify it.
Moreover, researchers have combined it with a web UI, creating a painting tool that anyone can use without technical knowledge: there is no need to type parameters by hand; you simply adjust the sliders.
[Generated image: a guard posted outside the castle, with a mounted warrior charging toward it]
Perhaps you want to experience the creative process Stable Diffusion offers but are held back by limited resources. Don't worry: you can run Stable Diffusion in the cloud, and if your hardware allows, you can also run it locally.
Running it locally can be a bit involved, though; getting it to work on an M1 Mac's GPU, for instance, is tricky. This article provides a simple guide to doing exactly that.

1. Implementation Process

First, you need a Mac with an M1 or M2 chip. Second, 16 GB of RAM is recommended; with 8 GB, generation will be very slow. Finally, you need macOS 12.3 or later.
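Before going further, you can check these prerequisites from a terminal. The following stdlib-only Python sketch prints the relevant facts; the values noted in the comments are what you would expect on a supported machine, not something the script enforces:

```python
import os
import platform

arch = platform.machine()        # "arm64" on an M1/M2 Mac
macos = platform.mac_ver()[0]    # e.g. "12.5"; empty string when not on macOS
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

print(f"architecture : {arch}")
print(f"macOS version: {macos or 'not macOS'}")
print(f"RAM          : {ram_gb:.1f} GB")
```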
Once the prerequisites are in place, the next step is to set up Python 3.10. If you are unsure which version you have, run python3 -V to check:
$ python3 -V
Python 3.10.6
If your Python is 3.10 or higher, you can skip the next step. Otherwise, you need to install Python 3.10, and the easiest way is to use Homebrew. Here’s how to install it:
brew update
brew install python
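If you prefer to check the requirement programmatically, this small sketch mirrors the python3 -V check above using the standard library:

```python
import sys

# The guide requires Python 3.10 or newer.
ok = sys.version_info >= (3, 10)
print(f"Python {sys.version_info.major}.{sys.version_info.minor} -> "
      f"{'OK' if ok else 'please install 3.10 or newer'}")
```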
Clone the Repository and Install Dependencies
After Python is installed, the next step is to install Stable Diffusion:
git clone -b apple-silicon-mps-support https://github.com/bfirsh/stable-diffusion.git
cd stable-diffusion
mkdir -p models/ldm/stable-diffusion-v1/
Here, you need to set up a virtual environment to install dependencies:
python3 -m pip install virtualenv
python3 -m virtualenv venv
Then activate the virtual environment:
source venv/bin/activate
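If you want to confirm the environment is actually active before installing anything, this stdlib-only sketch uses the fact that inside a virtualenv the interpreter's prefix differs from the base installation:

```python
import sys

# Inside an activated virtualenv, sys.prefix points into venv/ while
# sys.base_prefix still points at the system interpreter.
in_venv = sys.prefix != sys.base_prefix
print("virtualenv active:", in_venv)
```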
After activation, install the dependencies:
pip install -r requirements.txt
If you see errors like “Failed building wheel for onnx”, you may also need to install these packages:
brew install Cmake protobuf rust
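The apple-silicon branch relies on PyTorch's MPS backend to reach the M1 GPU. Once the dependencies have installed successfully, a quick sanity check can confirm the backend is visible; this sketch assumes the torch installed by requirements.txt is new enough (1.12+) to ship torch.backends.mps, and is guarded so it also reports a missing install:

```python
# Check whether PyTorch can reach the Apple-silicon GPU via the MPS backend.
try:
    import torch
    mps_ok = (getattr(torch.backends, "mps", None) is not None
              and torch.backends.mps.is_available())
    print("MPS backend available:", mps_ok)
except ImportError:
    print("PyTorch is not installed yet -- run `pip install -r requirements.txt`")
```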
Download Weights
Go to the Hugging Face repository, read the license, and click "Access repository". From that page, download sd-v1-4.ckpt (~4 GB) and save it as models/ldm/stable-diffusion-v1/model.ckpt.
Hugging Face repository link: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
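Before launching a run, it is worth verifying the checkpoint landed in the right place (a common stumbling block is forgetting to rename the downloaded file). A small sketch using the path from this guide:

```python
from pathlib import Path

# Path expected by the guide; sd-v1-4.ckpt must be saved as model.ckpt here.
ckpt = Path("models/ldm/stable-diffusion-v1/model.ckpt")
if ckpt.is_file():
    print(f"checkpoint found ({ckpt.stat().st_size / 1024**3:.2f} GB)")
else:
    print(f"checkpoint missing -- save sd-v1-4.ckpt as {ckpt}")
```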
Once everything is ready, Stable Diffusion can be run:
python scripts/txt2img.py \
  --prompt "a red juicy apple floating in outer space, like a planet" \
  --n_samples 1 --n_iter 1 --plms
The output will be saved in the outputs/txt2img-samples/ directory, looking like this:
[Generated image: "a red juicy apple floating in outer space, like a planet"]
The whole process takes about 15 seconds to generate a 512×512 image.
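Once a run finishes, you can list what was produced with a short stdlib sketch, using the output directory named above:

```python
from pathlib import Path

# txt2img.py writes its results under this directory (per the guide).
outdir = Path("outputs/txt2img-samples")
images = sorted(outdir.rglob("*.png")) if outdir.is_dir() else []
print(f"{len(images)} generated image(s)")
for img in images:
    print(" ", img)
```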
Some links:
Run Stable Diffusion in the cloud: https://replicate.com/blog/run-stable-diffusion-with-an-api
Stable Diffusion GitHub: https://github.com/magnusviri/stable-diffusion
This article’s reference link: https://replicate.com/blog/run-stable-diffusion-on-m1-mac

About Us

The MLNLP community is a grassroots academic community built by NLP scholars from China and around the world. It has grown into a well-known natural language processing community, with brands including the 10,000-Person Top Conference Group, AI Selection, MLNLP Talent Exchange, and AI Academic Exchange, and aims to promote progress among academia and industry in machine learning and natural language processing.
The community provides an open platform for practitioners in further education, employment, and research. Everyone is welcome to follow and join us.

