How to run it on a Mac?
Dear Ferret-UI-Llama8b repository authors,
I am writing to inquire about running the model_UI code on a Mac computer in a CPU-only environment. I have reviewed the code and made some modifications to ensure it can run without CUDA support, but I would appreciate your guidance on the best approach.
Specifically, I have made the following changes to the code:
- Disabled CUDA support by setting `torch.cuda.is_available = lambda: False` and setting the `CUDA_VISIBLE_DEVICES` and `TORCH_DEVICE` environment variables.
- Set the `data_type` argument to use either `torch.float16`, `torch.bfloat16`, or `torch.float32` depending on the user's preference, in order to leverage mixed precision on the CPU.
- Modified the image preprocessing function to use a custom `image_process_func` that resizes the images without center cropping, as the original code assumes CUDA availability.
- Ensured that any region masks are converted to the appropriate data type before being used in the model.
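For reference, the changes above amount to something like the following sketch. The helper name `image_process_func` is from my patch; the 336px target size and the dtype flag names are placeholders, not values from the repository:

```python
import os

# Hide all GPUs before torch is imported so every tensor lands on CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch

# Belt and braces: make torch report no CUDA device even if one exists.
torch.cuda.is_available = lambda: False

# float32 is the safe default on CPU; float16 matmuls are unsupported or
# very slow on many CPU backends, and bfloat16 needs a recent CPU.
DTYPE_MAP = {
    "fp32": torch.float32,
    "bf16": torch.bfloat16,
    "fp16": torch.float16,
}
data_type = DTYPE_MAP["fp32"]
device = torch.device("cpu")


def image_process_func(image_tensor: torch.Tensor, size: int = 336) -> torch.Tensor:
    """Resize a CHW image tensor to size x size without center cropping."""
    return torch.nn.functional.interpolate(
        image_tensor.unsqueeze(0),  # interpolate expects a batch dim
        size=(size, size),
        mode="bilinear",
        align_corners=False,
    ).squeeze(0)


# Region masks must match the model's dtype and device before use.
region_mask = torch.ones(1, 336, 336).to(device=device, dtype=data_type)
```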
These changes should allow the model_UI code to run on a Mac in a CPU-only environment. However, I would appreciate it if you could provide any additional guidance or considerations for running the code in this configuration. For example, are there any specific requirements or recommendations for the CPU hardware, or any other optimizations that could improve performance on a CPU-only system?
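Concretely, the kind of CPU-side tuning I have in mind looks like this (a generic sketch, not repo-specific: pinning PyTorch's intra-op thread pool and running under `inference_mode()` to skip autograd bookkeeping):

```python
import os

import torch

# Pin the intra-op thread pool; os.cpu_count() is a rough placeholder
# for the machine's physical core count.
torch.set_num_threads(os.cpu_count() or 1)

# inference_mode() disables gradient tracking entirely, which saves
# memory and time during generation on CPU.
with torch.inference_mode():
    layer = torch.nn.Linear(16, 16)  # stand-in for the real model
    out = layer(torch.randn(2, 16))
```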
Thank you in advance for your assistance. I look forward to your response and to continuing to work with your excellent Ferret-UI-Llama8b project.
Best regards,
chenliangjing
Sorry for the late reply, @chenliangjing; this issue slipped past me. I usually reply faster if I’m tagged directly. :)
Sure! The hardware requirements are similar to those of Llama 8B, which are well documented. If you’re referring to the Gemma-based models, their requirements are likewise similar to Gemma 2B's.
Hope this helps!