Hello y’all, I was using this guide to try to set up LLaMA again on my machine. I was sure I was following the instructions to the letter, but when I get to the part where I need to run setup_cuda.py install I get this error:
File "C:\Users\Mike\miniconda3\Lib\site-packages\torch\utils\cpp_extension.py", line 2419, in _join_cuda_home raise OSError('CUDA_HOME environment variable is not set. ' OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root. (base) PS C:\Users\Mike\text-generation-webui\repositories\GPTQ-for-LLaMa>
I’m not a huge coder yet, so I tried using setx to set CUDA_HOME to a few different places, but each time echo %CUDA_HOME% doesn’t come up with the address, so I assume it failed, and I still can’t run setup_cuda.py.
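For reference, this is roughly what I ran (the path is just one of the guesses I tried, so it may well be pointing at the wrong folder):

setx CUDA_HOME "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1"
echo %CUDA_HOME%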
Anyone have any idea what I’m doing wrong?
Since you are using Windows, you can try setting CUDA_HOME to point to your CUDA installation folder through the “Edit Environment Variables” window. However, this guide seems pretty convoluted. I would recommend using one of the many LLaMA models people have already compiled and shared on HuggingFace.
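If you want to check or set it from the terminal instead, something along these lines should work in PowerShell; the path below is just an example, so point it at whatever version folder your CUDA toolkit actually installed to:

# Set CUDA_HOME for the current PowerShell session only (example path, adjust the version)
$env:CUDA_HOME = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1"

# Verify it's visible from this shell
echo $env:CUDA_HOME

Two things to keep in mind: setx (and the Edit Environment Variables window) only take effect in newly opened terminals, and the %CUDA_HOME% syntax doesn’t expand in PowerShell at all, which is probably why your echo check came up empty.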
I think I still have the one I downloaded back when you needed to get approved by Meta to download it. I was really just looking for the guide on how to actually start the thing; I’m so used to using a GUI that I guess I didn’t realize I was actually building the damn thing lol