Replies: 1 comment
Bump
GPUs: 7900 XTX (primary) and 9700 AI Pro
I'm currently running ROCm on WSL2 with Ubuntu 22.04. ROCm detects both GPUs, and I was able to validate this with PyTorch as well.
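For reference, a minimal sketch of that kind of check (assuming a ROCm build of PyTorch, which exposes HIP devices through the `torch.cuda` API):

```python
import torch

# ROCm builds of PyTorch report HIP devices via the torch.cuda API.
print("HIP available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  device {i}: {torch.cuda.get_device_name(i)}")
```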
Issue: llama-server runs fine on my 7900 XTX, which is my primary GPU, but it fails on the 9700 AI Pro when the model is being loaded into VRAM, with the following error:
Example Scenario:
Here's what I've tried so far, but I haven't had much luck resolving the issue:
Any help in resolving this would be appreciated.
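One more data point that might help narrow this down: a per-device VRAM allocation smoke test outside llama.cpp. This is only a sketch under the same assumption (a ROCm build of PyTorch); the 4096x4096 size is arbitrary. If the second GPU also fails here, the problem is likely at the ROCm/WSL2 level rather than in llama-server.

```python
import torch

# Try a real allocation and a kernel launch on each visible HIP device.
for i in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_name(i)
    try:
        with torch.cuda.device(i):
            x = torch.randn(4096, 4096, device=f"cuda:{i}")  # ~64 MiB fp32
            y = x @ x                                        # force a kernel launch
            torch.cuda.synchronize(i)
        print(f"device {i} ({name}): OK")
    except RuntimeError as e:
        print(f"device {i} ({name}): FAILED -> {e}")
```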