[PyTorch-XPU] NotImplementedError: No registered fallback function for aten::view #631
https://github.com/uniartisan/RWKV_Pytorch/blob/dev/train/train-test.py#L51 Some additional information: in the code I implemented automatic detection of the XPU device, so when reproducing you can change the device manually or leave it as 'cpu'. If you change the opset here to 16, the issue above is not triggered (yes, I wrote separate implementations of the same model for different opsets). This should help narrow down which specific operation causes the problem.
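The automatic device detection mentioned above can be sketched roughly as follows (a minimal sketch, assuming a PyTorch build that exposes `torch.xpu`; the exact logic in train-test.py may differ):

```python
import torch

def pick_device() -> torch.device:
    """Pick the best available device, preferring XPU (Intel GPU),
    then CUDA, then CPU. Hypothetical helper illustrating the
    detection the reporter describes."""
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
print(device)
```

With such a helper, leaving the device as 'cpu' to reproduce the issue just means bypassing `pick_device()` and hard-coding `torch.device("cpu")`.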
Thanks for reporting this. I will try to reproduce the issue and get back to you later.
@uniartisan We have reproduced the error you met. Will get back to you after root causing. Thanks.
@uniartisan This NotImplementedError has been root caused and we have fixed the bug in the LayerNorm layer. It works for both opset=16 and opset=18 in training RWKV now. We will have a release including this fix soon.
Thank you for your efforts. I will try the new release as soon as it is available.
Hi @uniartisan, this LayerNorm issue has been fixed with commit 97b37e2 in branch
It has been fixed! :) |
Describe the bug
https://github.com/uniartisan/RWKV_Pytorch/blob/dev/train/train-test.py
I trained with the code above; it runs normally on both CPU and CUDA, but fails on XPU with `NotImplementedError: No registered fallback function for aten::view`.
To reproduce the problem, you can use the following steps:
demo.zip
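The failing pattern can be distilled to a LayerNorm followed by a `view` of its output (a minimal sketch with assumed shapes, not the actual train-test.py code; on an affected intel-extension-for-pytorch build this raised the `aten::view` fallback error, while on CPU and CUDA it runs fine):

```python
import torch
import torch.nn as nn

# Change to torch.device("xpu") on an Intel GPU build to reproduce.
device = torch.device("cpu")

ln = nn.LayerNorm(64).to(device)
x = torch.randn(2, 8, 64, device=device, requires_grad=True)

y = ln(x).view(2, 8, 8, 8)  # view applied to the LayerNorm output
y.sum().backward()          # backward pass, as exercised during training
print(x.grad.shape)
```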
Versions