Issues: pytorch/xla
A large number of Tensors (>8000) in the graph will trigger an SPMD sharding error
#7161 · opened May 31, 2024 by mars1248
torch.matmul output buffer dtype is not respected when output dtype is different from input dtype
#7160 · opened May 30, 2024 by HahTK
Setting FrontEnd attributes for CC ops replica groups in the HLO
#7139 · opened May 29, 2024 by amithrm
Saving checkpoint silently hangs when including nn.Module in params
#7123 · opened May 28, 2024 by dead-water
Why does my 3-layer linear graph need to output two Transposes?
#7103 · opened May 23, 2024 by mars1248
[torchbench] timm_nfnet training failing on non-dynamo. [label: xla:gpu]
#7084 · opened May 20, 2024 by ysiraichi
Mismatch between XLA Tensor and PyTorch Native Tensor Results for torch.matmul in FP16 Precision on NVIDIA GPU
#7077 · opened May 17, 2024 by lausannel
Export nn.Module.forward with kwargs to StableHLO [label: stablehlo (StableHLO related work)]
#7056 · opened May 13, 2024 by johnmatter
The behavior of torch.einsum significantly differs between TPU and other devices.
#7050 · opened May 13, 2024 by jqhoogland
[torchbench] The official benchmark for performance and accuracy check
#7040 · opened May 9, 2024 by shenh10
Migrate PyTorch/XLA's gradient checkpointing to upstream one [label: nostale (Do not consider for staleness)]
#7024 · opened May 3, 2024 by JackCaoG
Encountering out-of-memory errors despite using modest model and batch sizes.
#6948 · opened Apr 20, 2024 by seanswyi