System Info

platform: Windows 10
optimum version: 1.19.2
transformers version: 4.40.2
onnx version: 1.16.0
onnxruntime version: 1.17.3

Who can help?

@amyeroberts @pacman100

Reproduction

I have trained the small-printed TrOCR model on my custom dataset of multiline images. The trained model reads the full text. However, after converting the model to ONNX, it recognizes only the first line (or part of the first line). I used this gist for inference: [https://github.com/huggingface/transformers/issues/19811#issuecomment-1303072202](https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39), and exported with:

optimum-cli export onnx -m {model_checkpoints} --task vision2seq-lm onnx/ --atol 1e-3

Expected behavior

It is unclear why the ONNX model recognizes only the first line of text (which it reads with almost no loss of quality).
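For reference, a minimal inference sketch against the exported model (assuming `optimum`'s `ORTModelForVision2Seq` class; the `onnx/` directory is the export output from the command above, and the image path is a placeholder). Forcing a larger generation budget with `max_new_tokens` can help rule out the exported decoder stopping early at a short default maximum length:

```python
from PIL import Image
from transformers import TrOCRProcessor
from optimum.onnxruntime import ORTModelForVision2Seq

# "onnx/" is the output directory of the optimum-cli export;
# "multiline_sample.png" is a placeholder for a multiline test image.
processor = TrOCRProcessor.from_pretrained("onnx/")
model = ORTModelForVision2Seq.from_pretrained("onnx/")

image = Image.open("multiline_sample.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Raise the generation budget explicitly, in case generation stops
# early because of a short default max length in the generation config.
generated_ids = model.generate(pixel_values, max_new_tokens=256)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```

This sketch requires the locally exported model artifacts to run, so treat it as an illustration of the call pattern rather than a drop-in script.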
Hi @feff2, thanks for raising an issue!
I'm transferring this issue to the optimum repo, as it seems this is more related to that library.
@amyeroberts , thanks!