
[Bug]: Isn't the LaTeX file splitting for translation too fine-grained? #1808

Open
whz-pku opened this issue May 18, 2024 · 1 comment

Comments


whz-pku commented May 18, 2024

Installation Method

OneKeyInstall (one-click install script, Windows)

Version

Latest

OS

Windows

Describe the bug

I noticed that after the LaTeX source is split, each segment sent for translation is only around 100 tokens. That seems to leave GPT's capacity almost entirely unused. How exactly can I change this?

Screen Shot

(screenshot attached)

Terminal Traceback & Material to Help Reproduce Bugs (terminal traceback, if any, plus sample material to help us reproduce, if any)

No response

@dz306271098

Models like GPT-4o now generally have a 128K context window, so translating LaTeX basically no longer requires splitting it into so many segments. I suggest exposing the split-length parameter.
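The suggestion above amounts to packing paragraphs into chunks up to a configurable token budget instead of a small fixed one. A minimal sketch of what such a splitter could look like follows; the function names, the blank-line paragraph heuristic, and the rough 4-characters-per-token estimate are all assumptions for illustration, not gpt_academic's actual implementation.

```python
# Hypothetical configurable LaTeX splitter (NOT gpt_academic's real code).
# Splits on blank lines and greedily packs paragraphs up to a token budget.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for mixed LaTeX/English.
    return max(1, len(text) // 4)

def split_latex(source: str, limit_tokens: int = 6000) -> list[str]:
    """Split LaTeX source on blank lines (paragraph boundaries) and pack
    consecutive paragraphs into chunks of at most limit_tokens each."""
    paragraphs = [p for p in source.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for para in paragraphs:
        cost = estimate_tokens(para)
        # Flush the current chunk when adding this paragraph would exceed
        # the budget (but never emit an empty chunk).
        if current and current_tokens + cost > limit_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

With `limit_tokens` exposed as a user-facing setting, a 128K-context model could receive a handful of large chunks instead of hundreds of ~100-token fragments, while smaller models could keep a conservative budget.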
