
Make use of the seed parameter in the LLM-based evaluators using the OpenAI API #7713

Closed
davidsbatista opened this issue May 17, 2024 · 0 comments · Fixed by #7720
Assignees: davidsbatista
Labels: 2.x (Related to Haystack v2.0), P1 (High priority, add to the next sprint), topic:eval

Comments

@davidsbatista (Contributor)

Is your feature request related to a problem? Please describe.
The LLM-based evaluators are non-deterministic, which makes it difficult to replicate and compare evaluation results.

Describe the solution you'd like
One possible solution is to set the seed parameter when calling the OpenAI-based LLMs: https://platform.openai.com/docs/api-reference/chat/create

  • seed (integer or null): This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
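
As a minimal sketch of how this could look with the OpenAI Python client (the model name, prompt, and seed value are illustrative; even with a fixed seed, determinism is only best-effort per the docs):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Rate the faithfulness of this answer ..."}],
    seed=42,          # best-effort deterministic sampling (Beta)
    temperature=0.0,  # pairing a fixed seed with temperature 0 further reduces variance
)

print(response.choices[0].message.content)
# system_fingerprint identifies the backend configuration; if it changes between
# runs, responses may differ even with the same seed and parameters.
print(response.system_fingerprint)
```

In the evaluators, the seed would presumably be forwarded through the generation kwargs alongside the other OpenAI parameters, so that two evaluation runs over the same inputs produce comparable scores.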
davidsbatista self-assigned this on May 17, 2024
shadeMe added the topic:eval and 2.x (Related to Haystack v2.0) labels on May 17, 2024
mrm1001 added the P1 (High priority, add to the next sprint) label on May 17, 2024