[Feature] #1375

Open · MontDawgg opened this issue May 20, 2024 · 0 comments
[Screenshot: 2024-05-19_19h44_22]

Problem Description

Our chatbox currently encounters frequent issues with Google Gemini's safety settings, leading to erroneous rejections of mild content. This hampers the user experience and disrupts communication. The API often throws safety errors for messages that have negligible harmful content, such as simple greetings, which should not be flagged under categories like "HARM_CATEGORY_SEXUALLY_EXPLICIT" or similar. This overzealous filtering makes the software difficult and frustrating to use, as it falsely treats harmless messages as inappropriate.

Proposed Solution

Implement Google's safety parameters more faithfully to reduce erroneous rejections of benign content. Specifically, pass explicit `safetySettings` thresholds in the request so that mild, safe messages are not flagged incorrectly. This would make the chat experience more reliable and user-friendly by preventing unnecessary interruptions from overly cautious default settings.
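
For illustration, here is a minimal sketch of what passing relaxed thresholds could look like, assuming the client uses Google's official `@google/generative-ai` Node SDK. The model name, environment variable, and the choice of `BLOCK_ONLY_HIGH` are assumptions for the example, not this project's actual configuration:

```typescript
import {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} from "@google/generative-ai";

// The API key env variable name is a placeholder.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// Relax each category so only HIGH-probability content is blocked,
// instead of stricter settings that also block MEDIUM or LOW.
const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
  {
    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
  {
    category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
];

const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro", // placeholder model name
  safetySettings,
});

// A plain "Hello" should no longer trip the SEXUALLY_EXPLICIT filter.
const result = await model.generateContent("Hello");
console.log(result.response.text());
```

`BLOCK_ONLY_HIGH` still blocks genuinely high-risk content while letting negligible- and low-probability messages through; the thresholds could equally be exposed as a per-user setting.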

Additional Context

For reference, please see the attached screenshot that demonstrates a typical error message. This error occurred despite the message content being completely innocuous ("Hello"). The error message details are:

API Error:

```json
{
  "candidates": [
    {
      "finishReason": "SAFETY",
      "index": 0,
      "safetyRatings": [
        { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability": "HIGH" },
        { "category": "HARM_CATEGORY_HATE_SPEECH", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE" },
        { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "probability": "NEGLIGIBLE" }
      ]
    }
  ],
  "usageMetadata": { "promptTokenCount": 1670, "totalTokenCount": 1670 }
}
```
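
Even with relaxed thresholds, a response can still come back blocked, so the client could also surface which categories were actually flagged instead of dumping the raw JSON at the user. A sketch using the same SDK's exported types; `describeSafetyBlock` is a hypothetical helper, not an existing function in this codebase:

```typescript
import {
  FinishReason,
  HarmProbability,
  type GenerateContentResult,
} from "@google/generative-ai";

// Hypothetical helper: turn a SAFETY-blocked response into a short,
// readable message naming only the categories that were not NEGLIGIBLE.
function describeSafetyBlock(result: GenerateContentResult): string | null {
  const candidate = result.response.candidates?.[0];
  if (candidate?.finishReason !== FinishReason.SAFETY) return null;
  const flagged = (candidate.safetyRatings ?? [])
    .filter((rating) => rating.probability !== HarmProbability.NEGLIGIBLE)
    .map((rating) => `${rating.category}: ${rating.probability}`);
  return `Blocked by Gemini safety filters: ${flagged.join(", ")}`;
}
```

In the error above this would report only `HARM_CATEGORY_SEXUALLY_EXPLICIT: HIGH`, which makes the misclassification obvious at a glance.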
