Currently, the user's question is converted into embeddings, and matching entries from a database are loaded into the AI's context.
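As a rough illustration of that retrieval step, here is a minimal sketch. It stands in for a real embedding model with a toy bag-of-words vector and cosine similarity; the function names and the sample database are purely illustrative, not this project's actual API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words term vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, database: list[str], top_k: int = 1) -> list[str]:
    # Rank database entries by similarity to the question and keep the best.
    q = embed(question)
    ranked = sorted(database, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:top_k]

database = [
    "Company XY sells widgets and gadgets.",
    "Widgets are used for tightening bolts.",
]

# The first question shares terms with the product-list entry and matches well.
print(retrieve("What products does company XY sell?", database))
# A follow-up like "What is the first product in your answer used for?" shares
# few terms with the relevant entry, so retrieval can miss or rank it poorly.
```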
Here is an example conversation that consequently has an incorrect answer.
Q: What products does company XY sell?
A: <List of products>
Q: What is the first product in your answer used for?
The second question contains NO information about the product in the AI's answer, so either no relevant information is found or the WRONG information is loaded, and this leads to incorrect or incomplete answers.
A solution to this problem would be to provide the AI with a tool for performing independent searches. The results of such a search are added to the AI's context and highlighted, so that the LLM knows they came from its own search query.
The expectation is that this enhancement would significantly improve the quality of answers to such contextual follow-up questions.
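The proposal above could be sketched roughly as follows. This is a hypothetical outline, not the project's actual implementation: the tool name, the tagging format, and the trivial substring search are all placeholder assumptions standing in for whatever retrieval backend is already in place.

```python
# Hypothetical search tool the LLM could invoke itself; names are illustrative.
def search_tool(query: str, database: list[str]) -> list[str]:
    # Placeholder retrieval: a trivial case-insensitive substring match.
    # A real implementation would reuse the existing embedding search.
    return [doc for doc in database if query.lower() in doc.lower()]

def build_context(user_question: str, tool_query: str, database: list[str]) -> str:
    results = search_tool(tool_query, database)
    # Tag tool results so the LLM can distinguish them from the
    # passively retrieved context loaded from the user's question.
    tagged = "\n".join(f"[TOOL_SEARCH_RESULT] {doc}" for doc in results)
    return f"Question: {user_question}\n{tagged}"

database = [
    "Widgets are used for tightening bolts.",
    "Gadgets are used for measuring torque.",
]

context = build_context(
    "What is the first product in your answer used for?",
    "widgets",  # a query the model would issue on its own
    database,
)
print(context)
```

Because the model chooses the query itself, it can search for "widgets" directly instead of relying on the follow-up question's embedding, which contains no product name.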
What do you think of this suggestion?
PS: This project is excellent.
SamuelEnzi changed the title from "[FEAT]: Loading information using a tool to improve answers." to "[FEAT]: Loading additional information using a tool to improve answers." on May 13, 2024.