What would you like to see?
In chat, there should be an agent that always refines the user input and structures a plan of execution for it, producing an advanced prompt to be sent to the LLM. That way, the LLM always receives a detailed, well-structured prompt instead of casual user input, and a better prompt yields better output. Since agents can be powered by different models, this fits naturally.
I'm pretty sure everyone is okay with a slightly slower response due to the extra step on each turn; they will happily accept it as long as the outputs are on point.
Concept:
User-input ---> PEagent ---> chat-llm ---> response
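The pipeline above could be sketched roughly like this. Note this is only an illustrative sketch: `refine_model` and `chat_model` are hypothetical stubs standing in for whatever backends power the PEagent and chat stages; a real implementation would swap in actual model calls.

```python
def refine_model(user_input: str) -> str:
    # Hypothetical "PEagent" backend: wraps the raw input in a
    # structured, detailed prompt with an execution plan.
    return (
        "Task: respond to the user request below.\n"
        "Plan: 1) restate the goal, 2) outline steps, 3) answer in detail.\n"
        f"User request: {user_input}"
    )

def chat_model(prompt: str) -> str:
    # Hypothetical chat-LLM backend; here it just echoes its prompt
    # so the pipeline is runnable without a real model.
    return f"[chat-llm response to]\n{prompt}"

def chat(user_input: str) -> str:
    # User-input ---> PEagent ---> chat-llm ---> response
    advanced_prompt = refine_model(user_input)  # PEagent stage
    return chat_model(advanced_prompt)          # chat-llm stage

print(chat("summarize this repo"))
```

The key point is that `chat_model` never sees the casual user input directly, only the refined prompt, and each stage can be bound to a different model.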