
Prompt engineer agent #1415

Open
rezzie-rich opened this issue May 16, 2024 · 0 comments
Labels: enhancement, feature request

Comments

rezzie-rich commented May 16, 2024

What would you like to see?

In chat, there should be an agent that always refines the user input and structures a plan of execution for it, producing an advanced prompt that is then sent to the LLM. That way the LLM always receives a detailed, well-structured prompt instead of a casual user message, and a better prompt results in better output. Since agents can be powered by different models, this fits perfectly.

I'm pretty sure everyone is okay with a slightly slower response due to the extra step on each turn; people will happily accept it as long as the outputs are on point.

Concept:

User-input ---> PEagent ---> chat-llm ---> response
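
A minimal sketch of what this pipeline could look like, assuming an OpenAI-compatible chat API. The model names, system prompt, and the `refine_prompt` / `chat` helpers are illustrative assumptions, not part of any existing implementation:

```python
# Sketch of the proposed pipeline: user input -> PE agent -> chat LLM -> response.
# Model names and prompts below are placeholders, not an existing API of this project.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PE_SYSTEM_PROMPT = (
    "You are a prompt engineer. Rewrite the user's message into a detailed, "
    "well-structured prompt with an explicit plan of execution. "
    "Return only the rewritten prompt."
)

def refine_prompt(user_input: str) -> str:
    """PE agent step: turn a casual user message into an advanced prompt."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # the PE agent could run on a smaller/cheaper model
        messages=[
            {"role": "system", "content": PE_SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ],
    )
    return result.choices[0].message.content

def chat(user_input: str) -> str:
    """Full pipeline: the chat LLM only ever sees the refined prompt."""
    advanced_prompt = refine_prompt(user_input)
    result = client.chat.completions.create(
        model="gpt-4o",  # main chat model
        messages=[{"role": "user", "content": advanced_prompt}],
    )
    return result.choices[0].message.content

if __name__ == "__main__":
    print(chat("help me write a blog post about solar panels"))
```

Because the PE agent and the chat model are separate calls, the refinement step can be pointed at a different (and cheaper) model than the main chat model, which is the point of letting agents be powered by different models.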

rezzie-rich added the enhancement and feature request labels on May 16, 2024