
Add control panel to allow managing multiple vLLM instances #4861

Open
wants to merge 4 commits into base: main

Conversation

@leiwen83 (Contributor) commented May 16, 2024

This PR imports the controller feature from FastChat, so that vLLM instances can run behind a controller while keeping the full, up-to-date feature set of the vLLM OpenAI-compatible server.

  • It inherits the core features of the original FastChat controller, such as the model registry, auto scaling, and rolling model updates.

  • It keeps the controller logic layer minimal: the controller simply forwards whatever request the user sends on to a vLLM OpenAI-compatible server worker.

  • It also tracks each worker's unfinished queue length and adjusts the request dispatch rate accordingly, keeping the whole cluster balanced and at its full serving capacity (see the sketch at the end of this description).

  • Currently, only v1/completions and v1/chat/completions are implemented, along with three other endpoints for checking controller status:

    • /list_workers: show the currently registered workers and their status
    • /health: whether the controller is functional
    • /list_models: all registered model names


FIX #4226
Since the vLLM OpenAI-compatible server is evolving rapidly, I think we should move the control panel feature we know from FastChat into vLLM itself, so we can enjoy the features FastChat brought us together with the latest vLLM functionality.
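
To make the intended flow concrete, here is a minimal sketch of the queue-length-based forwarding described above. This is illustrative only, not the code in this PR: it assumes a FastAPI app with httpx for forwarding, and the registration payload fields (worker_url, model) are hypothetical names.

```python
import httpx
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

# worker_url -> {"model": name, "queue": unfinished request count}
workers: dict[str, dict] = {}


@app.post("/register_worker")
async def register_worker(request: Request):
    body = await request.json()
    workers[body["worker_url"]] = {"model": body["model"], "queue": 0}
    return {"status": "ok"}


@app.get("/list_workers")
async def list_workers():
    # current registered workers and their status
    return workers


@app.get("/list_models")
async def list_models():
    # all registered model names
    return {"models": sorted({w["model"] for w in workers.values()})}


@app.get("/health")
async def health():
    # whether the controller itself is functional
    return {"healthy": True}


@app.post("/v1/chat/completions")
async def chat_completions(request: Request):
    if not workers:
        return JSONResponse({"error": "no workers registered"}, status_code=503)
    # pick the worker with the shortest unfinished queue, then forward the
    # request untouched to that worker's OpenAI-compatible server
    url = min(workers, key=lambda u: workers[u]["queue"])
    workers[url]["queue"] += 1
    try:
        async with httpx.AsyncClient() as client:
            resp = await client.post(f"{url}/v1/chat/completions",
                                     json=await request.json(), timeout=None)
        return JSONResponse(resp.json(), status_code=resp.status_code)
    finally:
        workers[url]["queue"] -= 1
```

A worker would register itself at startup via POST /register_worker; clients can then point any OpenAI-compatible client at the controller's /v1 endpoints as if it were a single vLLM server.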

@simon-mo (Collaborator)

Can you open an RFC for this for design discussion?

@robertgshaw2-neuralmagic (Collaborator)

It would be good to have an open discussion about this feature, as there is a real debate about what should be in or out of scope for vLLM.

@tdene commented May 17, 2024

Scope or not, there's no point in porting over FastChat's Python controller implementation; at scale it's literally 1000x slower than a day's worth of Rust code.

@leiwen83 (Contributor, Author)

@simon-mo @robertgshaw2-neuralmagic
RFC #4873 has been created.

@leiwen83 (Contributor, Author)

> Scope or not, there's no point in porting over FastChat's Python controller implementation; at scale it's literally 1000x slower than a day's worth of Rust code.

Yep, Rust could also be a choice. But since FastChat originally chose Python, as does the rest of vLLM, we can start with Python first; later, people can improve it if Rust turns out to have a performance benefit.

@tdene commented May 17, 2024

@leiwen83 I apologize, I did not mean to be rude. The idea is good; this type of controller is very useful.

But I have seen first-hand how incredibly slow FastAPI is at this particular job, even after you eliminate 90% of the current code's overhead through things like using persistent sockets instead of individual calls.
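
For illustration, the persistent-socket idea mentioned above could look roughly like this (a hypothetical sketch, assuming httpx; names are illustrative): reuse one long-lived client whose connection pool keeps worker connections open, instead of opening a fresh connection per forwarded request.

```python
import httpx

# Created once at startup; the connection pool keeps TCP connections to
# workers alive, so repeated forwards skip connection setup and TLS handshakes.
client = httpx.AsyncClient()

async def forward(worker_url: str, payload: dict) -> httpx.Response:
    # hypothetical helper: forward one chat completion request to a worker
    return await client.post(f"{worker_url}/v1/chat/completions",
                             json=payload, timeout=None)
```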

The user experience is bad. It will end up taking more dev time to deal with Issues than it would take to just make it performant from the beginning.
