
docs: add docs for RateLimiter #120

Open

wants to merge 4 commits into main
Conversation


@xialeistudio commented Mar 7, 2024

Add a RateLimiter quickstart to docs.
#87

Co-authored-by: Dany Gagnon <ddanygagnon@gmail.com>
@mustafasegf

Hey, while this code compiles, it returns a 403 error when run because the Host header isn't set to one.one.one.one. It would be better for the example to also add:

    async fn upstream_request_filter(
        &self,
        _session: &mut Session,
        upstream_request: &mut RequestHeader,
        _ctx: &mut Self::CTX,
    ) -> Result<()> {
        upstream_request.insert_header("Host", "one.one.one.one").unwrap();
        Ok(())
    }

just like in https://github.com/cloudflare/pingora/blob/36e09ca389dac053948722a8ed24caa011495439/docs/quick_start.md

@xialeistudio
Author

> Hey, while this code compiles, it returns a 403 error when run because the Host header isn't set to one.one.one.one. It would be better for the example to also add:
>
>     async fn upstream_request_filter(
>         &self,
>         _session: &mut Session,
>         upstream_request: &mut RequestHeader,
>         _ctx: &mut Self::CTX,
>     ) -> Result<()> {
>         upstream_request.insert_header("Host", "one.one.one.one").unwrap();
>         Ok(())
>     }
>
> just like in https://github.com/cloudflare/pingora/blob/36e09ca389dac053948722a8ed24caa011495439/docs/quick_start.md

Thanks to your valuable reminder, I have promptly addressed the issue.

@eaufavor added the documentation (Improvements or additions to documentation) label on Mar 18, 2024
@johnhurt
Contributor

Nice. I'm marking this one as accepted. We will ingest it internally, and it should show up in our main branch (attributed correctly to you) in our next sync. Thanks!

@johnhurt added the Accepted (This change is accepted by us and merged to our internal repo) label on May 24, 2024
@xialeistudio
Author

> Nice. I'm marking this one as accepted. We will ingest it internally, and it should show up in our main branch (attributed correctly to you) in our next sync. Thanks!

Thanks!

    Some(limiter) => {
        limiter
    }
};
Contributor
@palant commented Jun 6, 2024

Please don’t modify the table in this way; the better way to implement a “get or insert” operation is:

    rate_limiter_map.entry(appid.clone()).or_insert_with(|| Rate::new(Duration::from_secs(1)))
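The get-or-insert pattern above can be sketched with the standard HashMap Entry API. This is an illustrative, self-contained example: the `Rate` struct here is a placeholder standing in for `pingora_limits::rate::Rate`, since only the lookup pattern matters.

```rust
use std::collections::HashMap;
use std::time::Duration;

// Placeholder for pingora_limits::rate::Rate (assumption: only the
// constructor matters for demonstrating the Entry API pattern).
struct Rate;

impl Rate {
    fn new(_window: Duration) -> Self {
        Rate
    }
}

// Get-or-insert: create the per-app limiter only when the key is missing,
// and return a mutable reference to the entry either way.
fn limiter_for<'a>(map: &'a mut HashMap<String, Rate>, appid: &str) -> &'a mut Rate {
    map.entry(appid.to_string())
        .or_insert_with(|| Rate::new(Duration::from_secs(1)))
}

fn main() {
    let mut map = HashMap::new();
    limiter_for(&mut map, "app-1");
    limiter_for(&mut map, "app-1"); // second call reuses the existing entry
    assert_eq!(map.len(), 1);
}
```

The closure passed to `or_insert_with` only runs on a miss, so the table is never overwritten for keys that already exist.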

        limiter
    }
};
rate_limiter.observe(&appid, 1)
Contributor

This actually makes little sense. If you do rate limiting by app id, you can just use the app id as the rate limiter key; there is no need for an extra table of rate limiter instances per app id (the rate limiter's own storage is more efficient than a HashMap). Alternatively, do rate limiting by IP address here.
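The point about keying one limiter directly can be sketched with a simplified fixed-window counter. `SimpleRate` below is an illustrative stand-in, not the `pingora_limits` implementation; it only mirrors the shape of `observe` to show that a single limiter instance can count any number of distinct keys.

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::time::{Duration, Instant};

// Simplified stand-in for a keyed rate limiter: one fixed-window counter
// that can key on any hashable value, so a single instance serves every
// appid (no per-app HashMap of limiter instances required).
struct SimpleRate<K: Eq + Hash + Clone> {
    window: Duration,
    started: Instant,
    counts: HashMap<K, isize>,
}

impl<K: Eq + Hash + Clone> SimpleRate<K> {
    fn new(window: Duration) -> Self {
        Self { window, started: Instant::now(), counts: HashMap::new() }
    }

    // Record `events` for `key` and return the running count in the
    // current window.
    fn observe(&mut self, key: &K, events: isize) -> isize {
        if self.started.elapsed() >= self.window {
            // New window: reset all counters.
            self.counts.clear();
            self.started = Instant::now();
        }
        let count = self.counts.entry(key.clone()).or_insert(0);
        *count += events;
        *count
    }
}

fn main() {
    let mut limiter = SimpleRate::new(Duration::from_secs(1));
    assert_eq!(limiter.observe(&"app-1", 1), 1);
    assert_eq!(limiter.observe(&"app-1", 1), 2);
    assert_eq!(limiter.observe(&"app-2", 1), 1); // separate key, same limiter
}
```

Because the key is part of each `observe` call, per-app and per-IP limits can share one limiter rather than one limiter per app.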

Author

Nice advice, I have modified the corresponding code.
Thank you!

Contributor

Having implemented rate limiting myself now, it’s rather unlikely that anybody would need more than one rate limiter. It’s built in a way that lets you count different data types with it. So you’d do, for example:

    let mut rate_limiter = RATE_LIMITER.lock().unwrap();
    let app_requests = rate_limiter.observe(&appid, 1);
    let ip_requests = rate_limiter.observe(&ip_addr, 1);
    if app_requests > MAX_REQ_PER_APP || ip_requests > MAX_REQ_PER_IP {
        // rate limited, return 429
    }

update rate_limiter_map initialize code

// global limiter
lazy_static! {
    static ref RATE_LIMITER_MAP: Arc<Mutex<HashMap<String, Rate>>> = {
Contributor

You don’t need both Arc and Mutex here; the latter is sufficient. A mutex only allows one reference at a time, so there’s no point in reference-counting it.
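A minimal sketch of a global map guarded by a Mutex alone. This example uses `std::sync::OnceLock` instead of `lazy_static` (an assumption for the sake of a dependency-free snippet; the pattern is the same), and a `u32` counter stands in for `Rate`.

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// A global map guarded by Mutex alone; no Arc is needed because a static
// already lives for the whole program and is reachable from any thread.
static RATE_LIMITER_MAP: OnceLock<Mutex<HashMap<String, u32>>> = OnceLock::new();

fn map() -> &'static Mutex<HashMap<String, u32>> {
    RATE_LIMITER_MAP.get_or_init(|| Mutex::new(HashMap::new()))
}

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| {
            std::thread::spawn(|| {
                // The Mutex hands out one exclusive guard at a time,
                // which is all the sharing Arc would otherwise provide.
                *map().lock().unwrap().entry("app-1".to_string()).or_insert(0) += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(map().lock().unwrap()["app-1"], 4);
}
```

Arc is only needed when ownership of the value itself must be shared between scopes with no `'static` home; a global already has one.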


    server.run_forever();
}

pub struct LB(Arc<LoadBalancer<RoundRobin>>);
Contributor

This isn’t very relevant for the docs, but I’m pretty certain that you don’t need reference counting here. LB(LoadBalancer<RoundRobin>) would be sufficient.
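The ownership point generalizes: a newtype can own its inner value directly and hand out borrows through `&self`. The types below are hypothetical stand-ins (`Balancer` is not the pingora `LoadBalancer`); they only illustrate the no-Arc newtype shape.

```rust
// Illustrative stand-in for a load balancer; the newtype owns it outright.
struct Balancer {
    upstreams: Vec<String>,
}

pub struct LB(Balancer);

impl LB {
    // Read-only access borrows through &self; no reference counting involved.
    fn first_upstream(&self) -> &str {
        &self.0.upstreams[0]
    }
}

fn main() {
    let lb = LB(Balancer { upstreams: vec!["1.1.1.1:443".to_string()] });
    assert_eq!(lb.first_upstream(), "1.1.1.1:443");
}
```

Arc would only become necessary if ownership of the inner value had to be shared across independently-living handles.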
