
Reverse proxy support #238

Open
subnix opened this issue Mar 20, 2024 · 4 comments
Labels
- Niche: Niche ideas for stuff that may be of interest to a very small subset of users, if at all
- suggestion: New features to think about
- Unlikely To Be Supported Anytime Soon: Suggestions that may not be implemented anytime soon

Comments

subnix commented Mar 20, 2024

A reverse proxy (e.g. nginx) usually imposes limits on keep-alive connections, such as a maximum request count and connection age. Once these limits are reached, the proxy sends the last response with the header Connection: close and closes the connection.
Fulcrum doesn't handle this behavior:

<BitcoinD.1> Unsupported "Connection: close" header field in response
<BitcoinD.1> TCP BitcoinD.1 (id: 2) NODE:8332: error 1 (The remote host closed the connection)
<BitcoinD.1> Lost connection to bitcoind, will retry every 5 seconds ...
<SynchMempool> 144292: FAIL: bitcoind connection lost
<Controller> Failed to synch blocks and/or mempool

Running a blockchain node behind a reverse proxy is essential for fault tolerance, so full support for connection-control headers would be greatly appreciated.
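
For context, here is a minimal sketch of the kind of setup involved. The listen port and upstream name are hypothetical; keepalive_requests and keepalive_timeout are shown at their nginx defaults, and they are what eventually produce the Connection: close:

```nginx
# Hypothetical reverse proxy in front of bitcoind's JSON-RPC port.
upstream bitcoind_rpc {
    server 127.0.0.1:8332;
    keepalive 16;                 # idle connections kept open to bitcoind
}

server {
    listen 8331;                  # Fulcrum connects here instead of directly to bitcoind
    keepalive_requests 1000;      # default: close the client connection after 1000 requests
    keepalive_timeout  75s;       # default: close idle client connections

    location / {
        proxy_pass http://bitcoind_rpc;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```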

cculianu commented Mar 21, 2024

I understand, but Fulcrum is 100% a volunteer project and this seems like a very niche requirement. It would take time and dev effort just to set up a test server to test this esoteric setup. And frankly, I don't see any advantage to this configuration.

Can't you just remove those limits from nginx? What's it doing deciding these things anyway for this app? There is literally no advantage in forcing Fulcrum to reconnect to bitcoind after some amount of time or some request count. Just remove the limit.
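
If nginx must stay in the path, raising the limits is a two-line change. A sketch; these are real nginx directives, but the values are arbitrarily large examples:

```nginx
# Raise (not remove) the per-connection limits on the server Fulcrum talks to.
keepalive_requests 1000000;
keepalive_timeout  24h;
```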

I have no plans to support this because it requires not only dev time but testing and other assorted hassles, and it seems unnecessary from a functional perspective. You gain nothing by configuring nginx in this way. All you are asking is for someone else to support some esoteric thing, and nothing is gained for the world by it.

cculianu added the suggestion label Mar 21, 2024

cculianu commented Mar 21, 2024

Anyway, thanks for alerting me to this issue. Now I know Fulcrum is ignoring that header, and I know that if I have lots of free time and nothing to do with it, there's a small task to work on to get that working... but the gist of what I'm saying is that I won't necessarily rush to do this anytime soon.

Thanks for reporting, however.

cculianu commented Mar 21, 2024

One last comment: the reason I'm being so "difficult" about this is that an assumption in Fulcrum is that the BitcoinD connections are always immediately available. It's sort of an app invariant. If we accept that the connections can go away at any time, then from a functional perspective they introduce some unnecessary delays, and I also have to write a bunch of code to handle that cleanly and quickly, since a previous app assumption has changed and now needs to be handled explicitly.

So I would question the wisdom of even configuring nginx this way; it negatively impacts performance for no good reason, basically. Every time it drops a connection, you have to reconnect, which is just wasted CPU cycles and wasted network packets (even if they are local, it's still a waste). Everything upstream of this (such as the internet clients asking for info from bitcoind) now has to wait while the connection is re-established. Even if the delay is 10 msec or 100 msec or whatever, it's still an unnecessary delay.

So, again, this raises the question: WHY configure nginx in this way? There is literally no advantage to it, only downsides. nginx has no business inserting itself between a server-side middleware app like Fulcrum and another server-side daemon like bitcoind. What does it know? It's not its concern...

And yeah, it would mean I have to write more code than just handling the header since invariants are now changed...
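
To be concrete about what "handling the header" would involve, here is a rough sketch in Python (Fulcrum itself is not written in Python, and the class, host, and port below are hypothetical) of a persistent JSON-RPC client that honors Connection: close by reconnecting instead of treating the close as fatal:

```python
import http.client
import json

class RpcClient:
    """Hypothetical sketch of a persistent JSON-RPC client; not Fulcrum's code."""

    def __init__(self, host="127.0.0.1", port=8332):
        self.host, self.port = host, port
        self.conn = None

    def _connect(self):
        self.conn = http.client.HTTPConnection(self.host, self.port)

    def call(self, method, params=None):
        if self.conn is None:
            self._connect()
        body = json.dumps({"id": 0, "method": method, "params": params or []})
        self.conn.request("POST", "/", body,
                          {"Content-Type": "application/json"})
        resp = self.conn.getresponse()
        data = resp.read()
        # The server announced it will close this connection: drop it now,
        # so the next call() transparently reconnects instead of erroring out.
        if resp.getheader("Connection", "").lower() == "close":
            self.conn.close()
            self.conn = None
        return json.loads(data)
```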

cculianu added the Niche and Unlikely To Be Supported Anytime Soon labels Mar 21, 2024

subnix commented Mar 21, 2024

Can't you just remove those limits from nginx?
What's it doing deciding these things anyway for this app?
Just remove the limit.
So I would question the wisdom of even configuring nginx this way
WHY configure nginx in this way?

These limits are enforced by nginx itself, not me. The default request count per keep-alive connection (keepalive_requests) is 1000. nginx's documentation explains the reason:

Closing connections periodically is necessary to free per-connection memory allocations. Therefore, using too high maximum number of requests could result in excessive memory usage and not recommended.

Of course, the limit can be raised (not removed), but I think it'll never be enough for Fulcrum. It produces tens of thousands of requests and distributes them uniformly among a connection pool, so all the connections will be closed nearly simultaneously regardless of the limit value. This leaves an empty connection pool. The same thing happens with any kind of connection TTL.
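
To illustrate the "nearly simultaneously" point, here is a tiny Python simulation (the pool size and limit are arbitrary example values) of round-robin dispatch over a connection pool:

```python
# With requests distributed uniformly over a pool, every connection
# reaches keepalive_requests within one round-robin pass of the others,
# so the whole pool empties at almost the same instant.
POOL_SIZE = 3              # arbitrary example
KEEPALIVE_REQUESTS = 1000  # nginx default

counts = [0] * POOL_SIZE
closed_at = {}             # connection index -> request number at close

for request_no in range(1, POOL_SIZE * KEEPALIVE_REQUESTS + 1):
    conn = (request_no - 1) % POOL_SIZE   # round-robin selection
    counts[conn] += 1
    if counts[conn] == KEEPALIVE_REQUESTS:
        closed_at[conn] = request_no

print(closed_at)  # {0: 2998, 1: 2999, 2: 3000}: all close within 3 requests
```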

If we accept that the connections can go away at any time

According to HTTP 1.1 standard (RFC 9112):

Connections can be closed at any time, with or without intention. Implementations ought to anticipate the need to recover from asynchronous close events. The conditions under which a client can automatically retry a sequence of outstanding requests are defined in Section 9.2.2 of HTTP.
A client, server, or proxy MAY close the transport connection at any time.
A server SHOULD sustain persistent connections, when possible

So, from the client's point of view, the connection may be closed at any time without any particular reason.
It looks like you are relying on the behavior of a few concrete HTTP server implementations (Hyrum's law?), but that assumption doesn't hold for the whole HTTP world, which is mostly built on nginx. A closed connection, especially a gracefully closed one, doesn't mean the server is unavailable.
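
As a sketch of the recovery the RFC calls for (the function name and retry policy here are hypothetical; automatically retrying is only safe for requests that are idempotent, per Section 9.2.2):

```python
import http.client

def call_with_retry(host, port, body, retries=1):
    """Reconnect and retry once after an asynchronous close, instead of
    declaring the server lost. Sketch only: opens a fresh connection per
    attempt rather than pooling."""
    last_err = None
    for _ in range(retries + 1):
        conn = http.client.HTTPConnection(host, port)
        try:
            conn.request("POST", "/", body,
                         {"Content-Type": "application/json"})
            return conn.getresponse().read()
        except (http.client.RemoteDisconnected, ConnectionResetError) as e:
            last_err = e   # peer closed mid-request; try again
            conn.close()
    raise last_err
```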

It would take time and dev effort just to set up a test server to test this esoteric setup
I have no plans to support this
I won't necessarily rush to do this anytime soon
I have to write a bunch of code to handle that cleanly and quickly
it would mean I have to write more code

Anyway, I got you. Thank you for your time!
