Constant high CPU when getting above a certain number of users #1086

Open
Frank-GER opened this issue May 13, 2024 · 3 comments

Comments

@Frank-GER

Is there a way to tune Blockbook/RocksDB to better handle a larger number of users?
I tried increasing the dbcache (cache size 2147483648, max open files 16384), but haven't touched the workers yet (mempool: starting with 8*2 sync workers).
The most used requests (>95%) are GetXpubAddress.
Do I need to increase the cache and/or the workers to better handle that load?
Disk throughput isn't critical, but hints on holding more of the data indexes in a memory cache might still improve the situation.

@martinboehm
Contributor

Hi. Tuning the workers (blockchain + mempool sync) will not help you increase the user throughput; they are more or less independent of the number of served users, as they only sync Blockbook to the backend.

The main load of the GetXpubAddress can be split into two parts:

  1. generation of addresses from xpub - CPU intensive
  2. looking up the transactions in the DB for the given addresses - CPU and disk intensive.

In general, I do not think there is much more to tune, unfortunately. The cache size would only help if disk throughput were the limiting factor.
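To make part 1 concrete, here is a minimal sketch of what deriving receive addresses from an xpub involves, using the btcsuite hdkeychain package. This is not Blockbook's actual code (Blockbook handles gap limits, other address types and caching), but the elliptic-curve child-key derivation shown here is the CPU-heavy core of that step:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/btcsuite/btcd/btcutil/hdkeychain"
	"github.com/btcsuite/btcd/chaincfg"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatal("usage: derive <xpub>")
	}

	key, err := hdkeychain.NewKeyFromString(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}

	// External (receive) chain: derive m/0/i for the first 20 indexes.
	external, err := key.Derive(0)
	if err != nil {
		log.Fatal(err)
	}
	for i := uint32(0); i < 20; i++ {
		child, err := external.Derive(i)
		if err != nil {
			continue // skip the rare invalid child index
		}
		// P2PKH for simplicity; real xpub descriptors may map to other address types.
		addr, err := child.Address(&chaincfg.MainNetParams)
		if err != nil {
			continue
		}
		fmt.Println(addr.EncodeAddress())
	}
}
```

Each derived address then needs the DB lookups from part 2, which is where the disk-side cost comes in.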

@Frank-GER
Author

Understood.
After a certain time, response times increase to the point where even a GetSystemInfo call can take a few seconds.

There is another point I discovered while testing for a solution:
Even after I switch the users to a different server, the CPU load on the first server stays at 50% or more, all of it from the blockbook service. Only a restart of the service brings it back to normal (a few %).

@martinboehm
Contributor

@Frank-GER Interesting. Could you please try to identify the issue using profiling?

You can add profiling to Blockbook by adding the flag -prof=127.0.0.1:8335 and restarting.
Let it run until it is in the problematic state and then connect to it using the Go profiler: go tool pprof -http=:8336 "http://localhost:8335/debug/pprof/profile?seconds=10". On port 8336 there will then be a page with the profiling info, including a very good flame graph. The profiling in my example is set up as if you ran everything locally; you will have to sort out the networking based on your setup.
I usually resolve the networking with SSH tunnelling, like ssh -L 8335:localhost:8335 <server>, which makes the remote port behave as if it were open locally on my computer.
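For context on what that flag exposes, here is a minimal standalone sketch of the standard net/http/pprof pattern that the /debug/pprof URL above relies on. The assumption is that Blockbook's -prof flag wires up something equivalent; the sketch only illustrates the mechanism, not Blockbook's code:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the profiling endpoints on the same loopback address used with
	// -prof above; go tool pprof can then fetch
	// http://localhost:8335/debug/pprof/profile?seconds=10 from it.
	log.Fatal(http.ListenAndServe("127.0.0.1:8335", nil))
}
```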
