Slow GC performance with frequent writes #280

Open
greenlava82 opened this issue Jul 28, 2021 · 0 comments

Hi - I have an application where I'm using SPIFFS to write data to a series of log files in append mode, 50 bytes at a time, at 10 Hz. Logs are rotated out when the files grow to 200 kB. I'm using a logical page size of 512 bytes, a logical block size of 128 kB (131072 bytes), read and write caching enabled, and a physical SPI flash size of 8 MB. SPIFFS_GC_MAX_RUNS is 2 and SPIFFS_COPY_BUFFER_STACK is 2048. I'm using an SST26 chip right now, but a Winbond W25Q shows similar performance.

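For reference, the mount configuration corresponding to those numbers looks roughly like this sketch (simplified; the HAL hook names and the fd/cache buffer sizes are placeholders, not my exact code):

#include "spiffs.h"

#define LOG_PAGE_SIZE   512
#define LOG_BLOCK_SIZE  (128 * 1024)
#define PHYS_FLASH_SIZE (8 * 1024 * 1024)

static spiffs fs;
static u8_t work_buf[LOG_PAGE_SIZE * 2];            /* spiffs wants 2 logical pages */
static u8_t fd_buf[32 * 4];                          /* room for a few file descriptors */
static u8_t cache_buf[(LOG_PAGE_SIZE + 32) * 4];     /* read/write cache */

/* Placeholders for the actual SPI flash HAL hooks */
extern s32_t my_spi_read(u32_t addr, u32_t size, u8_t *dst);
extern s32_t my_spi_write(u32_t addr, u32_t size, u8_t *src);
extern s32_t my_spi_erase(u32_t addr, u32_t size);

static s32_t mount_fs(void) {
  spiffs_config cfg;
  cfg.phys_size       = PHYS_FLASH_SIZE;  /* whole 8 MB part used for SPIFFS */
  cfg.phys_addr       = 0;                /* starting at the beginning of flash */
  cfg.phys_erase_block = 4096;            /* 4 kB sector erase on the SST26/W25Q */
  cfg.log_block_size  = LOG_BLOCK_SIZE;
  cfg.log_page_size   = LOG_PAGE_SIZE;
  cfg.hal_read_f      = my_spi_read;
  cfg.hal_write_f     = my_spi_write;
  cfg.hal_erase_f     = my_spi_erase;

  return SPIFFS_mount(&fs, &cfg, work_buf,
                      fd_buf, sizeof(fd_buf),
                      cache_buf, sizeof(cache_buf), 0);
}
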
As my file system starts to grow and become dirty, I'm noticing that garbage collection is taking a very long time, and I don't think it's caused by my HAL read/write/erase commands. It looks like spiffs_gc_clean() is spending a lot of time in its while loops, and as a result I can't keep up with my log writing once the file system grows to around 4 MB:

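If it helps with reproducing, GC can also be triggered manually with SPIFFS_gc(), which goes through the same spiffs_gc_clean() path that the write calls trigger internally. Something like this sketch (the byte count is just an example) shows the same behavior on a dirty file system:

#include <stdio.h>
#include "spiffs.h"

extern spiffs fs;  /* mounted elsewhere */

/* Ask SPIFFS to free roughly one logical block's worth of space up front,
   so the GC cost is paid here rather than inside the next SPIFFS_write(). */
static void force_gc(void) {
  s32_t res = SPIFFS_gc(&fs, 128 * 1024);
  if (res != SPIFFS_OK) {
    printf("GC failed: %d\n", (int)SPIFFS_errno(&fs));
  }
}
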
In the following test readout, I'm writing log entries back to back and reporting the time it takes to complete 100 write operations of 256 bytes each (including the supporting GC operations). My real logs are roughly 50 bytes each, but in the interest of making the test go faster, I used 256 bytes per write:

BytesWritten, 25600, bytes in , 4.58, seconds
BytesWritten, 51200, bytes in , 4.68, seconds
BytesWritten, 76800, bytes in , 5.27, seconds
BytesWritten, 102400, bytes in , 5.49, seconds
BytesWritten, 128000, bytes in , 5.57, seconds
BytesWritten, 153600, bytes in , 5.45, seconds
BytesWritten, 179200, bytes in , 5.53, seconds
BytesWritten, 204800, bytes in , 7.26, seconds
.... /* more of the same for a while */
....
BytesWritten, 2432000, bytes in , 4.62, seconds
BytesWritten, 2457600, bytes in , 4.78, seconds
BytesWritten, 2483200, bytes in , 6.42, seconds
BytesWritten, 2508800, bytes in , 6.24, seconds
BytesWritten, 2534400, bytes in , 5.52, seconds
BytesWritten, 2560000, bytes in , 7.59, seconds
BytesWritten, 2585600, bytes in , 5.57, seconds
BytesWritten, 2611200, bytes in , 10.45, seconds
BytesWritten, 2636800, bytes in , 9.17, seconds
BytesWritten, 2662400, bytes in , 9.12, seconds
... /* more of the same for a while */
...
BytesWritten, 4172800, bytes in , 9.80, seconds
BytesWritten, 4198400, bytes in , 9.80, seconds
BytesWritten, 4224000, bytes in , 11.32, seconds
BytesWritten, 4249600, bytes in , 9.64, seconds
BytesWritten, 4275200, bytes in , 11.36, seconds
BytesWritten, 4300800, bytes in , 11.93, seconds
BytesWritten, 4326400, bytes in , 17.12, seconds
BytesWritten, 4352000, bytes in , 11.21, seconds
BytesWritten, 4377600, bytes in , 10.92, seconds
BytesWritten, 4403200, bytes in , 19.68, seconds
BytesWritten, 4428800, bytes in , 10.66, seconds
BytesWritten, 4454400, bytes in , 10.60, seconds
BytesWritten, 4480000, bytes in , 17.66, seconds
... /* more of the same for a while */
...
BytesWritten, 6067200, bytes in , 22.26, seconds
BytesWritten, 6092800, bytes in , 30.55, seconds
BytesWritten, 6118400, bytes in , 30.77, seconds
BytesWritten, 6144000, bytes in , 31.80, seconds
BytesWritten, 6169600, bytes in , 38.70, seconds
BytesWritten, 6195200, bytes in , 32.71, seconds
BytesWritten, 6220800, bytes in , 30.56, seconds
... /* more of the same for a while */
...
BytesWritten, 6937600, bytes in , 93.27, seconds
BytesWritten, 6963200, bytes in , 86.35, seconds
BytesWritten, 6988800, bytes in , 87.06, seconds
BytesWritten, 7014400, bytes in , 91.20, seconds
BytesWritten, 7040000, bytes in , 101.00, seconds
BytesWritten, 7065600, bytes in , 139.14, seconds
BytesWritten, 7091200, bytes in , 156.13, seconds
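
For reference, the test loop producing this readout is roughly the following sketch (the file name, rotation logic, and millis() timing call are simplified placeholders):

#include <string.h>
#include <stdio.h>
#include "spiffs.h"

extern spiffs fs;            /* mounted elsewhere */
extern u32_t millis(void);   /* placeholder for the platform tick counter */

#define WRITE_SIZE        256
#define WRITES_PER_BATCH  100

static void log_write_benchmark(void) {
  u8_t buf[WRITE_SIZE];
  memset(buf, 0xAB, sizeof(buf));

  spiffs_file fd = SPIFFS_open(&fs, "log_000.txt",
                               SPIFFS_CREAT | SPIFFS_WRONLY | SPIFFS_APPEND, 0);
  if (fd < 0) {
    printf("open failed: %d\n", (int)SPIFFS_errno(&fs));
    return;
  }

  u32_t total = 0;
  for (;;) {  /* real code stops when SPIFFS reports the FS is full */
    u32_t t0 = millis();
    for (int i = 0; i < WRITES_PER_BATCH; i++) {
      if (SPIFFS_write(&fs, fd, buf, sizeof(buf)) < 0) {
        printf("write failed: %d\n", (int)SPIFFS_errno(&fs));
        SPIFFS_close(&fs, fd);
        return;
      }
    }
    total += WRITES_PER_BATCH * WRITE_SIZE;
    printf("BytesWritten, %u, bytes in , %.2f, seconds\n",
           (unsigned)total, (millis() - t0) / 1000.0f);
    /* real code also rotates to a new file when the current one reaches 200 kB */
  }
}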

This is very repeatable, and it puts a limit on how much of our file system we can expect to fill with logs while still keeping up with our required 10 Hz rate.

Has anyone had success using SPIFFS in this sort of application? Any suggestions for keeping the write times from increasing so much?

thanks
Dan
