lfs is erasing a block on every append #374
Hi @keck-in-space, thanks for creating an issue. This sounds like it may be related to #344 and #368. There are some issues around littlefs not knowing if the end block in a file is erased. If you write a file in one pass (open->write->write->write->close), then it should perform the minimum number of erases. But there may be problems if other operations mess with littlefs's knowledge of the state of the file.
Is it possible to remove seek or sync calls between write operations to see if either of those reduces erases?
Hi @geky! Thanks for the quick response. I was opening and closing the file with each write... I'll see what kind of performance I get if I keep the file open across writes and get back to you.
Ah yes, that would cause the extra erases. littlefs doesn't know if an opened file has been erased, so it is conservative and always erases the end block. We can't rely on the erase value of an "erased" block since it's not the same on all storage (an encrypted block device may represent erased as 1s passed through the decryption function, for example). I've thought of adding a flag to the file's metadata that indicates if the end block is erased, but we would need to update it before and after we write to the file. Once we've started writing, even if we lose power, we don't want to accidentally try to write the same location again.
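The effect described above can be sketched with a toy model (Python, all names hypothetical; this is not littlefs internals): a filesystem that forgets whether the tail block is erased whenever a file is reopened must conservatively erase it again on every append, while keeping the file open lets it remember.

```python
class ToyFlash:
    """Toy block device that just counts erases."""
    def __init__(self, block_size):
        self.block_size = block_size
        self.erase_count = 0

    def erase(self, block):
        self.erase_count += 1


class ToyFile:
    """Models the conservatism described above: on open, the tail
    block's erase state is unknown, so the next append re-erases it."""
    def __init__(self, flash):
        self.flash = flash
        self.size = 0
        self.tail_erased = False

    def open(self):
        # Erase state is not stored in metadata, so it is lost on open.
        self.tail_erased = False

    def append(self, nbytes):
        if not self.tail_erased:
            self.flash.erase(self.size // self.flash.block_size)
            self.tail_erased = True
        self.size += nbytes
        if self.size % self.flash.block_size == 0:
            self.tail_erased = False  # next append starts a fresh block


def erases_for(appends, reopen_each_time):
    flash = ToyFlash(block_size=256 * 1024)
    f = ToyFile(flash)
    for _ in range(appends):
        if reopen_each_time:
            f.open()
        f.append(321)  # the 321-byte writes from this thread
    return flash.erase_count


# Reopening before every append erases the 256 KiB tail block every
# time; a single open amortizes it to one erase per block.
print(erases_for(100, reopen_each_time=True))   # 100
print(erases_for(100, reopen_each_time=False))  # 1
```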
Just wanted to follow up and say you're correct that leaving the file open between writes prevents the extra erase cycles from occurring. Feel free to leave this open if you want to track it for an optimization task, but for my case, the issue has been solved. Thank you!
Actually, I'm noticing the same issue with syncs... Every time I sync after a write, an erase occurs and a new block is allocated... I'm using the latest commit on master.
Yeah, this is the issue tracked in #344, described in more detail in #344 (comment). There's a possible optimization that can be done if your file's size is aligned to your prog size, but it's not possible otherwise because padding gets written out. Fixing this may need a more general and complicated solution.
Hmm. So the flash chip we're using can do 1-byte reads and writes, but if I'm reading your comment correctly, that won't yet solve the issue with a large block size. Also, if I'm reading correctly, the entire file will be lost if a sync or close does not occur before a reset. This is a pretty big issue for us, since the 256KB block size means we will have a very long wait time if we sync and then return to write on every call to store log data.
Yes, that is correct. It's interesting, this is the first chip I've seen that has both a large (>4KiB) erase size and byte-level writes. Usually large erases go hand-in-hand with larger writes, which makes the easy optimization not work. But you're right, this looks like the case for the S25FL512S part. Unfortunately this will at least need to wait for #372 before I personally can look into implementing a solution. It's been pointed out in a few places that this is a problem for logging use cases.
Thanks so much for this reply. It helps a lot with understanding the trade-offs for the various flash chip configurations and the way LFS interacts with them. Do you see any optimization issues with chips that have a 4, 8, or 16KB erase size?
It's hard to say, because sometimes an erase takes too long for an application and sometimes it doesn't. 4KiB erases should be ~64x faster than 256KiB erases (assuming the same underlying hardware, which is likely not true). It would still be a big improvement to reuse erases while appending, but it may not be needed if the erase time is acceptable.
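The ~64x figure above is just the ratio of erase sizes, under the stated (and rough) assumption that erase time scales linearly with erase size. A quick back-of-envelope:

```python
def relative_erase_cost(erase_size_bytes, baseline_bytes=4 * 1024):
    """Relative cost of one erase versus a 4 KiB baseline erase,
    assuming erase time scales linearly with size (a rough
    assumption; real parts differ)."""
    return erase_size_bytes // baseline_bytes


for size_kib in (4, 8, 16, 256):
    cost = relative_erase_cost(size_kib * 1024)
    print(f"{size_kib:>3} KiB erase ~ {cost}x the cost of a 4 KiB erase")
```

For the 256 KiB blocks discussed in this thread, that ratio is 64, which is why the sync-per-write pattern hurts so much more on this part than on a typical 4 KiB-sector NOR chip.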
@geky I think SPI NAND devices commonly have a large erase size and byte-level writes. I am using a Toshiba device where the page size is 4096 bytes and blocks are 256 KBytes; the erase size is the block size. However, I can write 1 byte at a time to a page, because the page is cached in a buffer on the NAND chip until the chip receives a command to commit the on-chip page buffer to the memory array.

I am finding that appending a log file is not what I expected. I think I am observing that every time there is a sync, the entire file is read and then re-written. When the log file was a few hundred bytes this did not really seem to be a problem, but as the file has grown in size, I have noticed significant periods of time when the file system is doing something.

My current method of recording the log is to open the file and then periodically write and sync: open-write-write-sync-write-sync-... I do not expect to close the file, because it is the system log I am writing to. So this file remains open as long as the device is turned on and receiving log data. I may occasionally open the file a second time using the shell to tail it or otherwise read or inspect the file. In that case, the second open is closed (but the logging file handle remains open).
Hello,
I am in the process of integrating lfs into my project, and I am able to write to a file without too much of an issue, but I am noticing some odd behavior.
Every time I append to a file, an entire block is erased. This is odd, since my block size is an enormous 256KB (the minimum erase size of the chip), I'm only writing 321 bytes at a time, and the file size is around 47KB, so no additional blocks should be needed unless I'm missing something.
My configuration is below. I'm using the Cypress S25FL512S.
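The reporter's actual configuration did not survive; as a point of reference, a littlefs configuration for a part with these characteristics (256 KiB erase blocks, byte-level reads and writes) might look roughly like the sketch below. The driver callback names, block count, and cache/lookahead sizes are hypothetical, not taken from this thread.

```c
// Illustrative lfs_config sketch for an S25FL512S-like part.
// Callback names and sizing fields other than block_size,
// read_size, and prog_size are hypothetical.
const struct lfs_config cfg = {
    .read  = my_spi_read,   // hypothetical SPI driver hooks
    .prog  = my_spi_prog,
    .erase = my_spi_erase,
    .sync  = my_spi_sync,

    .read_size      = 1,          // chip supports 1-byte reads
    .prog_size      = 1,          // ...and 1-byte writes
    .block_size     = 256 * 1024, // minimum erase size of the chip
    .block_count    = 256,        // 64 MiB part (hypothetical)
    .cache_size     = 256,
    .lookahead_size = 32,
    .block_cycles   = 500,
};
```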