Commit written data automatically #179
Comments
Have you seen/tried my comment here: #144 (comment)? Similarly, have you tried setting …
Are you suggesting I run fflush and fsync every time I write anything to the file? Kconfig has the following description for …
Correct, see this comment detailing fflush/fsync.
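For reference, a minimal sketch of what flushing after each write might look like on ESP-IDF; the helper name and message format are illustrative, not part of esp_littlefs:

```c
#include <stdio.h>
#include <unistd.h>   // fsync(), fileno()

// Hedged sketch: write a line, then force it out of the stdio buffer and
// onto flash. "log_line" is a made-up helper for this example.
static void log_line(FILE *f, const char *msg)
{
    fprintf(f, "%s\n", msg);
    fflush(f);            // drain the newlib stream buffer into the VFS layer
    fsync(fileno(f));     // ask LittleFS to commit the file to flash
}
```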
I think the padding point was a misunderstanding of fflush/fsync and whether they would implicitly trigger a …
Correct, since …
Then, regarding my initial question: is it not possible for a "commit to disk" to happen automatically and only when necessary (i.e. when enough bytes have been stored by fprintf/fwrite/etc. that they could be written to a LittleFS block without padding with garbage)? I'm not completely sure, but from my understanding, calling fflush (and fsync) every time a few bytes of data are stored would cause a commit that needs padding to reach the size of a block, which wastes time. Wouldn't it be more efficient for the system to just commit data once the size of a block is surpassed? That would definitely be more elegant than calling those functions after every write, or on a timer (and to me it sounds like it would also increase resilience to power loss, without relying on the user to figure it out).
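One way to approximate the idea described above without filesystem support is to count bytes at the application level and only sync once roughly a block's worth has accumulated. This is only a sketch of that approach; the 4096-byte block size and the wrapper names are assumptions, not esp_littlefs API:

```c
#include <stdio.h>
#include <unistd.h>   // fsync(), fileno()

// Assumed block size; the real value depends on how the partition was formatted.
#define ASSUMED_BLOCK_SIZE 4096

typedef struct {
    FILE  *f;
    size_t pending;   // bytes written since the last explicit sync
} block_writer_t;

// Buffer writes through stdio as usual, but only pay the fflush/fsync cost
// once roughly one block of data has been queued up.
static size_t block_write(block_writer_t *w, const void *buf, size_t len)
{
    size_t n = fwrite(buf, 1, len, w->f);
    w->pending += n;
    if (w->pending >= ASSUMED_BLOCK_SIZE) {
        fflush(w->f);
        fsync(fileno(w->f));
        w->pending = 0;
    }
    return n;
}
```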
Data is automatically committed to disk as needed; you only need to fflush/fsync/fclose to ensure that the file is in a "good state" that can then be read from again after you lose power.
What does "automatically" mean if you need to call extra functions after every write? I would expect it to just save the written data directly, or once enough data has been written to make the most of its block-based system. Say my app opens a file and constantly prints data to it, until the end of time. Suddenly, the power is cut. The file now contains nothing, because all prints happened in the RAM buffer and the user never fsync'd or fclose'd the file. Is this normal behavior? Is it normal for the user to have to call those functions manually to ensure their data is actually saved?
I believe we are miscommunicating around two different things:
1. …
2. …
Expanding on (2), the point of LittleFS is that the files and filesystem are never in an inconsistent state, i.e. the filesystem won't become corrupt from abrupt power loss. The explicit …
Alright. So if I'm using … Then, if …
Ok, I did some testing myself. Just … Adding an … Edit: with …
Ok, so this is more or less behaving as expected. Upstream LittleFS has performance issues when it comes to appending to a file (or similar). I'd recommend bringing this up in the official repo. This repo's focus is only on the glue between esp-idf's virtual filesystem and LittleFS.
With the default configuration but …, every write takes 0 ms, except for the one that has to commit, which takes ~80 ms. I can't really complain, since the original prospect was 80 ms EVERY write :)
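For anyone reproducing these numbers, the measurement could look something like the following; esp_timer_get_time() is standard ESP-IDF, while the wrapper and message are illustrative:

```c
#include <stdio.h>
#include <inttypes.h>
#include "esp_timer.h"   // esp_timer_get_time()

// Time one fprintf so the occasional block commit shows up as a spike
// (near 0 for buffered writes, tens of ms when a commit happens).
static void timed_print(FILE *f, const char *msg)
{
    int64_t t0 = esp_timer_get_time();        // microseconds since boot
    fprintf(f, "%s\n", msg);
    int64_t dt = esp_timer_get_time() - t0;
    printf("write took %" PRId64 " us\n", dt);
}
```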
glad you got it working! |
I am very much aware that LittleFS does not actually write any data to flash until the file is closed, or until sync functions are called. This is obviously a concern for power losses, especially considering this file system is designed to be resilient to such events.
I need to write data to a file for logging, and the system's power may be interrupted at any point. My solution is to set a timer, so every 250 ms I close and reopen the file. Sometimes this procedure takes longer than usual, which introduces unwanted hiccups.
Have any advances been made towards fixing this issue? Is it not possible to have it automatically write data to disk once the RAM buffer is filled to, let's say, the size of a LittleFS block? I don't understand why the user must save data manually.
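A sketch of the periodic-commit workaround described above, using a FreeRTOS task rather than a hardware timer; the file handle, task name, and the choice of fflush/fsync instead of close/reopen are assumptions for illustration only:

```c
#include <stdio.h>
#include <unistd.h>   // fsync(), fileno()
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

// File on an already-mounted LittleFS partition; written to elsewhere.
static FILE *s_log;

// Every 250 ms, push whatever has been buffered onto flash. Closing and
// reopening the file (as in the issue) also works; a periodic fflush + fsync
// on the same handle just avoids the reopen cost. Access to s_log should be
// synchronized with the writer in real code.
static void commit_task(void *arg)
{
    for (;;) {
        vTaskDelay(pdMS_TO_TICKS(250));
        if (s_log != NULL) {
            fflush(s_log);
            fsync(fileno(s_log));
        }
    }
}

// Started once at init, e.g.:
// xTaskCreate(commit_task, "log_commit", 4096, NULL, 5, NULL);
```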