From 2af3e79795d33c77da50bd3d3ba20649ae8fa699 Mon Sep 17 00:00:00 2001
From: Killian De Volder
Date: Fri, 31 Mar 2017 16:01:24 +0200
Subject: Split and updated todo into separate page

---
 Todo.mdwn  | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 index.mdwn | 43 +------------------------------------
 2 files changed, 73 insertions(+), 42 deletions(-)
 create mode 100644 Todo.mdwn

diff --git a/Todo.mdwn b/Todo.mdwn
new file mode 100644
index 0000000..c568438
--- /dev/null
+++ b/Todo.mdwn
@@ -0,0 +1,72 @@
+# TODO
+
+## Kernel developers
+
+### Current priorities
+
+ * Replication
+
+ * Compression is almost done: it's quite thoroughly tested, the only remaining
+   issue is a problem with copygc fragmenting existing compressed extents that
+   only breaks accounting.
+
+ * NFS export support is almost done: implementing i_generation correctly
+   required some new transaction machinery, but that's mostly done. What's left
+   is implementing a new kind of reservation of journal space for the new, long
+   running transactions.
+
+### Other
+
+ * When we're using compression, we end up wasting a fair amount of space on
+   internal fragmentation because compressed extents get rounded up to the
+   filesystem block size when they're written - usually 4k. It'd be really nice
+   if we could pack them in more efficiently - probably 512 byte sector
+   granularity.
+
+   On the read side this is no big deal to support - we have to bounce
+   compressed extents anyways. The write side is the annoying part. The options
+   are:
+   * Buffer up writes when we don't have full blocks to write? Highly
+     problematic, not going to do this.
+   * Read-modify-write? Not an option for raw flash, and we'd prefer it not
+     to be our only option.
+   * Do data journalling when we don't have a full block to write? A possible
+     solution - we want data journalling anyways.
+
+ * Inline extents - good for space efficiency for both small files, and
+   compression when extents happen to compress particularly well.
+
+ * Full data journalling - we're definitely going to want this for when the
+   journal is on an NVRAM device (also need to implement external journalling
+   (easy), and direct journal on NVRAM support (what's involved here?)).
+
+   Would be good to get a simple implementation done and tested so we know what
+   the on disk format is going to be.
+
+### Low priority
+
+ * NOCOW
+
+### Optional
+(Will not be implemented by the lead developer.)
+
+ * It is possible to add bcache functionality to bcachefs.
+   If someone wishes to implement this, it would be an acceptable addition;
+   however, using bcachefs as a filesystem should still be the better option.
+
+## Developers
+
+ * bcachefs-tools needs some fleshing out in the --help department
+
+## Users
+
+ * Benchmark bcachefs performance on different configurations:
+   (Note that these tests will have to be repeated in the future.)
+   * SSD
+   * HDD
+   * SSD + HDD
+
+ * Update the website / Documentation
+ * Ask questions (so we can update the documentation / website).
+
+
diff --git a/index.mdwn b/index.mdwn
index f34f035..db5734b 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -139,46 +139,5 @@ necessary, but I'm not expecting any more incompatible disk format changes.
 This will be addressed in the future - mount times will likely be the next big
 push after the next big batch of on disk format changes.
 
-## Todo list
-
-### Current priorities:
-
- * Replication
-
- * Compression is almost done: it's quite thoroughly tested, the only remaining
-   issue is a problem with copygc fragmenting existing compressed extents that
-   only breaks accounting.
-
- * NFS export support is almost done: implementing i_generation correctly
-   required some new transaction machinery, but that's mostly done. What's left
-   is implementing a new kind of reservation of journal space for the new, long
-   running transactions.
-
-### Other wishlist items:
-
- * When we're using compression, we end up wasting a fair amount of space on
-   internal fragmentation because compressed extents get rounded up to the
-   filesystem block size when they're written - usually 4k. It'd be really nice
-   if we could pack them in more efficiently - probably 512 byte sector
-   granularity.
-
-   On the read side this is no big deal to support - we have to bounce
-   compressed extents anyways. The write side is the annoying part. The options
-   are:
-   * Buffer up writes when we don't have full blocks to write? Highly
-     problematic, not going to do this.
-   * Read modify write? Not an option for raw flash, would prefer it to not be
-     our only option
-   * Do data journalling when we don't have a full block to write? Possible
-     solution, we want data journalling anyways
-
- * Inline extents - good for space efficiency for both small files, and
-   compression when extents happen to compress particularly well.
-
- * Full data journalling - we're definitely going to want this for when the
-   journal is on an NVRAM device (also need to implement external journalling
-   (easy), and direct journal on NVRAM support (what's involved here?)).
-
-   Would be good to get a simple implementation done and tested so we know what
-   the on disk format is going to be.
+## [[Todo]] list
 
--
cgit v1.2.3
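
As a rough illustration of the internal-fragmentation item in the Todo above: a compressed extent rounded up to the 4k block size can waste most of a block, while packing at 512 byte sector granularity wastes at most 511 bytes. A minimal sketch of that arithmetic, not bcachefs code - the 5300 byte extent length is an arbitrary assumption and round_up is a generic helper:

```c
#include <stdio.h>

/* Round len up to the next multiple of a power-of-two granularity. */
static unsigned round_up(unsigned len, unsigned granularity)
{
	return (len + granularity - 1) & ~(granularity - 1);
}

int main(void)
{
	/* Hypothetical compressed extent size, for illustration only. */
	unsigned compressed = 5300;

	unsigned at_4k  = round_up(compressed, 4096);
	unsigned at_512 = round_up(compressed, 512);

	printf("4k block granularity:    %u bytes on disk, %u wasted\n",
	       at_4k, at_4k - compressed);
	printf("512B sector granularity: %u bytes on disk, %u wasted\n",
	       at_512, at_512 - compressed);
	return 0;
}
```

With these assumed numbers the 4k rounding stores 8192 bytes (2892 wasted) versus 5632 bytes (332 wasted) at sector granularity, which is the gap the write-side options in the list are trying to close.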