-rw-r--r--  GettingStarted.mdwn |  2
-rw-r--r--  IoTunables.mdwn     | 99
-rw-r--r--  Performance.mdwn    |  5
-rw-r--r--  Stability.mdwn      | 10
-rw-r--r--  index.mdwn          |  6
-rw-r--r--  sidebar.mdwn        | 13
6 files changed, 8 insertions, 127 deletions
diff --git a/GettingStarted.mdwn b/GettingStarted.mdwn
index 0776bca..3501493 100644
--- a/GettingStarted.mdwn
+++ b/GettingStarted.mdwn
@@ -1,5 +1,3 @@
-# Getting started
-
 Bcachefs is not yet upstream - you'll have to
 [[build a kernel|https://kernelnewbies.org/KernelBuild]] to use it.
 
diff --git a/IoTunables.mdwn b/IoTunables.mdwn
deleted file mode 100644
index 3228fc4..0000000
--- a/IoTunables.mdwn
+++ /dev/null
@@ -1,99 +0,0 @@
-IO Tunables, Options, and Knobs
-
-## What there is
-
-These IO tunables can be set on a per-filesystem basis in
-`/sys/fs/bcache/<FSUUID>`. Some can be overridden on a per-inode basis via
-extended attributes; the xattr name will be listed in such cases.
-
-### Metadata tunables
-
-- `metadata_checksum`
-  - What checksum algorithm to use for metadata
-  - Default: `crc32c`
-  - Valid values: `none`, `crc32c`, `crc64`
-- `metadata_replicas`
-  - The number of replicas to keep for metadata
-  - Default: `1`
-- `metadata_replicas_required`
-  - The minimum number of replicas to tolerate for metadata
-  - Mount-only (not in sysfs)
-  - Default: `1`
-- `str_hash`
-  - The hash algorithm to use for dentries and xattrs
-  - Default: `siphash`
-  - Valid values: `crc32c`, `crc64`, `siphash`
-
-### Data tunables
-
-- `data_checksum`
-  - What checksum algorithm to use for data
-  - Extended attribute: `bcachefs.data_checksum`
-  - Default: `crc32c`
-  - Valid values: `none`, `crc32c`, `crc64`
-- `data_replicas`
-  - The number of replicas to keep for data
-  - Extended attribute: `bcachefs.data_replicas`
-  - Default: `1`
-- `data_replicas_required`
-  - The minimum number of replicas to tolerate for data
-  - Mount-only (not in sysfs)
-  - Default: `1`
-
-### Foreground tunables
-
-- `compression`
-  - What compression algorithm to use for foreground writes
-  - Extended attribute: `bcachefs.compression`
-  - Default: `none`
-  - Valid values: `none`, `lz4`, `gzip`, `zstd`
-- `foreground_target`
-  - Extended attribute: `bcachefs.foreground_target`
-  - What disk group foreground writes should prefer (may use other disks if
-    sufficient replicas are not available in-group)
-
-### Background tunables
-
-- `background_compression`
-  - What compression algorithm to recompress to in the background
-  - Extended attribute: `bcachefs.background_compression`
-  - Default: `none`
-  - Valid values: `none`, `lz4`, `gzip`, `zstd`
-- `background_target`
-  - Extended attribute: `bcachefs.background_target`
-  - What disk group data should be written back to in the background (may
-    use other disks if sufficient replicas are not available in-group)
-
-### Promote (cache) tunables
-
-- `promote_target`
-  - What disk group data should be copied to when frequently accessed
-  - Extended attribute: `bcachefs.promote_target`
-
-## What would be nice
-
-- Different replication(/raid) settings between FG, BG, and promote
-  - For example, replicated foreground across all SSDs (durability with low
-    latency), RAID-6 background (sufficient durability, optimal space
-    overhead per durability), and RAID-0/single promote (maximum performance,
-    minimum cache footprint)
-
-- Different compression for promote
-  - For example, uncompressed foreground (minimize latency), zstd background
-    (minimize size), and lz4 promote (save cache space, but prioritize speed)
-
-- Fine-grained tuning of stripes vs. replicas vs. parity?
-  - Would allow configuring N stripes + M parity, so trading off throughput
-    (more stripes) vs. durability (more parity or replicas) vs. space
-    efficiency (fewer replicas, higher stripes/parity ratio)
-  - Well defined: (N stripes, R replicas, P parity) = (N * R + P) total
-    chunks, with each stripe replicated R times, and P parity chunks to
-    recover from if those are insufficient.
-  - Can be implemented with P <= 2 for now, possibly extended later using
-    Andrea Mazzoleni's compatible approach (see [[Todo]], wishlist section)
-
-- Writethrough caching
-  - Does this basically obviate 'foreground' as a category, and instead
-    write directly into background and promote?
-  - What constraints does this place on (say) parity, given the plan for
-    it seems to be to restripe and pack as a migration action?
diff --git a/Performance.mdwn b/Performance.mdwn
deleted file mode 100644
index d0cdfd5..0000000
--- a/Performance.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-## How is performance of bcachefs with VM-disk-images ?
-
-You can best disable COW to prevent fragmentation.
-ADD HOWTO + More information
-
diff --git a/Stability.mdwn b/Stability.mdwn
deleted file mode 100644
index 2c84087..0000000
--- a/Stability.mdwn
+++ /dev/null
@@ -1,10 +0,0 @@
-bcachefs is stable, and will not eat your data.
-
-The following tests and testimonials should help in proving this.
-
-#Stability
-##xfstests
-PASSED (since around 2016)
-
-#Testimonials
-None yet ;)
diff --git a/index.mdwn b/index.mdwn
--- a/index.mdwn
+++ b/index.mdwn
@@ -30,7 +30,11 @@ bcache.
 
 ## Documentation
 
-We now have a user manual: [[bcachefs-principles-of-operation.pdf]]
+[[Getting Started|GettingStarted]]
+
+[[User manual|bcachefs-principles-of-operation.pdf]]
+
+[[FAQ]]
 
 ## Contact and support
diff --git a/sidebar.mdwn b/sidebar.mdwn
index b5aea7f..fe67d26 100644
--- a/sidebar.mdwn
+++ b/sidebar.mdwn
@@ -1,11 +1,4 @@
-* [[About]]
- * [[Stability]]
- * [[Performance]]
-* Development
- * [[Todo]]
- * [[IoTunables]]
- * [[TestServerSetup]]
-* Bcachefs developer documentation
+### bcachefs developer documentation
 * [[Architecture]]
 * [[BtreeIterators]]
 * [[BtreeNodes]]
@@ -16,7 +9,7 @@
 * [[Allocator]]
 * [[Fsck]]
 * [[Roadmap]]
-* Other technical docs
+ * [[Todo]]
+ * [[TestServerSetup]]
 * [[StablePages]]
 * [[October 2022 talk for Redhat|bcachefs_talk_2022_10.mpv]]
-* [[FAQ]]
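
The tunables documented in the removed IoTunables.mdwn were applied via sysfs writes and extended attributes. A rough sketch, assuming the `/sys/fs/bcache/<FSUUID>` layout and `bcachefs.*` xattr names as described in that page; the UUID, device, and mount point below are made-up examples, and the sysfs path may differ on newer kernels:

```shell
# Hypothetical filesystem UUID -- substitute your own.
FSUUID=01234567-89ab-cdef-0123-456789abcdef

# Per-filesystem tunables: write the new value into the sysfs attribute.
echo zstd > /sys/fs/bcache/$FSUUID/compression
echo 2    > /sys/fs/bcache/$FSUUID/data_replicas

# Per-inode override via extended attribute (xattr names from the page).
setfattr -n bcachefs.compression -v lz4 /mnt/bcachefs/vm-images
getfattr -n bcachefs.compression /mnt/bcachefs/vm-images

# Mount-only options (not in sysfs), per the page's notes -- e.g.:
# mount -t bcachefs -o metadata_replicas_required=2 /dev/sda1 /mnt/bcachefs
```
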