author    Kent Overstreet <kent.overstreet@linux.dev>  2023-09-11 17:26:07 -0400
committer Kent Overstreet <kent.overstreet@linux.dev>  2023-09-11 17:26:13 -0400
commit    2a2219526e2243d95dd283da73f01d6de2b62a77 (patch)
tree      f99faabd5cc8dae66149d2e9d4ec509ae6ec02bf
parent    31a414aa45a9962946a4390d7a329766a1f9acc3 (diff)
Big update
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-rw-r--r--  Caching.mdwn         67
-rw-r--r--  Compression.mdwn     33
-rw-r--r--  Contact.mdwn          8
-rw-r--r--  Encryption.mdwn      12
-rw-r--r--  ErasureCoding.mdwn   77
-rw-r--r--  GettingStarted.mdwn  29
-rw-r--r--  Howto.mdwn           38
-rw-r--r--  Irc.mdwn              3
-rw-r--r--  Snapshots.mdwn       36
-rw-r--r--  Todo.mdwn            54
-rw-r--r--  index.mdwn          128
-rw-r--r--  sidebar.mdwn          5
12 files changed, 275 insertions, 215 deletions
diff --git a/Caching.mdwn b/Caching.mdwn
new file mode 100644
index 0000000..a124154
--- /dev/null
+++ b/Caching.mdwn
@@ -0,0 +1,67 @@
+# Caching, targets, data placement
+
+bcachefs can be configured for writethrough, writeback, and writearound
+caching, as well as other more specialized setups.
+
+The basic operations (and options) to consider are:
+
+ - Where foreground writes go
+ - Where data is moved to in the background
+ - Where data is promoted to on read
+
+## Target options and disk labels
+
+To configure caching, we first need to be able to refer to one or more devices;
+referring to more than one device requires labelling them. Labels are
+dot-delimited paths, which allows devices to be grouped into a hierarchy.
+
+For example, formatting with the following labels:
+
+ bcachefs format \
+ --label=ssd.ssd1 /dev/sda1 \
+ --label=ssd.ssd2 /dev/sdb1 \
+ --label=hdd.hdd1 /dev/sdc1 \
+ --label=hdd.hdd2 /dev/sdd1
+
+Then target options could refer to any of:
+
+ `--foreground_target=/dev/sda1`
+ `--foreground_target=ssd` (both sda1 and sdb1)
+ `--foreground_target=ssd.ssd1` (alias for sda1)
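+
+Targets can also be changed at runtime. As a sketch - assuming the options
+directory bcachefs exposes in sysfs, with `<uuid>` standing in for the
+filesystem's UUID - foreground writes could be redirected with:
+
+ echo ssd.ssd1 > /sys/fs/bcachefs/<uuid>/options/foreground_target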
+
+## Caching:
+
+For writeback caching (the most common configuration), we want foreground
+writes to go to the fast device, data to be moved in the background to the slow
+device, and any data being read that isn't already on the fast device to be
+copied there. Continuing with the previous example, you'd use the following
+options:
+
+ --foreground_target=ssd
+ --background_target=hdd
+ --promote_target=ssd
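+
+Putting it together with the labels from the previous section, a full format
+invocation might look like the following sketch (device names as in the
+earlier example):
+
+ bcachefs format \
+ --label=ssd.ssd1 /dev/sda1 \
+ --label=ssd.ssd2 /dev/sdb1 \
+ --label=hdd.hdd1 /dev/sdc1 \
+ --label=hdd.hdd2 /dev/sdd1 \
+ --foreground_target=ssd \
+ --background_target=hdd \
+ --promote_target=ssd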
+
+The rebalance thread will continually move data to the `background_target`
+device(s). When doing so, the copy on the original device will be kept but
+marked as cached; also, when promoting data to the promote target the
+newly-written copy will be marked as cached.
+
+Cached data is evicted as-needed, in standard LRU fashion.
+
+## `data_allowed`
+
+The target options are best-effort; if the specified devices are full, the
+allocator will fall back to allocating from any device that has space.
+
+The per-device `data_allowed` option can be used to restrict a device to only
+journal, btree, or user data; unlike the target options, this is a hard
+restriction.
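+
+For example - assuming `data_allowed` takes a comma-separated list of data
+types, and that per-device options precede the device they apply to - metadata
+could be pinned to the fast device with:
+
+ bcachefs format \
+ --data_allowed=journal,btree /dev/sda1 \
+ --data_allowed=user /dev/sdb1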
+
+## `durability`
+
+Some devices may already have internal redundancy, e.g. behind a hardware RAID
+controller. The `durability` option may be used to indicate that a replica on
+such a device should count as `n` replicas towards the desired total.
+
+Also, specifying `--durability=0` allows a device to be used for true
+writethrough caching, where we consider the device to be untrusted: allocations
+will ensure that the device can be yanked at any time without losing data.
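+
+As a sketch - again assuming per-device options precede their device - a
+RAID-backed device counted as two replicas plus an untrusted cache device:
+
+ bcachefs format --replicas=2 \
+ --durability=2 /dev/sda1 \
+ --durability=0 /dev/sdb1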
diff --git a/Compression.mdwn b/Compression.mdwn
new file mode 100644
index 0000000..92f9fa9
--- /dev/null
+++ b/Compression.mdwn
@@ -0,0 +1,33 @@
+# bcachefs compression design:
+
+Unlike other filesystems, which typically do compression at the block level,
+bcachefs does compression at the extent level - variable-size chunks, up to (by
+default) 128k.
+
+When reading from extents that are compressed (or checksummed, or encrypted) we
+always have to read the entire extent - but in return we get a better
+compression ratio, smaller metadata, and better performance under typical workloads.
+
+## Available options
+
+The three currently supported algorithms are gzip, lz4, and zstd. Compression
+may be enabled for the entire filesystem (e.g. at format time, or via the
+options directory in sysfs), or on a specific file or directory via the
+`bcachefs setattr` command.
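+
+For example - the exact flag syntax here is a sketch - compression could be
+enabled filesystem-wide at format time, or on a single directory afterwards:
+
+ bcachefs format --compression=lz4 /dev/sda1
+ bcachefs setattr --compression=lz4 /mnt/some/directory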
+
+### Compression level
+
+The compression level may also be optionally specified, as an integer between 0
+and 15, e.g. `lz4:15`. 0 specifies the default compression level, 1 specifies
+the fastest and lowest compression ratio, and 15 the slowest and best
+compression ratio.
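+
+So, as a sketch, zstd at its slowest/best compression ratio would be:
+
+ bcachefs format --compression=zstd:15 /dev/sda1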
+
+### Background compression
+
+If the `background_compression` option is used, data will be compressed (or
+recompressed, with different options) in the background by the rebalance
+thread. Like the `compression` option, `background_compression` may be set for
+both the whole filesystem and on individual files or directories.
+
+This lets more aggressive compression be used (e.g. `zstd:15`) without
+bottlenecking foreground writes.
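+
+As a usage sketch (assuming `setattr` accepts the same option names), an
+archive directory might get fast foreground compression plus aggressive
+background recompression:
+
+ bcachefs setattr --compression=lz4 \
+ --background_compression=zstd:15 /mnt/archive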
diff --git a/Contact.mdwn b/Contact.mdwn
deleted file mode 100644
index 757ee3a..0000000
--- a/Contact.mdwn
+++ /dev/null
@@ -1,8 +0,0 @@
-#Mailing list
-You can find the official bcache(fs) mailing list on
-[vger.kernel.org](http://vger.kernel.org/vger-lists.html#linux-bcache)
-
-#IRC
-\#bcache on oftc.net
-<iframe src="https://webchat.oftc.net/?randomnick=1&channels=bcache&prompt=1" width="647" height="400"></iframe>
-
diff --git a/Encryption.mdwn b/Encryption.mdwn
index 1426764..807b6d2 100644
--- a/Encryption.mdwn
+++ b/Encryption.mdwn
@@ -1,4 +1,14 @@
-# bcache/bcachefs encryption design:
+# Overview
+
+bcachefs uses AEAD style encryption (ChaCha20/Poly1305), where each encrypted
+block is authenticated with a MAC, with a chain of trust up to root (the
+superblock), and every encrypted block has a unique nonce.
+
+This protects against attacks that block-level encryption (e.g. LUKS) cannot
+defend against, because at the block level there's nowhere to store MACs or
+nonces without causing painful alignment problems.
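+
+As a usage sketch - assuming the `--encrypted` format flag and `unlock`
+subcommand in bcachefs-tools:
+
+ bcachefs format --encrypted /dev/sda1
+ bcachefs unlock /dev/sda1 # prompts for the passphrase
+ mount -t bcachefs /dev/sda1 /mnt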
+
+# Detailed design:
This document is intended for review by cryptographers and other experienced
implementers of cryptography code, before the design is frozen. Everything
diff --git a/ErasureCoding.mdwn b/ErasureCoding.mdwn
new file mode 100644
index 0000000..9b1067a
--- /dev/null
+++ b/ErasureCoding.mdwn
@@ -0,0 +1,77 @@
+
+Erasure coding:
+===============
+
+The term erasure coding refers to the mathematical algorithms for adding
+redundancy to data that allows errors to be corrected: see
+[[https://en.wikipedia.org/wiki/Erasure_code]].
+
+Bcachefs, like most RAID implementations, currently supports Reed-Solomon. We
+might add support for other algorithms in the future, but Reed-Solomon has the
+nice property that it can always correct up to n errors in a stripe given n redundant
+blocks - no erasure coding algorithm can do better than this. The disadvantage
+of Reed-Solomon is that it always has to read every block within a stripe to fix
+errors - this means that large stripes lead to very slow rebuild times. It might
+be worth investigating other algorithms, like
+[[https://www.usenix.org/legacy/events/fast05/tech/full_papers/hafner_weaver/hafner_weaver.pdf|weaver
+codes]] in the future.
+
+
+Limitations of other implementations:
+-------------------------------------
+
+Conventional software raid suffers from the "raid hole": when writing to a
+single block within a stripe, we have to update the p/q blocks as well. But the
+writes of the p/q blocks are not atomic with the data block write - and can't be
+atomic since they're on different disks (not without e.g. a battery backed up
+journal, as some hardware raid controllers have). This means that there is a
+window of time where the p/q blocks will be inconsistent with the data blocks,
+and if we crash during this window and have to rebuild because one of our drives
+didn't come back, we will rebuild incorrect data (and crucially, we will rebuild
+incorrect data for blocks within the stripe that weren't being written to).
+
+Any software raid implementation that updates existing stripes without doing
+full data journalling is going to suffer from this issue - btrfs is still
+affected by the RAID hole.
+
+ZFS avoids this issue by turning every write into a full stripe - this means
+they never have to update an existing stripe in place. The downside is that
+every write is fragmented across every drive in the RAID set, and this is really
+bad for performance with rotating disks (and even with flash it's not ideal).
+Read performance on rotating disks is dominated by seek time, and fragmenting
+reads means instead of doing one seek we're now doing n seeks, where n is the
+number of disks in the raid set (minus redundancy).
+
+Erasure coding in bcachefs:
+---------------------------
+
+Bcachefs takes advantage of the fact that it is already a copy on write
+filesystem. If we're designing our filesystem to avoid update-in-place, why
+would we do update-in-place in our RAID implementation?
+
+We're able to do this because allocation is bucket based. We divide
+our storage devices up into buckets - typically 512k-2M. When we allocate a
+bucket, we write to it sequentially and then we never overwrite it until the
+entire bucket has been evacuated (or invalidated, if it contained cached data).
+
+Erasure coding in bcachefs works by creating stripes of buckets, one per device.
+Foreground writes are initially replicated, but when erasure coding is enabled
+one of the replicas will be allocated from a bucket in a stripe being newly
+created. When all the data buckets within the new stripe have been written, we
+write out our p/q buckets, then update all the data pointers into that stripe to
+drop their extra replicas and add a reference to the stripe. Buckets within that
+stripe will never be overwritten until the stripe becomes empty and is released.
+
+This has a number of advantages:
+ * No RAID hole
+ * Erasure coding doesn't affect data layout - it doesn't cause writes to be
+ fragmented. Data layout is the same as it would be with no replication or
+ erasure coding.
+ * Since we never update existing stripes, and stripe creation is done once all
+ the data buckets within the stripe are written, the code is vastly simpler
+ and easier to test than other implementations (e.g. Linux md raid5/6).
+
+Disadvantages:
+ * Nontrivial interactions with the copying garbage collector.
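+
+As a usage sketch - assuming the `erasure_code` option can be set at format
+time like other filesystem options - erasure coding across four devices might
+be enabled with:
+
+ bcachefs format --replicas=2 --erasure_code /dev/sd[abcd]1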
+
+
diff --git a/GettingStarted.mdwn b/GettingStarted.mdwn
new file mode 100644
index 0000000..0776bca
--- /dev/null
+++ b/GettingStarted.mdwn
@@ -0,0 +1,29 @@
+# Getting started
+
+Bcachefs is not yet upstream - you'll have to
+[[build a kernel|https://kernelnewbies.org/KernelBuild]] to use it.
+
+First, check out the bcachefs kernel and tools repositories:
+
+ git clone https://evilpiepirate.org/git/bcachefs.git
+ git clone https://evilpiepirate.org/git/bcachefs-tools.git
+
+Build and install as usual - make sure you enable `CONFIG_BCACHEFS_FS`. Then, to
+format and mount a single device with the default options, run:
+
+ bcachefs format /dev/sda1
+ mount -t bcachefs /dev/sda1 /mnt
+
+For a multi-device filesystem, with sda1 caching sdb1:
+
+ bcachefs format /dev/sd[ab]1 \
+ --foreground_target /dev/sda1 \
+ --promote_target /dev/sda1 \
+ --background_target /dev/sdb1
+ mount -t bcachefs /dev/sda1:/dev/sdb1 /mnt
+
+This will configure the filesystem so that writes are buffered on /dev/sda1
+before being written back to /dev/sdb1 in the background, and hot data is
+promoted to /dev/sda1 for faster access.
+
+See `bcachefs format --help` for more options.
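+
+To verify how data is being placed across the devices, bcachefs-tools can
+report usage (a sketch; output details vary by version):
+
+ bcachefs fs usage -h /mnt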
diff --git a/Howto.mdwn b/Howto.mdwn
deleted file mode 100644
index a55aade..0000000
--- a/Howto.mdwn
+++ /dev/null
@@ -1,38 +0,0 @@
-#Build the bcachefs-tools
-
-Firt run the following command to download the bcachefs-tools.
-
- git clone https://evilpiepirate.org/git/bcachefs-tools.git
-
-This will create a direcorty `bcachefs-tools`.
-In this direcorty you will find a file named INSTALL.
-This file contains the depencies and some instructions.
-
-To instal run as root:
-
- make && make install
-
-#Build the bcachefs-tools statically linked.
-NOTE: Does not appear to work yet.
-
-Follow the steps above.
-But run the following command instead.
-
- CFLAGS="-static "LDFLAGS="-static" make
-
-Please take note that you also need a static version installed of all required libraries.
-
-#Build the bcachefs-kernel branch.
-Firt run the following command to download a kernel branch with the bcachefs patches.
-
- git clone https://evilpiepirate.org/git/bcachefs.git
-
-
-This is slighly more complicated to explain.
-It's best you look up a tutorial for your specific distribution.
-
-During the configuration make sure you enable `CONFIG_BCACHEFS_FS`.
-To check run `grep CONFIG_BCACHEFS_FS .config` in in the kernel source.
-
-Tip if your distro kernel supports it you can extract the `.config` used by your distribution,
-by running `cat /proc/config.gz | gunzip > /tmp/distro.config`.
diff --git a/Irc.mdwn b/Irc.mdwn
new file mode 100644
index 0000000..74b6356
--- /dev/null
+++ b/Irc.mdwn
@@ -0,0 +1,3 @@
+# IRC
+\#bcache on oftc.net
+<iframe src="https://webchat.oftc.net/?randomnick=1&channels=bcache&prompt=1" width="647" height="400"></iframe>
diff --git a/Snapshots.mdwn b/Snapshots.mdwn
index bbd85be..e7e49bd 100644
--- a/Snapshots.mdwn
+++ b/Snapshots.mdwn
@@ -1,6 +1,8 @@
+# Subvolumes and snapshots:
-Snapshots & subvolumes:
-=======================
+bcachefs provides btrfs-style writeable snapshots, at subvolume granularity.
+
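+As a usage sketch - assuming the `bcachefs subvolume` subcommands in
+bcachefs-tools:
+
+ bcachefs subvolume create /mnt/myvol
+ bcachefs subvolume snapshot /mnt/myvol /mnt/myvol.snap
+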
+# Detailed design:
The short version:
@@ -20,8 +22,7 @@ When we do a lookup for a filesystem item, we have to check if the snapshot ID
of the key we found is an ancestor of the snapshot ID we're searching for, and
filter out items that aren't.
-Subvolumes:
-===========
+# Subvolumes:
Subvolumes are needed for two reasons:
@@ -59,8 +60,7 @@ subvolume roots because otherwise taking a snapshot would require updating every
inode in that subvolume. With these fields and inode backpointers, we'll be able
to reconstruct a path to any directory, or any file that hasn't been hardlinked.
-Snapshots:
-==========
+# Snapshots:
We're also adding another table (btree) for snapshot keys. Snapshot keys form a
tree where each node is just a u32. The btree iterator code that filters by
@@ -112,8 +112,7 @@ the fragment.
Conversely, existing extents may not be merged if one of them is visible in a
child snapshot and the other is not.
-Snapshot deletion:
-==================
+# Snapshot deletion:
In the current design, deleting a snapshot will require walking every btree that
has snapshots (extents, inodes, dirents and xattrs) to find and delete keys with
@@ -123,8 +122,7 @@ We could improve on this if we had a mechanism for "areas of interest" of the
btree - perhaps bloom filter based, and other parts of bcachefs might be able to
benefit as well - e.g. rebalance.
-Other performance considerations:
-=================================
+# Other performance considerations:
Snapshots seem to exist in one of those design spaces where there's inherent
tradeoffs and it's almost impossible to design something that doesn't have
@@ -157,8 +155,7 @@ to the new inode, and mark the original inode to redirect to the new inode so
that the user visible inode number doesn't change. A bit tedious to implement,
but straightforward enough.
-Locking overhead:
-=================
+# Locking overhead:
Every btree transaction that operates within a subvolume (every filesystem level
operation) will need to start by looking up the subvolume in the btree key cache
@@ -172,8 +169,7 @@ table lookups are expected to show up in profiles and be worth optimizing. We
can probably switch to using a flat array to index btree key cache items for the
subvolume btree.
-Permissions:
-============
+# Permissions:
Creating a new empty subvolume can be done by untrusted users anywhere they
could call mkdir().
@@ -182,8 +178,7 @@ Creating a snapshot will also be an untrusted operation - the only additional
requirement being that untrusted users must own the root of the subvolume being
snapshotted.
-Disk space accounting:
-======================
+# Disk space accounting:
We definitely want per snapshot/subvolume disk space accounting. The disk space
accounting code is going to need some major changes in order to make this
@@ -213,8 +208,7 @@ linear chain of nodes:
a -> b -> c -> d -> e
-Recursive snapshots:
-====================
+# Recursive snapshots:
Taking recursive snapshots atomically should be fairly easy, with the caveat
that right now there's a limit on the number of btree iterators that can be used
@@ -222,15 +216,13 @@ simultaneously (64) which will limit the number of subvolumes we can snapshot at
once. Lifting this limit would be useful for other reasons though, so will
probably happen eventually.
-Fsck:
-=====
+# Fsck:
The fsck code is going to have to be reworked to use `BTREE_ITER_ALL_SNAPSHOTS`
and check keys in all snapshots all at once, in order to have acceptable
performance. This has not been started on yet, and is expected to be the
trickiest and most complicated part.
-Quotas:
-=======
+# Quotas:
Todo
diff --git a/Todo.mdwn b/Todo.mdwn
index 5138bb9..06fbdb5 100644
--- a/Todo.mdwn
+++ b/Todo.mdwn
@@ -1,33 +1,30 @@
-#TODO
+# TODO
## Kernel developers
-### Current priorities
+### P0 features
- * Replication
+ * Disk space accounting rework: now that we've got the btree write buffer, we
+ can store disk space accounting in btree keys (instead of the current hacky
+ mechanism where it's only stored in the journal).
- * Compression is almost done: it's quite thoroughly tested, the only remaining
- issue is a problem with copygc fragmenting existing compressed extents that
- only breaks accounting.
+ This will make it possible to store many more counters - meaning we'll be
+ able to store per-snapshot-id accounting.
- * NFS export support is almost done: implementing i_generation correctly
- required some new transaction machinery, but that's mostly done. What's left
- is implementing a new kind of reservation of journal space for the new, long
- running transactions.
+ We also need to add separate counters for compressed/uncompressed size, and
+ broken out by compression type.
- * Allocation information (currently just bucket generation numbers & priority
- numbers, for LRU caching) needs to be moved into a btree, and we need to
- start persisting actual allocation information so we don't have to walk all
- extents at mount time.
+ It'd also be really nice to add more counters to the inode for
+ compressed/uncompressed size. Right now the inode just has `bi_sectors`,
+ which is the on-disk size for fallocate purposes, i.e. discounting replication.
+ We could add something like
- Just moving the existing prios/gens to a btree will be a significant
- improvement - besides getting us incrementally closer to persisting full
- allocation information, the existing code is a rather hacky mechanism dating
- from the early days of bcache and has recently been the source of an annoying
- bug due to the design being a bit fragile, and it'll be a performance
- improvement since it'll get rid of the last source of forced journal flushes.
+ bi_sectors_ondisk_compressed
+ bi_sectors_ondisk_uncompressed
-### Other
+### P1 features
+
+### P2 features
* When we're using compression, we end up wasting a fair amount of space on
internal fragmentation because compressed extents get rounded up to the
@@ -45,9 +42,6 @@
* Do data journalling when we don't have a full block to write? Possible
solution, we want data journalling anyways
- * Inline extents - good for space efficiency for both small files, and
- compression when extents happen to compress particularly well.
-
* Full data journalling - we're definitely going to want this for when the
journal is on an NVRAM device (also need to implement external journalling
(easy), and direct journal on NVRAM support (what's involved here?)).
@@ -55,23 +49,11 @@
Would be good to get a simple implementation done and tested so we know what
the on disk format is going to be.
-### Low priority
-
- * NOCOW
-
-### Optional
-(Will not be implemented by the lead developer.)
-
- * It is possible to add bcache functionality to bcachefs.
- If there is someone who wishes to implement this it would be an ok addition.
- However using it as a filesystem should still be better.
-
## Developers
* End user documentation needs a lot of work - complete man pages, etc.
* bcachefs-tools needs some fleshing out in the --help department
- * Write a tool to benchmark tail-latency.
## Users
diff --git a/index.mdwn b/index.mdwn
index ee126d9..588d1e4 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -9,120 +9,36 @@ from a modern filesystem.
* Full data and metadata checksumming
* Multiple devices
* Replication
-* Erasure coding (only feature not quite stable)
-* Caching
-* Compression
-* Encryption
-* Snapshots
-* Scalable - has been tested to 50+ TB, will eventually scale far higher
+* [[Erasure coding|ErasureCoding]] (not stable)
+* [[Caching, data placement|Caching]]
+* [[Compression]]
+* [[Encryption]]
+* [[Snapshots]]
+* Scalable - has been tested to 100+ TB, expected to scale far higher (testers wanted!)
+* High performance, low tail latency
* Already working and stable, with a small community of users
-We prioritize robustness and reliability over features and hype: we make every
-effort to ensure you won't lose data. It's building on top of a codebase with a
-pedigree - bcache already has a reasonably good track record for reliability
-(particularly considering how young upstream bcache is, in terms of engineer
-man/years). Starting from there, bcachefs development has prioritized
-incremental development, and keeping things stable, and aggressively fixing
-design issues as they are found; the bcachefs codebase is considerably more
-robust and mature than upstream bcache.
+## Philosophy
-Fixing bugs always take priority over features! This means getting features out
-takes longer, but for a filesystem not losing your data is the biggest feature.
-
-Developing a filesystem is also not cheap or quick or easy; we need funding!
-Please chip in on [[Patreon|https://www.patreon.com/bcachefs]] - the Patreon
-page also has more information on the motivation for bcachefs and the state of
-Linux filesystems, as well as some bcachefs status updates and information on
-development.
-
-If you don't want to use Patreon, I'm also happy to take donations via paypal:
-kent.overstreet@gmail.com.
-
-Join us in the bcache IRC channel, we have a small group of bcachefs users and
-testers there: #bcache on OFTC (irc.oftc.net).
-
-## Getting started
-
-Bcachefs is not yet upstream - you'll have to build a kernel to use it.
-
-First, check out the bcache kernel and tools repositories:
-
- git clone https://evilpiepirate.org/git/bcachefs.git
- git clone https://evilpiepirate.org/git/bcachefs-tools.git
-
-Build and install as usual - make sure you enable `CONFIG_BCACHEFS_FS`. Then, to
-format and mount a single device with the default options, run:
-
- bcachefs format /dev/sda1
- mount -t bcachefs /dev/sda1 /mnt
-
-For a multi device filesystem, with sda1 caching sdb1:
-
- bcachefs format /dev/sd[ab]1 \
- --foreground_target /dev/sda1 \
- --promote_target /dev/sda1 \
- --background_target /dev/sdb1
- mount -t bcachefs /dev/sda1:/dev/sdb1 /mnt
-
-This will configure the filesystem so that writes will be buffered to /dev/sda1
-before being written back to /dev/sdb1 in the background, and that hot data
-will be promoted to /dev/sda1 for faster access.
-
-See `bcachefs format --help` for more options.
+We prioritize robustness and reliability over features: we make every effort to
+ensure you won't lose data. bcachefs is built on a codebase with a pedigree -
+bcache already has a reasonably good track record for reliability. Starting
+from there, bcachefs development has prioritized incremental development,
+keeping things stable, and aggressively fixing design issues as they are found;
+the bcachefs codebase is considerably more robust and mature than upstream
+bcache.
## Documentation
We now have a user manual: [[bcachefs-principles-of-operation.pdf]]
-## Status
-
-Bcachefs can currently be considered beta quality. It has a small pool of
-outside users and has been stable for quite some time now; there's no reason
-to expect issues as long as you stick to the currently supported feature set.
-It's been passing all xfstests for well over a year, and serious bugs are rare
-at this point. However, given that it's still under active development backups
-are a good idea.
-
-### Feature status
-
- - Full data checksumming
-
- Fully supported and enabled by default; checksum errors will cause IOs to be
- retried if there's another replica available.
-
- - Compression
-
- Done - LZ4, gzip and ZSTD are currently supported.
-
- - Multiple device support
-
- Done - you can add and remove devices at runtime while the filesystem is in
- use, migrating data off the device if necessary.
-
- - Tiering/writeback caching:
-
- Bcachefs allows you to specify disks (or groups thereof) to be used for
- three categories of I/O: foreground, background, and promote. Foreground
- devices accept writes, whose data is copied to background devices
- asynchronously, and the hot subset of which is copied to the promote devices
- for performance.
-
- - Replication (i.e. RAID1/10)
-
- Done - you can yank out a disk while a filesystem is in use and it'll keep
- working, transparently handling IO errors. You can then use the rereplicate
- command to write out another copy of all the degraded data to another device.
-
- - Erasure coding
-
- Not quite stable
-
- - [[Encryption]]
+## Contact and support
- Whole filesystem AEAD style encryption (with ChaCha20 and Poly1305) is done
- and merged. I would suggest not relying on it for anything critical until the
- code has seen more outside review, though.
+Developing a filesystem is not cheap, quick, or easy; we need funding!
+Please chip in on [[Patreon|https://www.patreon.com/bcachefs]].
- - Snapshots
+Join us in the bcache [[IRC|Irc]] channel; we have a small group of bcachefs
+users and testers there: #bcache on OFTC (irc.oftc.net).
- Done, still shaking out a few bugs
+Mailing list: [[https://lore.kernel.org/linux-bcachefs/]], or
+linux-bcachefs@vger.kernel.org.
diff --git a/sidebar.mdwn b/sidebar.mdwn
index 7ad6288..b5aea7f 100644
--- a/sidebar.mdwn
+++ b/sidebar.mdwn
@@ -1,10 +1,6 @@
* [[About]]
* [[Stability]]
* [[Performance]]
-* Support
- * [[FAQ]]
- * [[Howto]]
- * [[Contact]]
* Development
* [[Todo]]
* [[IoTunables]]
@@ -23,3 +19,4 @@
* Other technical docs
* [[StablePages]]
* [[October 2022 talk for Redhat|bcachefs_talk_2022_10.mpv]]
+* [[FAQ]]