author    David S. Miller <davem@davemloft.net>  2016-11-25 16:26:12 -0500
committer David S. Miller <davem@davemloft.net>  2016-11-25 16:26:12 -0500
commit    ca89fa77b4488ecf2e3f72096386e8f3a58fe2fc (patch)
tree      978eb5557eddcc244bc63b1098cdb3975b1dffcd /kernel/bpf/cgroup.c
parent    619228d86b0e32e00758dcf07ca5d06903d9a9d7 (diff)
parent    d8c5b17f2bc0de09fbbfa14d90e8168163a579e7 (diff)
Merge branch 'cgroup-bpf'
Daniel Mack says:
====================
Add eBPF hooks for cgroups
This is v9 of the patch set that allows eBPF programs for network
filtering and accounting to be attached to cgroups, so that they apply
to all sockets of all tasks placed in that cgroup. The logic can also
be extended for other cgroup-based eBPF uses.
Again, only minor details are updated in this version.
Changes from v8:
* Move the egress hooks into ip_finish_output() and ip6_finish_output()
so they run after the netfilter hooks. For IPv4 multicast, add a new
ip_mc_finish_output() callback that is invoked on success by
netfilter, and call the eBPF program from there.
Changes from v7:
* Replace the static inline function cgroup_bpf_run_filter() with
two specific macros for ingress and egress. This addresses David
Miller's concern regarding skb->sk vs. sk in the egress path.
Thanks a lot to Daniel Borkmann and Alexei Starovoitov for the
suggestions.
Changes from v6:
* Rebased to 4.9-rc2
* Add EXPORT_SYMBOL(__cgroup_bpf_run_filter). The kbuild test robot
now succeeds in building this version of the patch set.
* Switch from bpf_prog_run_save_cb() to bpf_prog_run_clear_cb() to not
tamper with the contents of skb->cb[]. Pointed out by Daniel
Borkmann.
* Use sk_to_full_sk() in the egress path, as suggested by Daniel
Borkmann.
* Renamed BPF_PROG_TYPE_CGROUP_SOCKET to BPF_PROG_TYPE_CGROUP_SKB, as
requested by David Ahern.
* Added Alexei's Acked-by tags.
Changes from v5:
* The eBPF programs now operate on L3 rather than on L2 of the packets,
and the egress hooks were moved from __dev_queue_xmit() to
ip*_output().
* For BPF_PROG_TYPE_CGROUP_SOCKET, disallow direct access to the skb
through BPF_LD_[ABS|IND] instructions, but hook up the
bpf_skb_load_bytes() access helper instead. Thanks to Daniel Borkmann
for the help.
Changes from v4:
* Plug an skb leak when dropping packets due to eBPF verdicts in
__dev_queue_xmit(). Spotted by Daniel Borkmann.
* Check for sk_fullsock(sk) in __cgroup_bpf_run_filter() so we don't
operate on timewait or request sockets. Suggested by Daniel Borkmann.
* Add missing @parent parameter in kerneldoc of __cgroup_bpf_update().
Spotted by Rami Rosen.
* Include linux/jump_label.h from bpf-cgroup.h to fix a kbuild error.
Changes from v3:
* Dropped the _FILTER suffix from BPF_PROG_TYPE_CGROUP_SOCKET_FILTER,
renamed BPF_ATTACH_TYPE_CGROUP_INET_{E,IN}GRESS to
BPF_CGROUP_INET_{IN,E}GRESS and alias BPF_MAX_ATTACH_TYPE to
__BPF_MAX_ATTACH_TYPE, as suggested by Daniel Borkmann.
* Dropped the attach_flags member from the anonymous struct for BPF
attach operations in union bpf_attr. They can be added later on via
CHECK_ATTR. Requested by Daniel Borkmann and Alexei.
* Release old_prog at the end of __cgroup_bpf_update rather than at
the beginning to fix a race gap between program updates and their
users. Spotted by Daniel Borkmann.
* Plugged an skb leak when dropping packets on the egress path.
Spotted by Daniel Borkmann.
* Add cgroups@vger.kernel.org to the loop, as suggested by Rami Rosen.
* Some minor coding style adjustments not worth mentioning in particular.
Changes from v2:
* Fixed the RCU locking details Tejun pointed out.
* Assert bpf_attr.flags == 0 in BPF_PROG_DETACH syscall handler.
Changes from v1:
* Moved all bpf specific cgroup code into its own file, and stub
out related functions for !CONFIG_CGROUP_BPF as static inline nops.
This way, the call sites are not cluttered with #ifdef guards while
the feature remains compile-time configurable.
* Implemented the new scheme proposed by Tejun. Per cgroup, store one
set of pointers that are pinned to the cgroup, and one for the
programs that are effective. When a program is attached or detached,
the change is propagated to all the cgroup's descendants. If a
subcgroup has its own pinned program, skip the whole subbranch in
order to allow delegation models.
* The hookup for egress packets is now done from __dev_queue_xmit().
* A static key is now used in both the ingress and egress fast paths
to keep performance penalties close to zero if the feature is
not in use.
* Overall cleanup to make the accessors use the program arrays.
This should make it much easier to add new program types, which
will then automatically follow the pinned vs. effective logic.
* Fixed locking issues, as pointed out by Eric Dumazet and Alexei
Starovoitov. Changes to the program array are now done with
xchg() and are protected by cgroup_mutex.
* eBPF programs are now expected to return 1 to let the packet pass,
not >= 0. Pointed out by Alexei.
* Operation is now limited to INET sockets, so local AF_UNIX sockets
are not affected. The enum members are renamed accordingly. In case
other socket families should be supported, this can be extended in
the future.
* The sample program learned to support both ingress and egress, and
can now optionally make the eBPF program drop packets by making it
return 0.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'kernel/bpf/cgroup.c')
-rw-r--r--  kernel/bpf/cgroup.c  167
1 file changed, 167 insertions(+), 0 deletions(-)
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
new file mode 100644
index 000000000000..a0ab43f264b0
--- /dev/null
+++ b/kernel/bpf/cgroup.c
@@ -0,0 +1,167 @@
+/*
+ * Functions to manage eBPF programs attached to cgroups
+ *
+ * Copyright (c) 2016 Daniel Mack
+ *
+ * This file is subject to the terms and conditions of version 2 of the GNU
+ * General Public License. See the file COPYING in the main directory of the
+ * Linux distribution for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/atomic.h>
+#include <linux/cgroup.h>
+#include <linux/slab.h>
+#include <linux/bpf.h>
+#include <linux/bpf-cgroup.h>
+#include <net/sock.h>
+
+DEFINE_STATIC_KEY_FALSE(cgroup_bpf_enabled_key);
+EXPORT_SYMBOL(cgroup_bpf_enabled_key);
+
+/**
+ * cgroup_bpf_put() - put references of all bpf programs
+ * @cgrp: the cgroup to modify
+ */
+void cgroup_bpf_put(struct cgroup *cgrp)
+{
+	unsigned int type;
+
+	for (type = 0; type < ARRAY_SIZE(cgrp->bpf.prog); type++) {
+		struct bpf_prog *prog = cgrp->bpf.prog[type];
+
+		if (prog) {
+			bpf_prog_put(prog);
+			static_branch_dec(&cgroup_bpf_enabled_key);
+		}
+	}
+}
+
+/**
+ * cgroup_bpf_inherit() - inherit effective programs from parent
+ * @cgrp: the cgroup to modify
+ * @parent: the parent to inherit from
+ */
+void cgroup_bpf_inherit(struct cgroup *cgrp, struct cgroup *parent)
+{
+	unsigned int type;
+
+	for (type = 0; type < ARRAY_SIZE(cgrp->bpf.effective); type++) {
+		struct bpf_prog *e;
+
+		e = rcu_dereference_protected(parent->bpf.effective[type],
+					      lockdep_is_held(&cgroup_mutex));
+		rcu_assign_pointer(cgrp->bpf.effective[type], e);
+	}
+}
+
+/**
+ * __cgroup_bpf_update() - Update the pinned program of a cgroup, and
+ *                         propagate the change to descendants
+ * @cgrp: The cgroup whose descendants to traverse
+ * @parent: The parent of @cgrp, or %NULL if @cgrp is the root
+ * @prog: A new program to pin
+ * @type: Type of pinning operation (ingress/egress)
+ *
+ * Each cgroup has a set of two pointers for bpf programs; one for eBPF
+ * programs it owns, and one for the programs that are effective for it.
+ *
+ * If @prog is not %NULL, this function attaches a new program to the cgroup
+ * and releases the one that is currently attached, if any. @prog is then made
+ * the effective program of type @type in that cgroup.
+ *
+ * If @prog is %NULL, the currently attached program of type @type is released,
+ * and the effective program of the parent cgroup (if any) is inherited to
+ * @cgrp.
+ *
+ * Then, the descendants of @cgrp are walked and the effective program for
+ * each of them is set to the effective program of @cgrp unless the
+ * descendant has its own program attached, in which case the subbranch is
+ * skipped. This ensures that delegated subcgroups with own programs are left
+ * untouched.
+ *
+ * Must be called with cgroup_mutex held.
+ */
+void __cgroup_bpf_update(struct cgroup *cgrp,
+			 struct cgroup *parent,
+			 struct bpf_prog *prog,
+			 enum bpf_attach_type type)
+{
+	struct bpf_prog *old_prog, *effective;
+	struct cgroup_subsys_state *pos;
+
+	old_prog = xchg(cgrp->bpf.prog + type, prog);
+
+	effective = (!prog && parent) ?
+		rcu_dereference_protected(parent->bpf.effective[type],
+					  lockdep_is_held(&cgroup_mutex)) :
+		prog;
+
+	css_for_each_descendant_pre(pos, &cgrp->self) {
+		struct cgroup *desc = container_of(pos, struct cgroup, self);
+
+		/* skip the subtree if the descendant has its own program */
+		if (desc->bpf.prog[type] && desc != cgrp)
+			pos = css_rightmost_descendant(pos);
+		else
+			rcu_assign_pointer(desc->bpf.effective[type],
+					   effective);
+	}
+
+	if (prog)
+		static_branch_inc(&cgroup_bpf_enabled_key);
+
+	if (old_prog) {
+		bpf_prog_put(old_prog);
+		static_branch_dec(&cgroup_bpf_enabled_key);
+	}
+}
+
+/**
+ * __cgroup_bpf_run_filter() - Run a program for packet filtering
+ * @sk: The socket sending or receiving traffic
+ * @skb: The skb that is being sent or received
+ * @type: The type of program to be executed
+ *
+ * If no socket is passed, or the socket is not of type INET or INET6,
+ * this function does nothing and returns 0.
+ *
+ * The program type passed in via @type must be suitable for network
+ * filtering. No further check is performed to assert that.
+ *
+ * This function will return %-EPERM if an attached program was found
+ * and if it returned != 1 during execution. In all other cases, 0 is
+ * returned.
+ */
+int __cgroup_bpf_run_filter(struct sock *sk,
+			    struct sk_buff *skb,
+			    enum bpf_attach_type type)
+{
+	struct bpf_prog *prog;
+	struct cgroup *cgrp;
+	int ret = 0;
+
+	if (!sk || !sk_fullsock(sk))
+		return 0;
+
+	if (sk->sk_family != AF_INET &&
+	    sk->sk_family != AF_INET6)
+		return 0;
+
+	cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
+
+	rcu_read_lock();
+
+	prog = rcu_dereference(cgrp->bpf.effective[type]);
+	if (prog) {
+		unsigned int offset = skb->data - skb_network_header(skb);
+
+		__skb_push(skb, offset);
+		ret = bpf_prog_run_save_cb(prog, skb) == 1 ? 0 : -EPERM;
+		__skb_pull(skb, offset);
+	}
+
+	rcu_read_unlock();
+
+	return ret;
+}
+EXPORT_SYMBOL(__cgroup_bpf_run_filter);