author		Eric Dumazet <edumazet@google.com>	2018-10-10 12:30:05 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2019-02-08 11:25:32 +0100
commit		826ff799146685450f84e3158ce66499c928c8ea (patch)
tree		eafe402da8eea8a11c98ed9959c3e19206b5f0b7
parent		29ff723c549906a5d8d64dd406d1b1b4da0eb85b (diff)
inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
commit bf66337140c64c27fa37222b7abca7e49d63fb57 upstream.

ip_defrag uses skb->cb[] to store the fragment offset, and unfortunately
this integer is currently in a different cache line than skb->next,
meaning that we use two cache lines per skb when finding the insertion
point.

By aliasing skb->ip_defrag_offset and skb->dev, we pack all the fields
in a single cache line and save precious memory bandwidth.

Note that after the fast path added by Changli Gao in commit
d6bebca92c66 ("fragment: add fast path for in-order fragments")
this change won't help the fast path, since we still need to access
prev->len (2nd cache line), but will show great benefits when the slow
path is entered, since we perform a linear scan of a potentially long
list.

Also note that this potentially long list is an attack vector; we might
consider also using an rb-tree there eventually.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
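The aliasing trick the message describes is plain C: two fields that are
never live at the same time share one union slot, so the hot insertion
loop touches a single cache line per node. Below is a minimal user-space
sketch of that technique, not the kernel code; struct pkt, queue_insert
and the field names are hypothetical stand-ins.

/*
 * Sketch: alias a pointer used outside the queue with an int used
 * only while queued, so the sorted-insert scan reads one cache line
 * per node (->next and ->frag_offset sit together).
 */
#include <stddef.h>
#include <stdio.h>

struct device;				/* opaque stand-in for net_device */

struct pkt {
	struct pkt *next;		/* hot: walked during insertion */
	union {
		struct device *dev;	/* used once the pkt leaves the queue */
		int frag_offset;	/* used only while queued */
	};
};

/* Insert p into a list kept sorted by fragment offset. */
static void queue_insert(struct pkt **head, struct pkt *p)
{
	struct pkt **pp = head;

	/* Only ->next and ->frag_offset are read per node; with the
	 * union they share the node's first cache line. */
	while (*pp && (*pp)->frag_offset < p->frag_offset)
		pp = &(*pp)->next;
	p->next = *pp;
	*pp = p;
}

int main(void)
{
	struct pkt a = { .frag_offset = 1480 };
	struct pkt b = { .frag_offset = 0 };
	struct pkt *head = NULL;

	queue_insert(&head, &a);
	queue_insert(&head, &b);

	for (struct pkt *p = head; p; p = p->next)
		printf("offset %d\n", p->frag_offset);
	return 0;
}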
-rw-r--r--	include/linux/skbuff.h	5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 6d39d81d3c38..053bdfb526f7 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -558,6 +558,11 @@ struct sk_buff {
 		};
 		struct rb_node	rbnode; /* used in netem & tcp stack */
 	};
+
+	union {
+		int			ip_defrag_offset;
+	};
+
 	struct sock		*sk;
 	struct net_device	*dev;
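
For a rough feel of the cache-line claim in the commit message, here is a
hedged, runnable toy; the layout is a simplified stand-in for struct
sk_buff, not the real definition, and the 24-byte header assumed inside
cb[] only approximates inet_skb_parm.

/*
 * Toy demonstration (offsets assume LP64): the offset kept inside the
 * cb[] scratch area lands on a different 64-byte cache line than
 * ->next, while a union slot next to the other pointers stays on the
 * same line.
 */
#include <stdio.h>
#include <stddef.h>

struct toy_skb {
	struct toy_skb	*next;		/* byte 0  */
	struct toy_skb	*prev;		/* byte 8  */
	unsigned long	tstamp;		/* byte 16 */
	void		*sk;		/* byte 24 */
	union {
		void	*dev;		/* byte 32 */
		int	new_offset;	/* aliased: also byte 32 */
	};
	char		cb[48];		/* bytes 40..87 */
};

/* Pre-patch scheme: offset buried inside cb[], after the ~24-byte
 * header the IP layer keeps there. */
#define OLD_OFFSET_POS	(offsetof(struct toy_skb, cb) + 24)

int main(void)
{
	printf("next       -> cache line %zu\n",
	       offsetof(struct toy_skb, next) / 64);
	printf("old offset -> cache line %zu\n", OLD_OFFSET_POS / 64);
	printf("new offset -> cache line %zu\n",
	       offsetof(struct toy_skb, new_offset) / 64);
	return 0;
}

With 64-byte lines, the cb[]-based offset falls on line 1 while ->next
and the union slot stay on line 0: the two-lines-versus-one difference
the commit message describes for the insertion scan.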