author     Michal Kubecek <mkubecek@suse.cz>   2016-05-09 11:01:04 +0200
committer  Jiri Slaby <jslaby@suse.cz>         2016-09-29 11:14:19 +0200
commit     af29d5b57acef4c573c8361a509908115b9ced68 (patch)
tree       ae35552b74232ae65ea0d351447dcd27df023898 /net/ipv6/netfilter
parent     dea85278d68898838020b9f9edfa159f0a2b7eea (diff)
net: disable fragment reassembly if high_thresh is set to zero
commit 30759219f562cfaaebe7b9c1d1c0e6b5445c69b0 upstream.

Before commit 6d7b857d541e ("net: use lib/percpu_counter API for fragmentation mem accounting"), setting the high threshold to 0 prevented fragment reassembly, as the first fragment would always be evicted before the second could be added to the queue. While inefficient, some users apparently relied on this behavior.

Since the commit mentioned above, a percpu counter is used for reassembly memory accounting, and its large batch size avoids taking the slow path in the most common scenarios. As a result, a whole full-sized packet can be reassembled without the percpu counter's main counter changing its value, so that even with high_thresh set to 0, fragmented packets can still be reassembled and processed.

Add explicit checks preventing reassembly if the high threshold is zero.

[mk] backport to 3.12

Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
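Why the old threshold comparison stops working can be illustrated with a small user-space model of a batched per-CPU counter. The sketch below is only an illustration of the mechanism the commit message describes; the names (batched_counter, counter_add, BATCH) and the batch value are invented for this example and are not the kernel's percpu_counter API:

/*
 * Illustration only: additions smaller than the batch size stay in the
 * local (per-CPU) delta, so the global count that a threshold comparison
 * reads remains 0 even though fragment memory is actually in use.
 */
#include <stdio.h>

#define BATCH 130000                    /* arbitrary large batch size */

struct batched_counter {
	long global;                    /* what the threshold check reads */
	long local_delta;               /* per-CPU accumulation, not visible to readers */
};

static void counter_add(struct batched_counter *c, long amount)
{
	c->local_delta += amount;
	if (c->local_delta >= BATCH || c->local_delta <= -BATCH) {
		c->global += c->local_delta;   /* slow path: fold into global count */
		c->local_delta = 0;
	}
}

int main(void)
{
	struct batched_counter frag_mem = { 0, 0 };
	long high_thresh = 0;                  /* "disable reassembly" setting */

	counter_add(&frag_mem, 1500);          /* charge one full-sized fragment */

	/* The old behavior relied on this comparison seeing the charged memory. */
	if (frag_mem.global > high_thresh)
		printf("over threshold: queued fragments would be evicted\n");
	else
		printf("appears under threshold, reassembly proceeds despite high_thresh == 0\n");

	return 0;
}

This is why the patch adds an explicit test of high_thresh itself: once the threshold is zero (for nf_conntrack this is typically set via the nf_conntrack_frag6_high_thresh sysctl), nf_ct_frag6_gather() hands the skb back untouched before any clone or queueing happens, regardless of what the counter currently reads.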
Diffstat (limited to 'net/ipv6/netfilter')
-rw-r--r--  net/ipv6/netfilter/nf_conntrack_reasm.c | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
index 7cd623588532..c11a40caf5b6 100644
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
@@ -569,6 +569,9 @@ struct sk_buff *nf_ct_frag6_gather(struct sk_buff *skb, u32 user)
 	if (find_prev_fhdr(skb, &prevhdr, &nhoff, &fhoff) < 0)
 		return skb;
 
+	if (!net->nf_frag.frags.high_thresh)
+		return skb;
+
 	clone = skb_clone(skb, GFP_ATOMIC);
 	if (clone == NULL) {
 		pr_debug("Can't clone skb\n");