
DoS mitigations


The protocol offers a peer multiple opportunities to mount DoS attacks if they are not properly mitigated. This page lists some of the DoS vulnerabilities we considered and implemented mitigations against.

Cryptography

  • The initial CHLO received by a server must be padded to at least 1024 bytes, to prevent DoS amplification attacks.
  • A crypto message must not contain more than 128 parameters, each of them smaller than 2 kB.
  • When caching compressed certificate chains, we need to be careful not to blindly use the hashes the client sends as keys for the cache, since a client sending random hashes could otherwise inflate RAM usage or evict valid entries from the cache (costing CPU).
  • We buffer up to MaxUndecryptablePackets = 10 undecryptable packets while the handshake is still in progress, and discard any further ones (see the sketch below).
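
A minimal sketch of the undecryptable-packet buffering described above. The constant name MaxUndecryptablePackets comes from the text; the session and packet types are purely illustrative and not quic-go's actual API.

```go
package main

import "fmt"

// maxUndecryptablePackets mirrors the MaxUndecryptablePackets = 10 limit
// described above; the surrounding types are illustrative, not quic-go's.
const maxUndecryptablePackets = 10

type packet []byte

type session struct {
	undecryptablePackets []packet
}

// queueUndecryptablePacket buffers a packet that arrived before the handshake
// completed. Once the limit is reached, further packets are dropped, bounding
// the memory an attacker can tie up per connection.
func (s *session) queueUndecryptablePacket(p packet) bool {
	if len(s.undecryptablePackets) >= maxUndecryptablePackets {
		return false // dropped
	}
	s.undecryptablePackets = append(s.undecryptablePackets, p)
	return true
}

func main() {
	s := &session{}
	for i := 0; i < 15; i++ {
		queued := s.queueUndecryptablePacket(packet{byte(i)})
		fmt.Printf("packet %d queued: %v\n", i, queued)
	}
}
```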

TODO: STKs

Connection Handling

  • We time out the connection after a negotiated period of inactivity. We restrict the negotiated value to 5 to 60 seconds (see the sketch after this list).
  • We limit the max number of unprocessed packets we queue per session (MaxSessionUnprocessedPackets = DefaultMaxCongestionWindow).
  • TODO: Other timeouts
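
A minimal sketch of clamping the negotiated idle timeout to the 5 to 60 second range mentioned above. The function and constant names are illustrative assumptions, not quic-go's API.

```go
package main

import (
	"fmt"
	"time"
)

// The bounds reflect the 5 to 60 second restriction described above.
const (
	minIdleTimeout = 5 * time.Second
	maxIdleTimeout = 60 * time.Second
)

// negotiateIdleTimeout clamps the peer's requested idle timeout into the
// allowed range, so a peer cannot keep a connection (and its state) alive
// indefinitely by requesting a huge timeout.
func negotiateIdleTimeout(requested time.Duration) time.Duration {
	if requested < minIdleTimeout {
		return minIdleTimeout
	}
	if requested > maxIdleTimeout {
		return maxIdleTimeout
	}
	return requested
}

func main() {
	fmt.Println(negotiateIdleTimeout(10 * time.Minute)) // clamped to 1m0s
	fmt.Println(negotiateIdleTimeout(1 * time.Second))  // clamped to 5s
	fmt.Println(negotiateIdleTimeout(30 * time.Second)) // unchanged: 30s
}
```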

Stream Handling

  • We limit the maximum number of streams per connection (MaxStreamsPerConnection / MaxIncomingDynamicStreams = 100).
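
A minimal sketch of refusing streams beyond that limit. The value 100 comes from the text; the stream map shown here is a simplified illustration, not quic-go's actual stream handling.

```go
package main

import (
	"errors"
	"fmt"
)

// maxIncomingStreams mirrors the MaxIncomingDynamicStreams = 100 limit above.
const maxIncomingStreams = 100

type streamMap struct {
	openStreams map[uint32]struct{}
}

var errTooManyStreams = errors.New("too many open streams")

// openIncomingStream refuses to open a new stream once the peer already has
// the maximum number of streams open, bounding per-connection state.
func (m *streamMap) openIncomingStream(id uint32) error {
	if len(m.openStreams) >= maxIncomingStreams {
		return errTooManyStreams
	}
	m.openStreams[id] = struct{}{}
	return nil
}

func main() {
	m := &streamMap{openStreams: make(map[uint32]struct{})}
	for id := uint32(1); id <= 101; id++ {
		if err := m.openIncomingStream(id); err != nil {
			fmt.Printf("stream %d rejected: %v\n", id, err)
		}
	}
}
```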

TODO: write about nil streams (closed streams)

StreamFrame sending

  • We don't buffer outgoing data in a separate buffer (e.g. a ring buffer), for performance reasons, so we are not vulnerable to a RAM DoS there. Other implementations should take care not to blindly accept the peer's receive flow control window (see the sketch after this list).
  • Our stream frame packing algorithm is O(N).
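
A sketch of the advice for other implementations: cap the amount of data buffered for sending locally instead of trusting the peer's advertised receive window. The 1 MB cap and the function name are illustrative assumptions, not anything from quic-go.

```go
package main

import "fmt"

// maxSendBuffer is an illustrative local cap; quic-go sidesteps the problem
// by not copying application data into a separate send buffer at all.
const maxSendBuffer = 1 << 20 // 1 MB

// sendWindow returns how much data may be buffered for sending: the peer's
// advertised receive window, but never more than our own local cap, so a
// peer advertising a huge window cannot make us allocate unbounded memory.
func sendWindow(peerReceiveWindow uint64) uint64 {
	if peerReceiveWindow > maxSendBuffer {
		return maxSendBuffer
	}
	return peerReceiveWindow
}

func main() {
	fmt.Println(sendWindow(16 << 20)) // capped at 1048576
	fmt.Println(sendWindow(64 << 10)) // 65536
}
```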

StreamFrame receiving

Flow control

  • We limit the maximum flow control window per stream to 1 MB, and the connection-level window to 1.5 MB, to bound the amount of RAM a peer can make us allocate. Violating either window causes a connection close (see the sketch below).
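
A minimal sketch of that window check. The 1 MB and 1.5 MB limits come from the text; the function itself is an illustrative assumption.

```go
package main

import (
	"errors"
	"fmt"
)

// The limits mirror the per-stream and connection-level windows described above.
const (
	maxStreamWindow     = 1 << 20           // 1 MB
	maxConnectionWindow = 3 * (1 << 20) / 2 // 1.5 MB
)

var errFlowControlViolation = errors.New("flow control violation: connection will be closed")

// checkFlowControl verifies that the highest received byte offsets stay within
// the advertised windows. A violation is treated as a connection error.
func checkFlowControl(streamHighestOffset, connHighestOffset uint64) error {
	if streamHighestOffset > maxStreamWindow || connHighestOffset > maxConnectionWindow {
		return errFlowControlViolation
	}
	return nil
}

func main() {
	fmt.Println(checkFlowControl(512<<10, 1<<20)) // within both windows: <nil>
	fmt.Println(checkFlowControl(2<<20, 2<<20))   // violation
}
```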

Overlapping stream data

A StreamFrame contains a chunk of data sent by the application. Since packets can arrive out of order, each StreamFrame carries a ByteOffset, and it is the task of the QUIC implementation to reassemble the chunks of data in the correct order before passing them up to the application layer. That means that if there is a gap at the beginning of the receive window, it has to buffer incoming StreamFrames until a contiguous chunk of data can be assembled.

A malicious client can now send the last byte of the allowed flow control window first, then the last two bytes, then the last three bytes, and so on. If the server buffers all of these StreamFrames, the client can make it consume O(N^2) memory, where N is the size of the flow control window in bytes (1 MB for Chrome).

This DoS vulnerability can be mitigated by keeping track of which byte ranges have already been received. On arrival of a new StreamFrame, one checks whether its data overlaps with previously received data and only buffers the bytes that are actually new. That way the client is guaranteed to only consume O(N) memory.
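
A minimal sketch of such overlap tracking, assuming a simple sorted-slice representation of received byte ranges (quic-go's actual data structure differs). Only the newly covered bytes of each frame are counted, so heavily overlapping frames contribute nothing.

```go
package main

import (
	"fmt"
	"sort"
)

// byteInterval describes a half-open range [start, end) of received bytes.
type byteInterval struct{ start, end uint64 }

// receivedRanges tracks which parts of the stream have already been received,
// as a sorted slice of non-overlapping intervals.
type receivedRanges struct {
	intervals []byteInterval
}

func minU64(a, b uint64) uint64 {
	if a < b {
		return a
	}
	return b
}

func maxU64(a, b uint64) uint64 {
	if a > b {
		return a
	}
	return b
}

// add merges the range [start, end) into the tracked ranges and returns the
// number of newly covered bytes. A return value of 0 means the frame only
// contains data we already have and can be dropped without buffering anything.
func (r *receivedRanges) add(start, end uint64) uint64 {
	newBytes := end - start
	merged := byteInterval{start, end}
	var kept []byteInterval
	for _, iv := range r.intervals {
		if iv.end < merged.start || iv.start > merged.end {
			kept = append(kept, iv) // disjoint, keep as-is
			continue
		}
		// Overlapping (or adjacent) interval: subtract the overlap with the
		// new frame from the byte count, then absorb it into the merged range.
		overlapStart := maxU64(iv.start, start)
		overlapEnd := minU64(iv.end, end)
		if overlapEnd > overlapStart {
			newBytes -= overlapEnd - overlapStart
		}
		merged.start = minU64(merged.start, iv.start)
		merged.end = maxU64(merged.end, iv.end)
	}
	kept = append(kept, merged)
	sort.Slice(kept, func(i, j int) bool { return kept[i].start < kept[j].start })
	r.intervals = kept
	return newBytes
}

func main() {
	var r receivedRanges
	fmt.Println(r.add(90, 100)) // 10 new bytes
	fmt.Println(r.add(80, 100)) // only 10 new bytes, the rest overlaps
	fmt.Println(r.add(95, 100)) // 0: pure overlap, the frame can be dropped
}
```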

Gapped stream data

  • TODO: write about gapped stream data

ACK Handling

Optimistic ACK attack

see here

Up to QUIC version 33: every packet contains an entropy bit. The entropy bits of all received packets are accumulated and included in the ACK frame. The server checks whether the accumulated entropy matches the entropy of the packets it actually sent, and closes the connection on a mismatch.

From QUIC version 34: the server randomly skips packet numbers. On average, it skips one packet in (TODO: what's the name of the ServerParameter value) packets. If the client ACKs a packet number that was skipped, the server closes the connection (see the sketch after this list).

  • We limit the number of skipped packets we track.
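
A minimal sketch of the skipped-packet-number check, with a bounded list of tracked skipped packet numbers as described in the bullet above. All names and types are illustrative, not quic-go's actual API.

```go
package main

import (
	"errors"
	"fmt"
)

// maxTrackedSkippedPackets bounds the state kept for this check.
const maxTrackedSkippedPackets = 10

type connection struct {
	skippedPackets []uint64
}

var errOptimisticACK = errors.New("peer ACKed a packet that was never sent")

// skipPacketNumber records a deliberately skipped packet number, evicting the
// oldest entry once the limit is reached.
func (c *connection) skipPacketNumber(pn uint64) {
	if len(c.skippedPackets) >= maxTrackedSkippedPackets {
		c.skippedPackets = c.skippedPackets[1:]
	}
	c.skippedPackets = append(c.skippedPackets, pn)
}

// receivedACKForPacket returns an error if the peer acknowledges a packet
// number that was intentionally skipped, i.e. never actually sent.
func (c *connection) receivedACKForPacket(pn uint64) error {
	for _, skipped := range c.skippedPackets {
		if pn == skipped {
			return errOptimisticACK
		}
	}
	return nil
}

func main() {
	c := &connection{}
	c.skipPacketNumber(7)
	fmt.Println(c.receivedACKForPacket(6)) // <nil>
	fmt.Println(c.receivedACKForPacket(7)) // optimistic ACK detected
}
```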

ACKing too many packets with an ACK frame

...