Hannes Tschofenig
2017-11-20 14:17:03 UTC
Hi all,
Some of you may have missed the TLS WG meeting last week, where we had a
discussion about the Connection ID. The slides can be found at
https://datatracker.ietf.org/meeting/100/materials/slides-100-tls-sessa-connection-id/
and the draft itself is here:
https://tools.ietf.org/html/draft-rescorla-tls-dtls-connection-id-02
I am bringing this to your attention since there have been several
discussions on this mailing list about problems with expired NAT bindings
and DTLS.
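
To recap the idea for those who missed the discussion: today a DTLS server
can only map an incoming record to an existing session via the sender's IP
address and port, so when a NAT binding expires and the client shows up from
a new address, the record can no longer be attributed to the session. A
connection ID moves that lookup key into the record itself. The little
Python sketch below is purely an illustration of that point, not code from
the draft; the data structures and names are made up for the example.

    # Illustrative sketch (not from the draft): why a connection ID keeps a
    # DTLS session reachable after the client's source address changes.

    class Session:
        def __init__(self, name):
            self.name = name

        def process(self, payload, src_addr):
            print(f"{self.name}: record from {src_addr}: {payload!r}")

    # Classic demultiplexing keys records on the sender's address ...
    sessions_by_addr = {("192.0.2.1", 5684): Session("client-A")}
    # ... while a CID-capable receiver can also key them on the connection
    # ID it handed out during the handshake.
    sessions_by_cid = {b"\x01\x02": Session("client-A")}

    def handle_record(src_addr, cid, payload):
        """cid is the connection ID carried in the record, or None."""
        if cid:
            session = sessions_by_cid.get(cid)
        else:
            session = sessions_by_addr.get(src_addr)
        if session is None:
            print(f"no session for {src_addr}, dropping")
            return
        session.process(payload, src_addr)

    # Before a NAT rebinding the address-based lookup works:
    handle_record(("192.0.2.1", 5684), None, b"ping")
    # After the binding expired the source address changed; only the
    # connection ID still identifies the session:
    handle_record(("198.51.100.7", 49152), None, b"ping")         # dropped
    handle_record(("198.51.100.7", 49152), b"\x01\x02", b"ping")  # delivered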
Here is the good news: the TLS WG meeting participants expressed strong
consensus to adopt this work.
We are planning to advance the work rapidly given the urgency. There are
open issues, which we plan to address in the next couple of weeks. Then,
we would like to do an online interop test. If you have an
implementation, please drop me a private mail.
Ciao
Hannes
PS: Note that the latest DTLS 1.3 spec now includes an optimized record
layer format, which can be used with the connection ID draft. This leads
to a smaller per-packet overhead.