
Make hickory_proto::h3::H3ClientStream Clonable #2182

Merged
merged 1 commit into hickory-dns:main on May 4, 2024

Conversation

0xffffharry
Contributor

Make hickory_proto::h3::H3ClientStream cloneable, so that an existing H3ClientStream can be reused.

@djc
Collaborator

djc commented Apr 15, 2024

What's the use case exactly? Not sure why we need/want to add a layer of locking here? In general I feel like the rest of the project uses channels to communicate with the task where the connection is held, rather than using locks.
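(For context, a minimal sketch of the channel-based pattern described above, assuming tokio: one spawned task owns the connection state and cloned handles talk to it over an mpsc channel, so no locking is required. Request, Handle, and spawn_driver are illustrative names, not hickory-proto APIs.)

use tokio::sync::{mpsc, oneshot};

// A request paired with the channel on which the driver task sends its reply.
struct Request {
    payload: Vec<u8>,
    reply: oneshot::Sender<Vec<u8>>,
}

// Cheap-to-clone handle: many callers, one task owning the connection.
#[derive(Clone)]
struct Handle {
    tx: mpsc::Sender<Request>,
}

impl Handle {
    async fn send(&self, payload: Vec<u8>) -> Option<Vec<u8>> {
        let (reply_tx, reply_rx) = oneshot::channel();
        self.tx.send(Request { payload, reply: reply_tx }).await.ok()?;
        reply_rx.await.ok()
    }
}

fn spawn_driver() -> Handle {
    let (tx, mut rx) = mpsc::channel::<Request>(32);
    tokio::spawn(async move {
        // Only this task touches the connection state, so no Mutex is needed.
        while let Some(req) = rx.recv().await {
            // A real driver would forward to the connection; echo back for the sketch.
            let _ = req.reply.send(req.payload);
        }
    });
    Handle { tx }
}

#[tokio::main]
async fn main() {
    let handle = spawn_driver();
    let other = handle.clone();
    let a = tokio::spawn(async move { handle.send(b"query a".to_vec()).await });
    let b = tokio::spawn(async move { other.send(b"query b".to_vec()).await });
    println!("{:?} {:?}", a.await.unwrap(), b.await.unwrap());
}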

@0xffffharry
Contributor Author

0xffffharry commented Apr 15, 2024

This allows multiple clones of H3ClientStream to call poll_closed simultaneously, and it lets an existing H3ClientStream be reused with fewer changes.

On the other hand, a better approach would be to separate the driver from H3ClientStream, but that requires a larger amount of modification.
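(A rough sketch of what separating the driver could look like, modeled loosely on the handle-plus-connection split used by h2/HttpsClientStream: building the connection yields a cheaply cloneable sender handle plus a driver future that the caller spawns. SendHandle, Driver, and connect are hypothetical names, not the actual H3ClientStream API.)

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use tokio::sync::{mpsc, oneshot};

type Reply = oneshot::Sender<Vec<u8>>;

// Cloneable handle: it only holds a channel sender, so no lock is needed.
#[derive(Clone)]
struct SendHandle {
    tx: mpsc::UnboundedSender<(Vec<u8>, Reply)>,
}

impl SendHandle {
    async fn send(&self, msg: Vec<u8>) -> Option<Vec<u8>> {
        let (reply_tx, reply_rx) = oneshot::channel();
        self.tx.send((msg, reply_tx)).ok()?;
        reply_rx.await.ok()
    }
}

// The driver owns the connection state and runs as its own future,
// typically handed to tokio::spawn by the caller.
struct Driver {
    rx: mpsc::UnboundedReceiver<(Vec<u8>, Reply)>,
}

impl Future for Driver {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let this = self.get_mut();
        loop {
            // A real driver would also poll the underlying QUIC/H3 connection here.
            match this.rx.poll_recv(cx) {
                Poll::Ready(Some((msg, reply))) => {
                    let _ = reply.send(msg); // echo back for the sketch
                }
                Poll::Ready(None) => return Poll::Ready(()),
                Poll::Pending => return Poll::Pending,
            }
        }
    }
}

fn connect() -> (SendHandle, Driver) {
    let (tx, rx) = mpsc::unbounded_channel();
    (SendHandle { tx }, Driver { rx })
}

#[tokio::main]
async fn main() {
    let (handle, driver) = connect();
    tokio::spawn(driver); // the driver runs independently of its handles

    let other = handle.clone();
    let a = tokio::spawn(async move { handle.send(b"a".to_vec()).await });
    let b = tokio::spawn(async move { other.send(b"b".to_vec()).await });
    println!("{:?} {:?}", a.await.unwrap(), b.await.unwrap());
}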

@djc
Collaborator

djc commented Apr 15, 2024

Unless you have benchmarked this to show what the effect on performance is, I don't think we should merge this in its current form.

@djc
Collaborator

djc commented Apr 15, 2024

This allows multiple clones of H3ClientStream to call poll_closed simultaneously, and it lets an existing H3ClientStream be reused with fewer changes.

I'm not sure I understand what benefits you're suggesting here -- it definitely isn't an answer to my question about "use cases" -- which are by definition higher level.

@0xffffharry
Contributor Author

I will add test cases at some point in the future to compare the performance differences before and after.

@0xffffharry
Contributor Author

0xffffharry commented Apr 16, 2024

@djc

I rewrote the code, separating the driver from H3ClientStream with reference to HttpsClientStream. Please review the code.

I have also written test code to compare performance before and after the modifications; the difference is minimal and can be disregarded.

Test Code:

use std::{
    net::{Ipv4Addr, SocketAddr, SocketAddrV4},
    sync::Arc,
    time::Instant,
};

use futures::StreamExt;
use hickory_proto::{
    op::{Message, Query},
    rr::RecordType,
    xfer::{DnsRequest, DnsRequestSender},
};
use rand::Rng;

/// Build a simple A-record query for dns.alidns.com with a random id.
fn get_message() -> Message {
    let query = Query::query("dns.alidns.com".parse().unwrap(), RecordType::A);
    let mut message = Message::new();
    message.set_id(rand::thread_rng().gen());
    message.set_recursion_desired(true);
    message.add_query(query);
    message
}

#[tokio::main]
async fn main() {
    // TLS 1.3 config with certificate verification disabled for this benchmark.
    let mut tls_config = hickory_proto::h3::client_config_tls13().unwrap();
    tls_config
        .dangerous()
        .set_certificate_verifier(Arc::new(DangerousVerifier));

    let mut builder = hickory_proto::h3::H3ClientStream::builder();
    builder.crypto_config(tls_config);

    let mut stream = builder
        .build(
            SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(223, 5, 5, 5), 443)),
            "dns.alidns.com".to_string(),
        )
        .await
        .unwrap();

    // Send ten queries over the same connection and time each round trip.
    for i in 0..10 {
        let start = Instant::now();

        let mut resp_stream = stream.send_message(DnsRequest::new(get_message(), Default::default()));

        while let Some(s) = resp_stream.next().await {
            println!("{i}: {:?}", s.is_ok());
        }

        println!("{i}: time: {:?}", start.elapsed());
    }
}

// Accepts any server certificate; for this throwaway benchmark only.
struct DangerousVerifier;

impl rustls::client::ServerCertVerifier for DangerousVerifier {
    fn verify_server_cert(
        &self,
        _: &rustls::Certificate,
        _: &[rustls::Certificate],
        _: &rustls::ServerName,
        _: &mut dyn Iterator<Item = &[u8]>,
        _: &[u8],
        _: std::time::SystemTime,
    ) -> Result<rustls::client::ServerCertVerified, rustls::Error> {
        Ok(rustls::client::ServerCertVerified::assertion())
    }

    fn verify_tls12_signature(
        &self,
        _: &[u8],
        _: &rustls::Certificate,
        _: &rustls::DigitallySignedStruct,
    ) -> Result<rustls::client::HandshakeSignatureValid, rustls::Error> {
        Ok(rustls::client::HandshakeSignatureValid::assertion())
    }

    fn verify_tls13_signature(
        &self,
        _: &[u8],
        _: &rustls::Certificate,
        _: &rustls::DigitallySignedStruct,
    ) -> Result<rustls::client::HandshakeSignatureValid, rustls::Error> {
        Ok(rustls::client::HandshakeSignatureValid::assertion())
    }
}

Test Result (ms):

[screenshot of per-request timings omitted]

@djc
Collaborator

djc commented Apr 16, 2024

You still haven't answered my question about use cases.

@0xffffharry
Contributor Author

0xffffharry commented Apr 16, 2024

Section 3.3 ("Connection Reuse") of RFC 9114 (HTTP/3) explains that an HTTP/3 connection can be reused to send multiple requests concurrently.

The send_message function in DnsRequestSender takes a mutable reference, but the H3ClientStream implementation does not actually need mutability:

impl DnsRequestSender for H3ClientStream {
    // &mut self can be replaced with &self.
    fn send_message(&mut self, mut message: DnsRequest) -> DnsResponseStream {
        if self.is_shutdown {
            panic!("can not send messages after stream is shutdown")
        }

        // per the RFC, a zero id allows for the HTTP packet to be cached better
        message.set_id(0);

        let bytes = match message.to_vec() {
            Ok(bytes) => bytes,
            Err(err) => return err.into(),
        };

        Box::pin(Self::inner_send(
            self.send_request.clone(),
            Bytes::from(bytes),
            Arc::clone(&self.name_server_name),
        ))
        .into()
    }

    fn shutdown(&mut self) {
        self.is_shutdown = true;
    }

    fn is_shutdown(&self) -> bool {
        self.is_shutdown
    }
}

This prevents a single H3ClientStream from invoking send_message concurrently to create multiple requests on one connection.

This modification makes H3ClientStream cloneable to solve that problem.
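(A hedged usage sketch of what this enables, assuming the connection setup and the get_message() helper from the test program above, and that the stream and its response futures are Send. concurrent_queries is an illustrative helper, not part of the PR: each clone calls send_message from its own task, so several queries can be in flight on one connection.)

use futures::StreamExt;
use hickory_proto::h3::H3ClientStream;
use hickory_proto::xfer::{DnsRequest, DnsRequestSender};

async fn concurrent_queries(stream: H3ClientStream) {
    // Two clones of the same underlying HTTP/3 connection.
    let mut first = stream.clone();
    let mut second = stream;

    let a = tokio::spawn(async move {
        let mut resp = first.send_message(DnsRequest::new(get_message(), Default::default()));
        resp.next().await.map(|r| r.is_ok())
    });
    let b = tokio::spawn(async move {
        let mut resp = second.send_message(DnsRequest::new(get_message(), Default::default()));
        resp.next().await.map(|r| r.is_ok())
    });

    println!("{:?} {:?}", a.await.unwrap(), b.await.unwrap());
}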

@bluejekyll
Member

bluejekyll commented Apr 20, 2024

FWIW, http/2 has similar semantics, and we do have that clonable, so it seems reasonable to me to make h3 clonable as well: https://docs.rs/hickory-proto/latest/hickory_proto/h2/struct.HttpsClientStream.html

I'd be inclined to approve this based on that, @djc

@djc
Collaborator

djc commented Apr 21, 2024

That's fair, I guess.

@bluejekyll
Member

bluejekyll left a comment


Given the discussion, I think this looks good. Could you rebase and fix the rustfmt issue? and then I can merge, @0xffffharry .

@0xffffharry
Copy link
Contributor Author

0xffffharry commented May 2, 2024

Given the discussion, I think this looks good. Could you rebase and fix the rustfmt issue? and then I can merge, @0xffffharry .

Done. @bluejekyll

@bluejekyll merged commit cffc3fa into hickory-dns:main on May 4, 2024
18 checks passed