Handling a connection error inside a loop in Rust is surprisingly cheap on the CPU, but why?

use std::io::ErrorKind;
use std::net::TcpStream;

fn main() {
    let address = "localhost:7000";
    loop {
        match TcpStream::connect(address.clone()) {
            Err(err) => {
                match err.kind() {
                    // Keep retrying while the connection is refused.
                    ErrorKind::ConnectionRefused => {
                        continue;
                    },
                    kind => panic!("Error occurred: {:?}", kind),
                };
            },
            Ok(_stream) => { /* do some stuff here */ },
        }
    }
}

Consider the Rust code snippet above. I'm not interested in the Ok branch, but rather in the ErrorKind::ConnectionRefused branch inside the loop: it is very cheap, consuming less than 1% of the CPU. This is great; this is what I want.

But I don't understand why it is so cheap: comparable code in C would likely peg a core at 100%, mostly spinning (not literally NOPs, but close enough). Can someone help me understand why this is so cheap?
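For comparison, here is the kind of pure busy loop I have in mind, sketched in Rust rather than C (std::hint::black_box is only there to stop the compiler from optimising the work away):

use std::hint::black_box;

fn main() {
    let mut n: u64 = 0;
    loop {
        // Pure CPU work with no system calls: nothing ever blocks,
        // so this thread never yields and one core sits at ~100%.
        n = black_box(n.wrapping_add(1));
    }
}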

1 answer

Most likely, connect() is the culprit: to get a ConnectionRefused error, you first need to resolve the address (which should be cheap for localhost), then attempt the connection and wait for it to be refused.
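A minimal sketch to see this for yourself, assuming nothing is listening on localhost:7000 as in the question: time a single connect() call and note that the refusal takes a measurable amount of time rather than returning instantly.

use std::net::TcpStream;
use std::time::Instant;

fn main() {
    let start = Instant::now();
    // Resolving "localhost" and waiting for the OS to report the
    // refusal both happen inside this single blocking call.
    let result = TcpStream::connect("localhost:7000");
    println!("connect failed: {}, took {:?}", result.is_err(), start.elapsed());
}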

While localhost is, of course, quite fast compared to remote network services, there is still a lot of overhead.

ping localhost has a latency of about 0.9 ms for me. That means your loop can only run on the order of 1,000 to 10,000 iterations per second, which is nothing compared to an actual while true {} busy loop.
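A rough sketch to check that arithmetic (the one-second window and the port are arbitrary assumptions): count how many refused connections fit into one second. The thread spends almost all of that second blocked in the kernel rather than spinning, which is why the CPU usage stays so low.

use std::net::TcpStream;
use std::time::{Duration, Instant};

fn main() {
    let start = Instant::now();
    let mut iterations: u64 = 0;
    while start.elapsed() < Duration::from_secs(1) {
        // Each attempt blocks until the OS reports the refusal,
        // so the thread sleeps instead of busy-waiting.
        let _ = TcpStream::connect("localhost:7000");
        iterations += 1;
    }
    println!("{} refused connections per second", iterations);
}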
