c# - Measure round-trip TCP latency without changes to the application protocol
Is there a way (preferably in C#) to regularly measure connection-layer latency (round trip) without changing the application protocol and without creating a separate dedicated connection, e.g. using a SYN/ACK trick similar to tcping, but without closing and reopening the connection?
I'm connecting to servers via a given ASCII-based protocol (and using TCP_NODELAY). The servers send me a large number of discrete messages, and I regularly send a 'heartbeat' payload (but there is no response payload to the heartbeat). I cannot change the protocol, and in many cases I cannot create more than one physical connection to a server.
Keep in mind TCP windowing; it can cause issues when you try to implement an elegant seq/ack solution. (You want to sequence, not synchronize.)
[Edit: snipped an overcomplicated and confusing explanation.]
I'd say the best you can do is use a simple stopwatch method: start a timer, make a thin request or poll, and measure the time it takes. If the query is the lightest one you can make, it should give you the minimum amount of time you can reasonably expect to wait, which is more valuable than a ping (which can be misleading).
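A minimal sketch of that stopwatch approach, assuming you have an open `NetworkStream` and some hypothetical `requestBytes` representing the lightest query your protocol supports (the real payload, and how you recognize the reply, depend entirely on your protocol):

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;

static class LatencyProbe
{
    // Measures the time from sending a minimal request on an existing
    // connection until the first byte of the reply arrives. Caveat: with
    // servers that push messages concurrently, the first byte read may
    // belong to an unrelated message, so in practice you may need to
    // parse frames and match the actual response to your request.
    public static TimeSpan MeasureRoundTrip(NetworkStream stream, byte[] requestBytes)
    {
        var buffer = new byte[1];
        var sw = Stopwatch.StartNew();
        stream.Write(requestBytes, 0, requestBytes.Length);
        stream.Flush();
        // Block until at least one byte of the reply shows up.
        stream.Read(buffer, 0, 1);
        sw.Stop();
        return sw.Elapsed;
    }
}
```

Because the request and the normal traffic share one connection (and one TCP window), a large in-flight backlog will inflate the measurement; that's arguably a feature, since it reflects the latency your application actually experiences.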
If you absolutely need the network time there and back, use an ICMP ping.
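For completeness, ICMP is straightforward with `System.Net.NetworkInformation.Ping`; a small sketch (host name is a placeholder):

```csharp
using System;
using System.Net.NetworkInformation;

static class IcmpProbe
{
    // Returns the ICMP round-trip time in milliseconds, or null on failure.
    // Note: this measures the network path only; it excludes the server's
    // TCP stack and application processing, which is why it can mislead.
    public static long? PingRoundTrip(string host, int timeoutMs = 1000)
    {
        using var ping = new Ping();
        PingReply reply = ping.Send(host, timeoutMs);
        return reply.Status == IPStatus.Success ? reply.RoundtripTime : (long?)null;
    }
}
```

Keep in mind some networks deprioritize or drop ICMP entirely, so it may disagree with what your TCP connection sees.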