- 06 May, 2020 3 commits
-
-
Fabrice Bellet authored
When an existing peer-reflexive remote candidate is updated to a server-reflexive one, due to the late reception of remote candidates, this update has several consequences on the conncheck list:
- pair foundations and priorities must be recomputed
- the highest nominated pair may have changed too
- this is not strictly required, but some pairs that had *a lower* priority than the previously peer-reflexive nominated pair had their retransmit flag set to false for this reason. These pairs may now have *a higher* priority than the newly promoted nominated pair, and it is fair in that case to turn their retransmit flag back to true.
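A minimal Python sketch of the bookkeeping described above, assuming the RFC 8445 §6.1.2.3 pair-priority formula; names like `Pair` and `refresh_after_promotion` are illustrative, not libnice API:

```python
from dataclasses import dataclass

def pair_priority(g: int, d: int) -> int:
    # RFC 8445 §6.1.2.3: 2^32 * MIN(G,D) + 2 * MAX(G,D) + (G > D ? 1 : 0)
    return (1 << 32) * min(g, d) + 2 * max(g, d) + (1 if g > d else 0)

@dataclass
class Pair:
    local_prio: int
    remote_prio: int
    nominated: bool = False
    retransmit: bool = True
    priority: int = 0

def refresh_after_promotion(checklist):
    # Recompute pair priorities after the remote candidate's priority changed.
    for p in checklist:
        p.priority = pair_priority(p.local_prio, p.remote_prio)
    # The highest-priority nominated pair may have changed.
    nominated = max((p for p in checklist if p.nominated),
                    key=lambda p: p.priority, default=None)
    # Pairs that now outrank the nominated pair get their retransmit flag back.
    if nominated:
        for p in checklist:
            if p.priority > nominated.priority:
                p.retransmit = True
    return nominated
```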
-
Olivier Crête authored
-
Olivier Crête authored
This makes for clearer reports in the CI
-
- 05 May, 2020 5 commits
-
-
Tim-Philipp Müller authored
candidate.c:351:12: warning: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 4/5 has type ‘guint64’ {aka ‘long long unsigned int’} [-Wformat=]
-
Olivier Crête authored
-
Olivier Crête authored
This matches the rest of the tests.
-
Olivier Crête authored
This race condition is hit all the time when running the test under valgrind.
-
Olivier Crête authored
-
- 04 May, 2020 14 commits
-
-
Fabrice Bellet authored
-
Fabrice Bellet authored
-
Fabrice Bellet authored
-
Fabrice Bellet authored
-
Fabrice Bellet authored
Only the newest stun request may need to be retransmitted, according to the pair's retransmit flag. This is the first element of the stun_transactions list. Older stun requests are just kept around until their timeout expires, without retransmission. The newest stun request is usually the last one to time out, and the current code was based on that assumption, causing the pair to fail when the newest stun request's timeout expires. But this is not always true: some older stun requests may have a greater timeout delay. So we should wait until *all* stun requests of a given pair have reached their timeout. We also refactor this part of the code to handle the first stun request and the other stun requests in the same loop.
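The failure condition can be sketched in a few lines of Python; the deadline list stands in for the per-transaction timeouts of the stun_transactions list (an illustrative model, not the libnice data structure):

```python
def pair_timed_out(transaction_deadlines, now):
    # A pair fails only when *every* stun transaction has reached its own
    # timeout; the newest request (first in the list) is not necessarily
    # the last one to expire, since older requests may have longer delays.
    return bool(transaction_deadlines) and \
        all(now >= deadline for deadline in transaction_deadlines)
```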
-
Fabrice Bellet authored
This constraint is added to handle the situation where the agent runs on a box doing SNAT on one of its outgoing network interfaces. The NAT usually does its best to ensure that the source port number is preserved on the external NAT address and port. This is called "port preservation" in RFC 4787. When two local host candidates are allowed to have the same source port number, we increase the risk that a first local host candidate *is* the NAT mapping address and port of a second local host candidate, because of the "port preservation" effect. When that happens, a server reflexive candidate and a host candidate will have the same address and port. For this situation to arise, a stun request must be emitted from the internal address first, so the NAT mapping doing the port preservation is created for the internal address; when a stun request is sent from the external address afterwards, a new NAT mapping is created, but without port preservation, because the previous mapping already took that reservation. The problem occurs on the remote agent: when it receives a stun request from this address and port, it has no way to know whether the request comes from the host or the server reflexive candidate, if both have been advertised remotely, resulting in pair type mislabelling. This case happens more easily when the source port range is reduced.
-
Fabrice Bellet authored
When remote tcp candidates are received late, after the conncheck has started, RFC 6544 suggests that we switch the nomination mode back from aggressive to regular. The problem is that some stun requests may already be in-flight with the use-candidate stun flag set, and reverting to regular mode at that point is too late, because these in-flight requests may nominate a pair on the remote agent but not on the local agent. We prefer to just ignore remote tcp candidates that are received after the component has reached state CONNECTING.
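The resulting acceptance rule is a simple predicate; this Python sketch uses an assumed state ordering and function name, not the libnice enums:

```python
# Illustrative component-state ordering (hypothetical names).
STATE_ORDER = ["disconnected", "gathering", "connecting", "connected", "ready"]

def should_ignore_remote_candidate(transport, component_state):
    # Drop remote TCP candidates that arrive once connectivity checks have
    # started: reverting the nomination mode then would be too late, because
    # in-flight USE-CANDIDATE requests may already nominate a pair remotely.
    checks_started = (STATE_ORDER.index(component_state)
                      >= STATE_ORDER.index("connecting"))
    return transport.startswith("tcp") and checks_started
```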
-
Fabrice Bellet authored
-
Fabrice Bellet authored
We can accept up to 8 turn servers, with the turn preference value starting at zero. Also fix the error message.
-
Fabrice Bellet authored
This value is built from the position of the turn server in the component's turn server list, and from the position of the base address' network interface in the list of network interfaces. It is used to ensure a unique candidate priority for each candidate. Also ensure that the fields composing the local preference don't overlap, by checking their maximum values. See RFC-8445, section 5.1.2.2 "Guidelines for Choosing Type and Local Preferences".
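A sketch of the non-overlapping field layout in Python; the field widths (3 bits of turn-server position, 13 bits of interface position) are assumptions for illustration, not libnice's exact split:

```python
MAX_TURN_SERVERS = 8        # 3 bits for the turn-server position
MAX_INTERFACES = 1 << 13    # remaining 13 bits of the 16-bit local preference

def local_preference(turn_index: int, interface_index: int) -> int:
    # Non-overlapping bit fields guarantee a unique local preference for
    # each (turn server, base-address interface) combination, which in turn
    # keeps candidate priorities unique (RFC 8445 §5.1.2.2).
    assert 0 <= turn_index < MAX_TURN_SERVERS
    assert 0 <= interface_index < MAX_INTERFACES
    return (turn_index << 13) | interface_index
```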
-
Fabrice Bellet authored
-
Fabrice Bellet authored
The uniqueness of candidate priorities is achieved by iterating over the local ip addresses for local host candidates, and over their base addresses for reflexive and relay candidates. The helper function that checked this uniqueness at allocation time is not required anymore. The priority of the stun request (prflx_priority) is built from the priority of the local candidate of the pair, according to RFC 5245, section 7.1.2.1. This priority must be identical to that of a virtual "local candidate of type peer-reflexive that would be learned as a consequence of a check from this local candidate." Outgoing stun requests come from local candidates of type host or type relayed. The priority uniqueness of local candidates of type host implies the uniqueness of the computed peer-reflexive priority. And relay local candidates cannot produce a peer-reflexive local candidate by design, so we can safely use their unique local priority in the stun request too.
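The prflx_priority computation can be sketched with the standard formula; the type-preference values are the RFC 8445 recommended ones, and the `Candidate` type is illustrative:

```python
from collections import namedtuple

Candidate = namedtuple("Candidate", "local_pref component_id")

# Recommended type preferences (RFC 8445 §5.1.2.2).
TYPE_PREF = {"host": 126, "prflx": 110, "srflx": 100, "relay": 0}

def candidate_priority(type_pref, local_pref, component_id):
    # Candidate priority formula (RFC 8445 §5.1.2.1 / RFC 5245 §4.1.2.1).
    return (type_pref << 24) + (local_pref << 8) + (256 - component_id)

def prflx_request_priority(local_candidate):
    # PRIORITY attribute of an outgoing check: the priority a peer-reflexive
    # candidate learned from this local candidate would get (RFC 5245
    # §7.1.2.1). A unique local_pref makes this value unique as well.
    return candidate_priority(TYPE_PREF["prflx"],
                              local_candidate.local_pref,
                              local_candidate.component_id)
```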
-
Tim-Philipp Müller authored
Should fix build failures with latest mingw compiler in msys.
-
Tim-Philipp Müller authored
The old one (v8) was removed from the gstreamer registry it seems.
-
- 02 Mar, 2020 10 commits
-
-
Fabrice Bellet authored
The same code to get and validate local and remote candidates from an incoming stun request is shared between regular inbound stun handling, early check replay, and partially the local peer-reflexive discovery function. Selecting the matching local and remote candidate from an incoming stun request sometimes requires more information than just the local socket and the sender's address and port. This happens more frequently when the port range is reduced, and when the conncheck handles both tcp and udp candidates. To help disambiguate such situations, we add supplementary checks when two candidates in the list have the same address and port number:
* the type of the socket must be compatible with the candidate transport. A socket for a tcp candidate may be active or passive, but also of type "tcp-bsd" when the parent active or passive socket is replaced after a bind() or accept(). This yields several cases to check.
* the remote candidate transport and the local candidate transport must be compatible
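A Python sketch of such a disambiguating lookup; the compatibility table and names are hypothetical, chosen only to mirror the socket types mentioned above:

```python
from collections import namedtuple

LocalCandidate = namedtuple("LocalCandidate", "addr port transport")

# Hypothetical compatibility between the receiving socket's type and the
# candidate transport; "tcp-bsd" is the wrapped socket created after a
# bind()/accept() on an active or passive tcp socket.
COMPATIBLE_SOCKETS = {
    "udp": {"udp"},
    "tcp-active": {"tcp-active", "tcp-bsd"},
    "tcp-passive": {"tcp-passive", "tcp-bsd"},
}

def match_local_candidate(candidates, addr, port, sock_type):
    # (addr, port) alone can be ambiguous when the port range is reduced;
    # the socket type disambiguates candidates sharing the same address.
    for cand in candidates:
        if (cand.addr, cand.port) == (addr, port) \
                and sock_type in COMPATIBLE_SOCKETS[cand.transport]:
            return cand
    return None
```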
-
Fabrice Bellet authored
When the (address, port) couple is identical between two remote candidates, we may have to match a remote candidate based on its socket reliability.
-
Fabrice Bellet authored
Another rare case: we may have two local candidates with the same (address, port) couple but a different transport.
-
Fabrice Bellet authored
This is a unix-only test
-
Fabrice Bellet authored
-
Fabrice Bellet authored
The local preference of UDP candidates is (now) determined by the position of the IP address in the list returned by nice_interfaces_get_local_ips().
-
Fabrice Bellet authored
The udp candidate code path failed to call nice_candidate_ip_local_preferences(), so all udp candidates were given the same local preference priority.
-
Fabrice Bellet authored
In a way similar to other stun packets, we add a delay of Timer TA (default is 20 ms) between the processing of each candidate refresh.
-
Fabrice Bellet authored
Keepalive STUN requests should not be sent for every local host candidate or every selected candidate in a single loop, but with a pacing of at least Timer TA (default is 20 ms) between each request. This remains compatible with a pause of Timer TR (default is 20 seconds) between each keepalive batch.
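The pacing can be sketched as a small scheduler in Python; the function and constant names are illustrative, with the default Timer TA and TR values from the message above:

```python
TIMER_TA_MS = 20      # default pacing between individual STUN requests
TIMER_TR_MS = 20000   # default pause between two keepalive batches

def schedule_keepalive_batch(candidates, batch_start_ms):
    # Instead of sending one keepalive per candidate in a single burst,
    # space the requests Timer TA apart inside each Timer TR batch.
    return [(batch_start_ms + i * TIMER_TA_MS, cand)
            for i, cand in enumerate(candidates)]
```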
-
Fabrice Bellet authored
Only a single STUN request should be sent per discovery tick, to enforce an overall pacing of 20 ms by default between two STUN requests.
-
- 28 Feb, 2020 1 commit
-
-
Stefan Becker authored
This improves commit bd4b4781: there is a second place where this fix is needed.
-
- 17 Feb, 2020 2 commits
-
-
Olivier Crête authored
-
Olivier Crête authored
-
- 14 Feb, 2020 2 commits
-
-
Jakub Adam authored
Ensure that a conncheck reply is coming from an address and port of a known remote candidate, and that the type of the incoming socket matches that candidate's transport. Attempts to fix a Coverity issue in which no matching remote_candidate gets found for a connectivity reply in conn_check_handle_inbound_stun() (apparently due to a transport mismatch), yet priv_map_reply_to_conn_check_request() still successfully matches it with a previous request.
-
Jakub Adam authored
-
- 13 Feb, 2020 3 commits
-
-
Fabrice Bellet authored
We display 32-bit candidate priorities in hexadecimal to identify each 8-bit-aligned field more easily: type preference, local preference, and component id. We display 64-bit pair priorities by splitting them into their local and remote parts. See RFC-8445, section 5.1.2.1. "Recommended Formula", and section 6.1.2.3. "Computing Pair Priority and Ordering Pairs".
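A Python sketch of such formatting, assuming the RFC 8445 priority layouts; the function names are illustrative:

```python
def show_candidate_priority(prio: int) -> str:
    # 32-bit candidate priority in hex, 0xTTLLLLCC: type preference (8 bits),
    # local preference (16 bits), 256 - component id (8 bits), each on a
    # byte boundary (RFC 8445 §5.1.2.1).
    return "0x%08x" % prio

def show_pair_priority(prio64: int) -> str:
    # 64-bit pair priority split into its two 32-bit halves: the upper half
    # is MIN(G, D), the lower half holds 2 * MAX(G, D) plus the tie-break
    # bit (RFC 8445 §6.1.2.3).
    return "0x%08x:0x%08x" % (prio64 >> 32, prio64 & 0xFFFFFFFF)
```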
-
Fabrice Bellet authored
This patch makes IPv6 link-local addresses obtain a unique ice local preference when computing their priority, by stripping the "%<interfacename>" added to them by getnameinfo(). Previously, all these addresses obtained the same preference value, since the whole local ips list was checked without finding a match.
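The stripping step amounts to cutting the address at the first "%"; a minimal Python sketch (function name is illustrative):

```python
def strip_ipv6_scope(addr: str) -> str:
    # getnameinfo() appends "%<interfacename>" to IPv6 link-local addresses
    # (e.g. "fe80::1%eth0"); strip it so the address can be found in the
    # local ips list and obtain a unique local preference.
    return addr.split("%", 1)[0]
```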
-
Olivier Crête authored
-