- 12 May, 2020 1 commit
-
-
Olivier Crête authored
-
- 08 May, 2020 9 commits
-
-
Olivier Crête authored
Also, update the RFC numbers that are implemented.
-
Fabrice Bellet authored
In OC2007R2 compatibility mode, we observed the behaviour of a Skype turn server returning a code 300 (try-alternate) stun error on its tls connections. This value is apparently returned when the turn server is already overloaded. We noticed that the current code in priv_handle_turn_alternate_server() cannot handle a non-udp turn server, because a tcp one would require creating a new socket. But even when creating such a new socket stack (tcp-bsd socket + pseudossl socket), libnice still systematically fails to establish a new connection to the alternate server on port 443. I'm not sure whether this problem is specific to this Skype server infrastructure (the Skype client fails in a similar way). In any case, this code path works as expected with a non-Microsoft turn server (tested with coturn).
-
Fabrice Bellet authored
A previous commit broke the logic used to start a discovery request for tcp turn servers. The ambiguity came from the distinction between the type of the turn server (turn->type), the compatibility of the transport of the local base candidate (turn_tcp), and the reliability of the underlying tcp socket (reliable_tcp). reliable_tcp indicates whether the turn allocate request should be "framed" in a tcp packet, according to RFC 4571; this is required in OC2007R2 only. This commit also moves the setup of the tcp turn socket into a separate function, because the same setup is required when handling try-alternate (code 300) stun errors on these tcp sockets, where we have to set up a new connection to another tcp turn server.
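For context, the RFC 4571 framing mentioned here is simply a 16-bit length prefix, in network byte order, placed in front of each message sent over the stream transport. A minimal sketch (the helper name and signature are hypothetical, not libnice API):

```c
#include <arpa/inet.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper, not a libnice function: prepend the 16-bit length
 * prefix required by RFC 4571 before sending a message over a tcp socket. */
static size_t
frame_rfc4571 (const uint8_t *msg, uint16_t msg_len,
               uint8_t *out, size_t out_size)
{
  uint16_t len_be;

  if (out_size < (size_t) msg_len + 2)
    return 0;                        /* not enough room for prefix + payload */

  len_be = htons (msg_len);          /* length prefix, network byte order */
  memcpy (out, &len_be, 2);
  memcpy (out + 2, msg, msg_len);
  return (size_t) msg_len + 2;
}
```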
-
Fabrice Bellet authored
Relay candidates obtained from a TLS turn server don't have to be refreshed in OC2007R2 compatibility mode.
-
Fabrice Bellet authored
-
Olivier Crête authored
-
Fabrice Bellet authored
This is friendlier to stun pacing.
-
Fabrice Bellet authored
-
Fabrice Bellet authored
This patch updates the previous commit "agent: stay in aggressive mode after conncheck has started" by allowing the switch from aggressive to regular mode as long as no stun request has been sent. It gives the agent some extra delay to still accept remote tcp candidates after its state has already changed from gathering to connecting.
-
- 07 May, 2020 9 commits
-
-
Fabrice Bellet authored
This patch updates the stun timing constants and provides the rationale for the choice of these new values, in the context of the ice connection check algorithm.

One important value during the discovery state is the combination of the initial timeout and the number of retransmissions, because this state may only complete after the last stun discovery binding request has timed out. With a combination of 500ms and 3 retransmissions, the discovery state is bounded by 2000ms to discover server reflexive and relay candidates. The retransmission delay doubles at each retransmission, except for the last one. Generally, this state will complete sooner, when all discovery requests get a reply before the timeout.

Another mechanism is used during the connection check, where a stun request is sent with an initial timeout defined by:

  RTO = MAX(500ms, Ta * (number of in-progress + waiting pairs)), with Ta = 20ms

The initial timeout is bounded by a minimum value, 500ms, and scales linearly with the number of pairs about to be emitted. The same number of retransmissions as in the discovery state is used during the connection check. The total time to wait for a pair to fail is then RTO + 2*RTO + RTO = 4*RTO with 3 retransmissions.

On a typical laptop setup, with a wired and a wifi interface, an IPv4/IPv6 dual stack, a link-local and a global IPv6 address, a couple of virtual addresses, a server-reflexive address and a turn relay one, we end up with a total of 90 local candidates for 2 streams of 2 components each. The connection check list includes up to 200 pairs once tcp pairs are discarded, with: fewer than 33 in-progress and waiting pairs in 50% of cases (RTO = 660ms), fewer than 55 in-progress and waiting pairs in 90% of cases (RTO = 1100ms), and up to 86 in-progress and waiting pairs (RTO = 1720ms).

A retransmission count of 3 seems quite robust against sporadic packet loss, if we consider for example a typical loss of 1% of the overall packets transmitted. A relatively large initial timeout is also interesting because it reduces the overall network overhead caused by the stun requests and replies, measured at around 3KB/s during a connection check with 4 components. Finally, the total time to wait until all retransmissions have completed and timed out (2000ms with an initial timeout of 500ms and 3 retransmissions) gives a bound on the worst network latency we can accept when no packet is lost on the wire.
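A small sketch of the arithmetic described above; the constants and function names mirror the commit message rather than the actual libnice symbols:

```c
/* Illustrative only: these are not the real libnice constants or functions. */
#define TA_MS              20   /* pacing interval between checks */
#define MIN_RTO_MS        500   /* lower bound of the initial timeout */
#define RETRANSMISSIONS     3

/* RTO = MAX(500ms, Ta * (number of in-progress + waiting pairs)) */
static unsigned int
compute_rto_ms (unsigned int in_progress_and_waiting_pairs)
{
  unsigned int rto = TA_MS * in_progress_and_waiting_pairs;
  return rto > MIN_RTO_MS ? rto : MIN_RTO_MS;
}

/* Total time before a pair fails: RTO + 2*RTO + RTO = 4*RTO
 * with 3 retransmissions, per the commit message. */
static unsigned int
total_failure_timeout_ms (unsigned int rto)
{
  return 4 * rto;
}
```

With the figures quoted above, 55 in-progress and waiting pairs give RTO = 1100ms and a worst-case failure time of 4400ms for a single pair.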
-
Fabrice Bellet authored
The way pairs are unfrozen changed a bit between RFC 5245 and RFC 8445, and makes the code much simpler. Previously pairs were unfrozen "per stream", now they are unfrozen "per foundation". The principle of the priv_conn_check_unfreeze_next function is now to unfreeze one and only one frozen pair per foundation, all components and streams included. The function is now idempotent: calling it while the conncheck list still contains waiting pairs does nothing.
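A rough sketch of the "one frozen pair per foundation" behaviour, assuming a flat list containing the pairs of all streams and components; the structure, enum and helper names are illustrative, not the exact libnice code:

```c
#include <glib.h>

/* Illustrative pair structure: only the fields needed for the sketch. */
typedef struct {
  int   state;        /* frozen, waiting, ... */
  char *foundation;
} Pair;

enum { CHECK_FROZEN, CHECK_WAITING };

static void
unfreeze_next (GSList *all_pairs)
{
  GSList *i;
  GHashTable *seen;

  /* idempotent: nothing to do while some pairs are already waiting */
  for (i = all_pairs; i; i = i->next)
    if (((Pair *) i->data)->state == CHECK_WAITING)
      return;

  seen = g_hash_table_new (g_str_hash, g_str_equal);
  for (i = all_pairs; i; i = i->next) {
    Pair *p = i->data;

    if (p->state != CHECK_FROZEN)
      continue;
    /* unfreeze one and only one frozen pair per foundation */
    if (g_hash_table_contains (seen, p->foundation))
      continue;
    g_hash_table_add (seen, p->foundation);
    p->state = CHECK_WAITING;
  }
  g_hash_table_destroy (seen);
}
```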
-
Fabrice Bellet authored
The new version of the RFC removed the distinction between the reliable and non-reliable maximum values for RTO. We chose to keep the value of 100ms that we used previously, which is lower than the recommended value, but will be overridden most of the time, when a significant number of pairs is handled. We also compute exactly the number of in-progress and waiting pairs across all streams of the agent, instead of relying on the per-stream value multiplied by the number of active streams.
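A sketch of the exact count across all streams, as opposed to multiplying a per-stream count by the number of active streams; the structures and field names here are assumptions, not the real libnice types:

```c
#include <glib.h>

/* Illustrative structures only. */
typedef struct { int state; } Pair;                 /* in-progress, waiting, ... */
typedef struct { GSList *conncheck_list; } Stream;  /* list of Pair */

enum { CHECK_IN_PROGRESS, CHECK_WAITING };

static guint
count_in_progress_and_waiting (GSList *streams)    /* list of Stream */
{
  guint n = 0;
  GSList *i, *j;

  for (i = streams; i; i = i->next) {
    Stream *s = i->data;
    for (j = s->conncheck_list; j; j = j->next) {
      Pair *p = j->data;
      if (p->state == CHECK_IN_PROGRESS || p->state == CHECK_WAITING)
        n++;
    }
  }
  return n;
}
```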
-
Fabrice Bellet authored
An inbound stun request may arrive on a tcp pair whose tcp-active socket has just been created and connected (the local candidate port is zero), but has not yet caused the creation of a discovered peer-reflexive local candidate (with a non-zero port). This inbound request is stored in an early icheck structure to be replayed later. When it is processed after the remote credentials have been received, we have to find which local candidate it belongs to by matching on the address only, without the port.
-
Fabrice Bellet authored
An inbound STUN request on a pair that already has another STUN request in flight should not generate a new triggered check, no matter the type of the underlying socket.
-
Fabrice Bellet authored
An inbound stun request on a newly discovered pair should trigger a conncheck in the reverse direction, instead of promoting the pair directly to the succeeded state. This is particularly required if the agent is in aggressive controlling mode.
-
Fabrice Bellet authored
Since we keep a relation between a succeeded pair and its discovered pair, we can simply test the socket associated with a given pair and, if needed, follow the link to the parent succeeded pair.
-
Fabrice Bellet authored
Some tcp-active discovered peer-reflexive local candidates may only be distinguished by their local socket, when they have the same address and the same port. This may happen when a NAT generates an identical mapping from two different base local candidates.
-
Fabrice Bellet authored
We may have a situation where stun_timer_refresh is called with a significant delay after the current deadline. Currently, this delay is simply absorbed into the computation of the new deadline of the next stun retransmission. We think this may lead to unfair situations, where the next deadline ends up too short, just to compensate for the previous deadline that was too long. For example, if a stun request is scheduled with a delay of 200ms for the 2nd transmission and 400ms for the 3rd transmission, and stun_timer_remainder() is called 300ms after the start of the timer, the second delay will last only 300ms instead of 400ms.
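An illustrative sketch of the two ways the next deadline can be computed; the structure and function names are hypothetical, and the second variant only shows the behaviour the commit message argues for:

```c
#include <stdint.h>

typedef struct {
  uint64_t deadline_ms;   /* absolute time of the next retransmission */
  unsigned delay_ms;      /* delay to apply for the next retransmission */
} timer_sketch_t;

/* Current behaviour: the new deadline is anchored on the previous deadline,
 * so a late refresh silently shortens the next delay. */
static void
refresh_from_deadline (timer_sketch_t *t)
{
  t->deadline_ms += t->delay_ms;
}

/* Behaviour suggested by the commit message: anchor the new deadline on the
 * time the refresh actually happens, so each retransmission gets its full
 * delay even when the refresh is late. */
static void
refresh_from_now (timer_sketch_t *t, uint64_t now_ms)
{
  t->deadline_ms = now_ms + t->delay_ms;
}
```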
-
- 06 May, 2020 15 commits
-
-
Fabrice Bellet authored
The port number must be different for all local host candidates, not just within the same component, but across all components and all streams. An ambiguity between a local host candidate and an identical server reflexive candidate has more unwanted consequences when it concerns two different components, because an inbound stun request may be associated with the wrong component.
-
Olivier Crête authored
-
Olivier Crête authored
Also adds a unit test.

Fixes #67
-
Fabrice Bellet authored
The refresh list may be modified while being iterated
-
Fabrice Bellet authored
-
Fabrice Bellet authored
-
Olivier Crête authored
This makes clang happy.

Fixes #100
-
Fabrice Bellet authored
This other rare situation happens when a role conflict is detected via a stun reply message, on a component that already has a nominated pair with a higher priority. In that case, the retransmit flag should be honored, and the pair in "role conflict" should not be retransmitted.
-
Fabrice Bellet authored
When pruning pending checks (after at least one nominated pair has been obtained), some supplementary cases need to be handled to ensure that the property "all pairs, and only the pairs, having a higher priority than the nominated pair have the stun retransmit flag set" remains true during the whole conncheck:

- a pair "not to be retransmitted" must be removed from the triggered check list (because a triggered check would create a new stun request, which would de facto ignore the retransmit flag)
- an in-progress pair "not to be retransmitted", for which no stun request has been sent (p->stun_transactions == NULL, a transient state), must be removed from the conncheck list, just like a waiting pair
- a failed pair must have its "retransmit" flag updated too, just like any other pair, since a failed pair could match an inbound check and generate a triggered check based on the retransmit flag value, i.e. only if this pair has a chance to become a better nominated pair. See the NICE_CHECK_FAILED case in priv_schedule_triggered_check().
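A condensed sketch of the three rules above, using simplified stand-in structures; this is not the actual priv_prune_pending_checks() code:

```c
#include <glib.h>

/* Simplified stand-ins for the real structures. */
typedef struct {
  guint64   priority;
  int       state;              /* in-progress, waiting, failed, ... */
  gboolean  retransmit;
  GSList   *stun_transactions;  /* in-flight stun requests */
} Pair;

enum { CHECK_IN_PROGRESS, CHECK_WAITING, CHECK_FAILED };

static void
prune_pending_checks (GSList **check_list, GSList **triggered_check_queue,
                      guint64 selected_pair_priority)
{
  GSList *i, *next;

  for (i = *check_list; i; i = next) {
    Pair *p = i->data;
    next = i->next;

    /* the flag is kept only on pairs that could still beat the nominated
     * pair; failed pairs are updated too */
    p->retransmit = (p->priority > selected_pair_priority);

    if (!p->retransmit) {
      /* a triggered check would emit a new stun request and de facto
       * ignore the retransmit flag */
      *triggered_check_queue = g_slist_remove (*triggered_check_queue, p);

      /* an in-progress pair with no stun request in flight is treated
       * like a waiting pair: drop it from the conncheck list */
      if (p->state == CHECK_WAITING ||
          (p->state == CHECK_IN_PROGRESS && p->stun_transactions == NULL))
        *check_list = g_slist_remove (*check_list, p);
    }
  }
}
```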
-
Fabrice Bellet authored
The function conn_check_update_retransmit_flag(), which was introduced to re-enable the retransmit flag on pairs with a higher priority than the nominated one, can be merged into priv_prune_pending_checks(), and its invocation replaced by conn_check_update_check_list_state_for_ready(). The function priv_prune_pending_checks() can also be tweaked to use the component's selected pair priority, instead of getting it from the checklist. This function is called when at least one nominated pair exists, so selected_pair is this nominated pair.
-
Fabrice Bellet authored
It is possible to skip the creation of a new pair whose priority is lower than the priority of the selected pair, i.e. the nominated pair with the highest priority. Such a pair would be discarded by a call to prune_pending_checks(), and if checked, its state would break the assumption that all pairs with a lower priority than the nominated pair are not retransmitted.
-
Fabrice Bellet authored
We use the property that the conncheck list is ordered by pair priority, so we don't have to iterate over it twice.
-
Fabrice Bellet authored
When an existing peer-reflexive remote candidate is updated to a server reflexive one, due to the late reception of the remote candidates, this update has several consequences on the conncheck list:

- pair foundations and priorities must be recomputed
- the highest nominated pair may have changed too
- this is not strictly required, but some pairs that had *a lower* priority than the previously peer-reflexive nominated pair had their retransmit flag set to false for that reason. These pairs may now have *a higher* priority than the newly promoted nominated pair, and it is fair in that case to turn their retransmit flag back to true.
-
Olivier Crête authored
-
Olivier Crête authored
This makes for clearer reports in the CI
-
- 05 May, 2020 5 commits
-
-
Tim-Philipp Müller authored
candidate.c:351:12: warning: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 4/5 has type ‘guint64’ {aka ‘long long unsigned int’} [-Wformat=]
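The usual fix for this class of -Wformat warning (a sketch, not the exact patch) is to build the format string with GLib's platform-dependent 64-bit length modifier instead of hard-coding %lx:

```c
#include <glib.h>

static void
print_priority (guint64 prio)
{
  /* not portable: guint64 may be 'long long unsigned int'
   *   g_debug ("priority : %lx", prio);
   * portable form using the GLib modifier macro: */
  g_debug ("priority : %" G_GINT64_MODIFIER "x", prio);
}
```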
-
Olivier Crête authored
-
Olivier Crête authored
This matches the rest of the tests.
-
Olivier Crête authored
This race condition is hit all the time when running the test under valgrind.
-
Olivier Crête authored
-
- 04 May, 2020 1 commit
-
-
Fabrice Bellet authored
-