Availability is traditionally assessed under the assumption that the sole type of random event affecting service is equipment failure. In data communication networks, however, other phenomena, such as packet loss, latency and jitter, also affect the quality of service and need to be taken into account [1]. Beyond availability, quality of service thus encompasses the average delay in receiving messages and the probability of losing messages.

Communication-Based Train Control (CBTC) is one of the contemporary solutions for advanced urban mass transit rail systems [2], including some driverless ones. It relies on a train-borne data communication network, a trackside data communication network and radio-based track-to-train communication. Access points (APs) are regularly spaced along the track and allow the train to communicate with the trackside communication network. The data pertaining to automatic train protection and operation are carried by those networks. Those data are safety-critical, as they include the “movement authority”, i.e. how far a train is allowed to proceed under current conditions (trains ahead of it, state of signals, etc.). Other data, such as passenger information, public address and passenger entertainment, are usually carried by a separate, non-safety-related network. However, for cost-reduction reasons, there is a trend to merge those two train-borne networks (the safety-related signaling network and the passenger information network) into a single one. The issue then arises of nonetheless guaranteeing a sufficient quality of service, i.e. avoiding saturation of the network due to bursts or other phenomena, with the resulting loss of messages (packets) or excessive delays. One problem is that the range of frequencies available to the APs is limited. As a result, interference may occur between trains; this could lead to spurious emergency braking (i.e. unintended, unwanted application of the emergency brake) and would therefore adversely impact availability.

This paper focuses on quantifying probabilistically how often such potential interference situations, called “internal interferences”, are likely to occur. First, an analysis of the causes of those interferences is performed, and several types of scenarios are investigated. Then a probabilistic model of the various scenarios leading to potential interferences is elaborated. The traffic of trains moving toward a cell is modeled as the arrival process of a queue [5], and the time to cross a cell is modeled as the service process of the queue. Queues with finite buffers, i.e. with a maximum allowed number of waiting customers, are considered, to account for the fact that, owing to signaling techniques, there is no more than one train in a cell at any given time. From this model, the average numbers of potential simple interferences (between two trains) and multiple interferences (between more than two trains) are derived. They are expressed as functions of the headway (the time between two successive trains) and the average speed. The dimensioning factor is the ratio of the time to cross a communication cell (itself a function of the spacing between the APs) to the headway. In this way it can be determined how the communication architecture must be adapted to cope with growing traffic while retaining a desired quality-of-service target. The adequacy of a frequency plan (i.e. the allocation of frequencies to the various cells) can also be assessed.
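To make the dimensioning factor concrete, the sketch below evaluates one cell treated as an M/M/1/K queue and reads off the states interpreted here as simple and multiple potential interference situations. The exponential arrival and service assumption, the buffer size, the numerical parameter values and the mapping from queue states to interference types are illustrative assumptions for this sketch, not the paper's actual derivation.

```python
# Minimal sketch, assuming each radio cell behaves as an M/M/1/K queue:
# arrivals are trains approaching the cell (rate = 1/headway) and service
# is the cell-crossing time (cell length / average speed). A "potential
# simple interference" is read here as the state with exactly one train
# waiting while another occupies the cell; "multiple" means two or more
# waiting. All names and values below are illustrative assumptions.

def mm1k_state_probs(headway_s: float, cell_length_m: float,
                     avg_speed_mps: float, buffer_size: int) -> list[float]:
    """Steady-state probabilities p_0..p_K of an M/M/1/K queue,
    p_n = rho**n * (1 - rho) / (1 - rho**(K + 1)) for rho != 1."""
    lam = 1.0 / headway_s                  # train arrival rate (1/s)
    mu = avg_speed_mps / cell_length_m     # cell-crossing (service) rate (1/s)
    rho = lam / mu                         # dimensioning factor: T_cell / headway
    K = buffer_size + 1                    # capacity: 1 train in the cell + buffer
    if abs(rho - 1.0) < 1e-12:
        return [1.0 / (K + 1)] * (K + 1)   # degenerate uniform case, rho == 1
    norm = (1.0 - rho) / (1.0 - rho ** (K + 1))
    return [norm * rho ** n for n in range(K + 1)]

def interference_probabilities(headway_s: float, cell_length_m: float,
                               avg_speed_mps: float, buffer_size: int = 3):
    p = mm1k_state_probs(headway_s, cell_length_m, avg_speed_mps, buffer_size)
    p_simple = p[2]            # one train crossing + exactly one waiting
    p_multiple = sum(p[3:])    # one train crossing + two or more waiting
    return p_simple, p_multiple

# Example: 90 s headway, 400 m between APs, 15 m/s average speed.
if __name__ == "__main__":
    simple, multiple = interference_probabilities(90.0, 400.0, 15.0)
    print(f"P(simple interference state)   = {simple:.4f}")
    print(f"P(multiple interference state) = {multiple:.4f}")
```

Under these assumptions, the state probabilities depend on the headway and the AP spacing only through their ratio, which is why that ratio acts as the dimensioning factor: denser traffic (shorter headway) and wider cells (longer crossing time) both push probability mass toward the interference-prone states.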
This type of model can increase the trust that designers and operators may place in a system that depends heavily on the performance of data communication networks.