Taking an Inside Look at TDMoIP: A Tutorial
Service providers are presently seeking to increase their profits through low-cost deployment of voice and leased-line services over more efficient Ethernet and IP infrastructures. At the same time, enterprises are looking for ways to take advantage of the promise of convergence by integrating their voice and data networks while preserving their investment in traditional PBX and TDM equipment. The voice-over-IP (VoIP) approach is maturing, but its deployment requires a certain level of investment in new network infrastructure and/or customer premises equipment (CPE).
TDM-over-IP (TDMoIP) is a technology that enables voice and leased-line services such as video and data to be offered inexpensively over service provider IP networks while retaining the reliability and quality of the public switched telephone network (PSTN). In this article, we'll discuss the technical challenges inherent in transporting TDM circuits over IP networks, how TDMoIP technology meets those challenges, and the standards shaping TDMoIP and related technologies.
Challenges of Transporting TDM
Conventional TDM networks are highly deterministic. A source device transmits one or more octets to a destination device via a dedicated-bandwidth channel every 125 μs. The circuit delay through a TDM network is predictably low and constant throughout the life of a connection. Timing is delivered along with the data, and the permitted variability (jitter and wander) of TDM clocks is tightly defined. In addition, the infrastructure supports a rich set of user features via a vast set of signaling protocols.
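This deterministic timing makes TDM bandwidth arithmetic simple. As a quick illustration (a Python sketch for the arithmetic only, not part of any TDMoIP implementation), an N*64K circuit carries N eight-bit samples every 125 μs:

```python
# TDM rate arithmetic: each 64 kbit/s timeslot (DS0) carries one
# 8-bit sample every 125 us, i.e. 8000 frames per second.
FRAMES_PER_SEC = 8000   # one frame every 125 us
BITS_PER_SLOT = 8

def payload_rate_bps(n_slots: int) -> int:
    """Payload bit rate of an N*64K circuit."""
    return n_slots * BITS_PER_SLOT * FRAMES_PER_SEC

print(payload_rate_bps(1))    # 64000   (a single DS0)
print(payload_rate_bps(24))   # 1536000 (T1 payload, excluding the framing bit)
print(payload_rate_bps(30))   # 1920000 (the 30 voice slots of an E1)
```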
Packet-switched networks (PSNs), such as IP/multi-protocol label switching (MPLS) systems, are more efficient than TDM networks due to bandwidth sharing. However, this sharing leads to PSNs being inherently non-deterministic.
Packets entering and transiting the network must compete for bandwidth and switch/router ports, leading to packet delay variation (PDV) and lost packets. A source device may inject packets into the network at regular intervals, but the network offers no guarantee that these packets will arrive at the destination edge device spaced at the same intervals, in the same order, or even that they will arrive at all.
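Edge devices typically compensate for this with a jitter (play-out) buffer that holds a few packets and releases them in sequence order. The sketch below is illustrative Python with a hypothetical `play_out` helper; a real implementation releases packets on a clock rather than in a loop, but the buffering principle is the same:

```python
import heapq

def play_out(arrivals, depth=3):
    """Hold up to `depth` packets and release them in sequence order,
    absorbing moderate reordering and delay variation."""
    heap, played = [], []
    for seq, payload in arrivals:
        heapq.heappush(heap, (seq, payload))
        if len(heap) > depth:           # buffer full: release the oldest
            played.append(heapq.heappop(heap))
    while heap:                         # drain the buffer at the end
        played.append(heapq.heappop(heap))
    return played

# Packets 0..5 arrive mildly reordered; play-out restores the order.
arrivals = [(0, "a"), (2, "c"), (1, "b"), (4, "e"), (3, "d"), (5, "f")]
print([seq for seq, _ in play_out(arrivals)])  # [0, 1, 2, 3, 4, 5]
```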
In addition, IP networks were designed for transport of arbitrary data. Thus, TDM-related signaling is not supported.
There are two main ways that designers are trying to integrate TDM services into IP-based networks. On one hand, designers can completely replace the TDM network and end-user equipment with a new infrastructure that provides innovative mechanisms for voice transport and signaling. The other approach leaves the end-user equipment and protocols intact, tunneling TDM data through the packet network.
In the end, this second approach could provide an easier and more cost-effective migration path for carriers and equipment vendors. With that in mind, let's dive into how TDMoIP works.
Diving into TDMoIP
TDMoIP emulates T1, E1, T3, E3, and N*64K links by adapting and encapsulating the TDM traffic at the network ingress. Adaptation denotes mechanisms that modify the payload to enable its proper restoration at the PSN egress. By using proper adaptation, the TDM signaling and timing can be recovered, and a certain amount of packet loss can be accommodated.
Encapsulation signifies placing the adapted payload into packets of the format required by the underlying PSN technology. TDMoIP encapsulations are presently defined for user datagram protocol (UDP)/IP, MPLS, and Layer 2 tunneling protocol (L2TP)/IP networks, and even pure Ethernet can be utilized with minimal adjustments. Let's take a closer look at adaptation and encapsulation.
How Adaptation Works
TDMoIP can utilize several different adaptation techniques, depending on the TDM traffic characteristics. Whenever possible, TDMoIP draws on proven adaptation mechanisms originally developed for ATM. A side benefit of this choice of payload types is simplified interworking with circuit emulation services carried over ATM networks.
For statically allocated, constant bit-rate (CBR) TDM links, TDMoIP employs ATM adaptation layer 1 (AAL1). This mechanism, defined in ITU-T standard I.363.1 and ATM Forum specification atm-vtoa-0078, was developed for carrying CBR services over ATM.
AAL1 operates by segmenting the continuous stream of TDM data into small 48-byte cells and inserting sequencing, timing, error recovery, and synchronization information into them. For example, if the original TDM stream consisted of a DS1 with channel associated signaling (CAS), the AAL1 adaptation inserts a pointer to the beginning of the next superframe. Thus, even if cells are lost, the pointer will enable recovery from the next superframe.
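The segmentation itself can be sketched as follows (illustrative Python; the real AAL1 header also carries CRC and parity protection and a convergence-sublayer indication bit, which are omitted here):

```python
def aal1_segment(tdm_stream: bytes):
    """Sketch of AAL1-style segmentation: 47 payload bytes per
    48-byte cell, preceded by a 1-byte header of which we model only
    the 3-bit sequence count used to detect lost or misordered cells."""
    cells = []
    for i, off in enumerate(range(0, len(tdm_stream), 47)):
        header = bytes([(i % 8) << 4])       # 3-bit sequence count
        cells.append(header + tdm_stream[off:off + 47])
    return cells

cells = aal1_segment(bytes(94))   # two full cells' worth of TDM data
print(len(cells), len(cells[0]))  # 2 48
```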
TDMoIP allows concatenation of any number of AAL1 cells into a packet (note that these are AAL1 cells and not ATM cells, i.e. they do not include the five-byte “cell tax”). By allowing multiple cells per packet, TDMoIP facilitates flexible tradeoffs of buffering delay (which decreases with fewer cells per packet) for bandwidth efficiency (which increases with more cells per packet, due to the per packet overhead).
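The tradeoff is easy to quantify. The sketch below works through the arithmetic for an unstructured E1 (2.048 Mbit/s) carried over UDP/IPv4/Ethernet; the header sizes are illustrative assumptions, not mandated by TDMoIP:

```python
E1_BPS = 2_048_000
CELL = 48                      # AAL1 cell size (1 header + 47 payload bytes)
PAYLOAD = 47
OVERHEAD = 14 + 20 + 8 + 4     # Eth + IPv4 + UDP + control word (assumed)

def tradeoff(n_cells: int):
    """Return (buffering delay in ms, bandwidth efficiency) for
    n AAL1 cells per packet on an unstructured E1."""
    tdm_bytes = n_cells * PAYLOAD
    delay_ms = tdm_bytes * 8 / E1_BPS * 1000
    efficiency = tdm_bytes / (n_cells * CELL + OVERHEAD)
    return delay_ms, efficiency

for n in (1, 5, 10):
    d, e = tradeoff(n)
    print(f"{n:2d} cells/packet: delay {d:.2f} ms, efficiency {e:.0%}")
```

Under these assumptions, one cell per packet yields roughly 50% efficiency but under 0.2 ms of buffering delay, while ten cells per packet approaches 90% efficiency at the cost of nearly 2 ms of delay.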
For dynamically allocated TDM links, whether the information rate varies due to activation of time slots or due to voice activity detection, TDMoIP employs ATM adaptation layer 2 (AAL2). This mechanism, defined in ITU-T standard I.366.2, was developed for carrying variable bit rate (VBR) services over ATM.
AAL2 operates by buffering each TDM time slot into short minicells, inserting the time slot identifier and length indication, sequencing, and then sending this minicell only if it carries valid information. TDMoIP concatenates the minicells from all active time slots into a single packet.
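A simplified view of this packing (illustrative Python; the real AAL2 CPS minicell header also carries UUI and HEC fields, and minicells may straddle packet boundaries):

```python
def aal2_pack(active_slots: dict[int, bytes]) -> bytes:
    """Sketch of AAL2-style packing: each active timeslot's buffered
    samples become a minicell with a small header (modeled here as just
    a channel ID and a length byte), and all minicells are concatenated
    into one packet payload. Silent slots consume no bandwidth."""
    packet = bytearray()
    for cid, samples in active_slots.items():
        if not samples:            # inactive/silent slot: send nothing
            continue
        packet += bytes([cid, len(samples)]) + samples
    return bytes(packet)

# Slots 1 and 3 are active; slot 2 is silent and is simply omitted.
payload = aal2_pack({1: b"\x11\x22", 2: b"", 3: b"\x33"})
print(payload.hex())  # 01021122030133
```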
For time slots carrying high-level data link control (HDLC) data, such as data for common channel signaling (CCS), a special adaptation is provided that spots areas of non-idle data, which can then be directly encapsulated.
Encapsulating TDM Data
In TDMoIP packets, payload information is immediately preceded by a control word. This 32-bit control word, shown in Figure 1, contains the packet sequence number (needed to detect packet reordering and packet loss), the payload type, the payload length, and alarm indications.
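As a rough sketch, these fields can be unpacked with a few shifts and masks. The bit positions below follow the generic pseudowire control-word layout (flag bits in the top byte, a 6-bit length, a 16-bit sequence number) and should be treated as illustrative rather than the normative TDMoIP definition:

```python
def parse_control_word(word: int) -> dict:
    """Unpack an assumed 32-bit control word:
    0000 | L | R | M M | FRG(2) | LEN(6) | SEQ(16)."""
    return {
        "l_bit":  (word >> 27) & 0x1,   # local TDM failure (alarm indication)
        "r_bit":  (word >> 26) & 0x1,   # remote receive failure
        "m_bits": (word >> 24) & 0x3,   # modifier / payload-type bits
        "length": (word >> 16) & 0x3f,  # payload length (for short packets)
        "seq":    word & 0xffff,        # packet sequence number
    }

# A word with the L bit set, length 10, sequence number 5:
fields = parse_control_word((1 << 27) | (10 << 16) | 5)
print(fields["l_bit"], fields["length"], fields["seq"])  # 1 10 5
```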
E1 Protection Switching
An E1 protection switch allows the user to connect a single E1 line from the telephone company to both an "active" and a "standby" E1 terminal, such as a data server or router, at the customer premises.
In the event of a failure of the equipment connected to the "A / active" ports, the E1 protection (fail-over) switch automatically reconnects the E1 line(s) from the telephone company to the equipment on the "B / standby" ports. This minimizes the downtime that would otherwise result from equipment failure and improves the availability of the network.
Features and Highlights
Allows the user to connect an E1 line from the telephone company and to switch it automatically between an "active" and a "standby" E1 terminal at the customer premises. The user-programmable switching criteria are Loss of E1 Signal, AIS, and Loss of E1 Frame.
Can accommodate up to four E1 lines - may be used to switch between "active" and "standby" E1 terminals connected to up to four E1 lines.
Independent switching for each of the four E1 lines.
User programmable switching criterion - independent for each E1 line.
Built-in real-time clock / real-time logging maintains a history of all events.
Remotely accessible over TCP/IP networks. Allows the user to access the unit, carry out maintenance, and/or switch the E1 line(s) between the "active" and "standby" E1 terminals remotely, if required.
Allows users to install and maintain active/standby/duplicate customer-premises data networks and data servers without bearing the recurring expense of leasing additional E1 lines from the telephone company.
Automatically switches the E1 line from the Telephone Company between the "active" and "standby" E1 equipment at the customer premises, according to the customer-programmed criterion.
Improves equipment and data security.
Allows the user to co-locate the backup/standby equipment in a different room or building, preventing data loss arising from natural calamities such as fire or flooding.
Increases the reliability of the customer's data/IT networks without the recurring cost of leasing additional E1 lines from the telephone company. The equipment may be used to create secondary/backup systems at the customer premises to provide virtually uninterrupted service.