High Frequency CMOS

Real-time calibration of gain and timing errors in two-channel time-interleaved A/D converters for software defined radio applications

By Djamel Haddadi, Integrated Device Technology, Inc.

The explosion of mobile data is driving new receiver architectures in communication infrastructure in order to provide higher capacity and more flexibility. These next-generation software defined radio systems are based on power-efficient RF A/D converters (RF-ADCs) capable of sampling at the antenna while delivering high dynamic range. Such ADCs are designed in very advanced CMOS technologies using a time-interleaved (TIADC) architecture to achieve very high sample rates [1]. This architecture suffers from time-varying mismatch errors [2] that necessitate real-time calibration. This article describes a novel background calibration method for gain and timing mismatch errors using low-complexity digital signal processing algorithms.

Mismatch errors in two-channel TIADC

An efficient way to double the speed of an ADC is to operate two ADCs in parallel with out-of-phase sampling clocks. The unavoidable small mismatches between the transfer functions of the sub-ADCs result in spurious tones that significantly degrade the achievable dynamic range. There are four types of error in this kind of ADC:

• DC offset error,
• Static gain error,
• Timing error,
• Bandwidth error.

The DC offset error is very simple to handle in practice through digital calibration. The bandwidth error is the most difficult to manage, and it is usually mitigated through careful design and layout. In this article we focus on gain and timing error calibration, as these are the major contributors to dynamic range loss.

Proposed calibration method

In practice the Nyquist bandwidth of an ADC is never fully used; a fraction of it is usually dedicated to the roll-off of the anti-aliasing filter.

[Figure 1: Frequency plan showing the location of the calibration signal.]
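The dynamic-range impact of gain and timing mismatch can be illustrated with a short simulation. This is a minimal sketch, not the article's method: all numerical values (sample rate, input frequency, 1% gain error, 1 ps skew) are assumptions chosen for illustration. It models a two-channel TIADC in which channel 1 has a small gain error and clock skew, and measures the resulting image spur at Fs/2 − Fin.

```python
import numpy as np

# Assumed, illustrative parameters (not from the article):
fs = 1.0e9        # interleaved sample rate
fin = 101.0e6     # input tone frequency
n = 4096
g = 0.01          # 1% static gain mismatch on channel 1
dt = 1.0e-12      # 1 ps timing skew on channel 1

idx = np.arange(n)
t = idx / fs
x_ideal = np.sin(2 * np.pi * fin * t)             # channel 0: ideal sampling
x_skewed = np.sin(2 * np.pi * fin * (t + dt))     # channel 1: skewed sample instants
# Interleave: even samples from channel 0, odd samples from mismatched channel 1.
y = np.where(idx % 2 == 0, x_ideal, (1 + g) * x_skewed)

# Windowed spectrum; compare the image spur at Fs/2 - Fin to the signal at Fin.
spec = np.abs(np.fft.rfft(y * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
sig_bin = np.argmin(np.abs(freqs - fin))
spur_bin = np.argmin(np.abs(freqs - (fs / 2 - fin)))
spur_dbc = 20 * np.log10(spec[spur_bin] / spec[sig_bin])
print(f"image spur at Fs/2 - Fin: {spur_dbc:.1f} dBc")
```

With these values the gain term dominates (a two-channel gain error g produces an image of roughly g/2 relative to the signal, about −46 dBc here), which is why even sub-percent mismatches matter at 14-bit dynamic range.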
This free band is exploited to inject a constrained calibration signal. A sine-wave is selected as the calibration signal because it is easy to generate with high spectral purity. Two main constraints are imposed on it:

1: The amplitude is kept small enough to avoid any impact on the dynamic range while still providing enough estimation accuracy. Experiments show that a level in the range -40 dBFS to -35 dBFS provides the best tradeoff for a 14-bit ADC.

2: The frequency is limited to the following discrete values in order to reduce the complexity of the digital signal processing algorithms:

(Equation 1)

where Fs is the TIADC sampling frequency, P and K are unsigned integers, and S = ±1 depending on the location of the calibration signal relative to the edge of the Nyquist zone (see Figure 1).

This signal can easily be generated on-chip with a fractional-N PLL using the clock of the ADC as a reference. By choosing K high enough, the harmonics of the calibration signal alias outside the useful band, which relaxes their filtering requirements. The swing adjustment can be achieved with a programmable attenuator placed at the output of the PLL.

If x0 and x1 denote the outputs of the two sub-ADCs with the calibration signal as input, it can be shown using Equation 1 that these two signals are linked by the following expression (noise has been ignored):

(Equation 2)

The coefficients h0 and h1 of this linear filtering formula are related explicitly to the gain error g and timing error Δt by:

(Equation 3)

This nonlinear set of equations can be linearized and inverted using a first-order approximation, given that the mismatch errors are kept small by design. The estimation algorithm consists of three steps:

1: The calibration signal is extracted and cancelled from the output of the sub-ADCs using an LMS algorithm, yielding the discrete-time signals x0 and x1. This algorithm requires a digital cosine/sine

10 Microwave Engineering Europe March-April 2014 www.microwave-eetimes.com
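The LMS extraction and cancellation in step 1 can be sketched as a two-tap adaptive canceller driven by digital cosine/sine references at the known calibration frequency. This is a single-channel illustration under assumed values (sub-ADC rate, tone frequencies, amplitudes near −40 dBFS, and step size mu are all hypothetical, not taken from the article): the weights converge to the tone's I/Q amplitudes, and the error output is the tone-cancelled signal.

```python
import numpy as np

# Assumed, illustrative parameters (not from the article):
fs_sub = 500.0e6   # sub-ADC sample rate
f_cal = 230.0e6    # calibration tone placed in the anti-alias roll-off band
n = 20000
t = np.arange(n) / fs_sub

wanted = 0.5 * np.sin(2 * np.pi * 70.0e6 * t)   # stand-in for the useful signal
a_i, a_q = 0.012, 0.007                         # "unknown" tone I/Q amplitudes
x = wanted + a_i * np.cos(2 * np.pi * f_cal * t) + a_q * np.sin(2 * np.pi * f_cal * t)

ref_c = np.cos(2 * np.pi * f_cal * t)           # digital cosine reference
ref_s = np.sin(2 * np.pi * f_cal * t)           # digital sine reference
wi = wq = 0.0                                   # adaptive I/Q weights
mu = 1e-3                                       # LMS step size (assumed)
e = np.empty(n)
for k in range(n):
    y = wi * ref_c[k] + wq * ref_s[k]           # current estimate of the cal tone
    e[k] = x[k] - y                             # tone-cancelled output sample
    wi += 2 * mu * e[k] * ref_c[k]              # LMS weight updates
    wq += 2 * mu * e[k] * ref_s[k]

print(f"estimated I/Q amplitudes: {wi:.4f}, {wq:.4f}")
```

Because the references are pure sinusoids at the calibration frequency, this canceller behaves as a narrow adaptive notch: it removes the tone while leaving the useful band essentially untouched, and the converged weights directly provide the tone parameters needed for the mismatch estimation.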

