
Dive Deeper Into The World of Software Defined Radio

Software-defined radio (SDR) is a radio communication system where components that have been traditionally implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which were once only theoretically possible.

A basic SDR system may consist of a personal computer equipped with a sound card, or other analog-to-digital converter, preceded by some form of RF front end. Significant amounts of signal processing are handed over to the general-purpose processor, rather than being done in special-purpose hardware (electronic circuits). Such a design produces a radio which can receive and transmit widely different radio protocols (sometimes referred to as waveforms) based solely on the software used.
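
To make the idea concrete, the sketch below (plain Python/NumPy, not tied to any particular SDR product) demodulates narrowband FM entirely in software from a stream of complex baseband samples; the sample rate, deviation, and test tone are illustrative values.

```python
import numpy as np

def fm_demodulate(iq, sample_rate):
    """Demodulate narrowband FM from complex baseband samples (a sketch).

    iq          : 1-D NumPy array of complex I/Q samples, already tuned
                  to the signal of interest (assumed, not shown here).
    sample_rate : sample rate of `iq` in Hz.
    Returns the instantaneous frequency in Hz, which for FM is
    proportional to the transmitted audio.
    """
    # Phase difference between consecutive samples gives instantaneous frequency.
    phase_delta = np.angle(iq[1:] * np.conj(iq[:-1]))
    return phase_delta * sample_rate / (2 * np.pi)

# Illustrative use with a synthetic 1 kHz tone, FM-modulated in software.
fs = 48_000                                     # assumed sample rate
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 1_000 * t)           # 1 kHz "voice" tone
deviation = 5_000                               # 5 kHz peak deviation
phase = 2 * np.pi * deviation * np.cumsum(audio) / fs
iq = np.exp(1j * phase)                         # complex baseband FM signal
recovered = fm_demodulate(iq, fs)               # approximately deviation * audio
```

Because the demodulator is just a function over samples, swapping it for an AM or SSB routine changes the radio's "waveform" without touching any hardware, which is the core of the SDR argument above.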

Software radios have significant utility for the military and for cell phone services, both of which must serve a wide variety of changing radio protocols in real time. In the long term, software-defined radios are expected by proponents like the Wireless Innovation Forum to become the dominant technology in radio communications. SDRs, along with software-defined antennas, are enablers of cognitive radio.

A software-defined radio can be flexible enough to avoid the “limited spectrum” assumptions of designers of previous kinds of radios, in one or more ways including:

  • Spread spectrum and ultra-wideband techniques allow several transmitters to transmit in the same place on the same frequency with very little interference, typically combined with one or more error detection and correction techniques to fix all the errors caused by that interference.
  • Software defined antennas adaptively “lock onto” a directional signal, so that receivers can better reject interference from other directions, allowing them to detect fainter transmissions.
  • Cognitive radio techniques: each radio measures the spectrum in use and communicates that information to other cooperating radios, so that transmitters can avoid mutual interference by selecting unused frequencies. Alternatively, each radio connects to a geolocation database to obtain information about the spectrum occupancy in its location and flexibly adjusts its operating frequency and/or transmit power so as not to interfere with other wireless services. (A minimal spectrum-sensing sketch follows this list.)
  • Dynamic transmitter power adjustment, based on information communicated from the receivers, lowers transmit power to the minimum necessary, reducing the near–far problem, reducing interference to others, and extending battery life in portable equipment.
  • Wireless mesh networks, where every added radio increases total capacity and reduces the power required at any one node. Each node transmits using only enough power for the message to hop to the nearest node in that direction, reducing the near–far problem and reducing interference to others.
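
As a concrete illustration of the spectrum-sensing idea behind cognitive radio, the sketch below (a hypothetical Python/NumPy example with an invented channel plan) picks the least-occupied channel from a wideband capture using simple energy detection.

```python
import numpy as np

def least_occupied_channel(iq, sample_rate, channel_edges_hz):
    """Pick the channel with the least measured energy (energy detection).

    iq               : complex baseband capture spanning all channels.
    sample_rate      : sample rate of `iq` in Hz.
    channel_edges_hz : list of (low, high) frequency offsets, in Hz,
                       relative to the capture's centre frequency.
    """
    spectrum = np.fft.fftshift(np.fft.fft(iq))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), d=1.0 / sample_rate))
    power = np.abs(spectrum) ** 2

    energies = []
    for low, high in channel_edges_hz:
        mask = (freqs >= low) & (freqs < high)
        energies.append(power[mask].sum())
    return int(np.argmin(energies))   # index of the quietest channel

# Hypothetical example: three 200 kHz channels inside a 1 MHz capture.
fs = 1_000_000
iq = (np.random.randn(fs) + 1j * np.random.randn(fs)) / np.sqrt(2)  # noise-only capture
channels = [(-300e3, -100e3), (-100e3, 100e3), (100e3, 300e3)]
best = least_occupied_channel(iq, fs, channels)
```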

Operating principles

Software defined radio concept

Superheterodyne receivers use a VFO (variable-frequency oscillator), mixer, and filter to tune the desired signal to a common IF (intermediate frequency) or baseband. Typically in SDR, this signal is then sampled by the analog-to-digital converter. However, in some applications it is not necessary to tune the signal to an intermediate frequency and the radio frequency signal is directly sampled by the analog-to-digital converter (after amplification).
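
The digital tuning step that follows the ADC can be sketched as follows; this is a generic Python/NumPy illustration, where the tuning frequency, the crude moving-average filter, and the decimation factor are placeholders rather than any specific radio's design.

```python
import numpy as np

def downconvert(samples, sample_rate, tune_hz, decimate):
    """Digitally tune a sampled signal to baseband (a sketch).

    samples     : real or complex samples straight from the ADC.
    sample_rate : ADC sample rate in Hz.
    tune_hz     : frequency of the signal of interest within the capture.
    decimate    : integer decimation factor applied after filtering.
    """
    n = np.arange(len(samples))
    # Complex local oscillator: the software equivalent of the VFO + mixer.
    lo = np.exp(-2j * np.pi * tune_hz * n / sample_rate)
    baseband = samples * lo

    # Crude low-pass filter (moving average over `decimate` samples),
    # then keep every `decimate`-th sample; a stand-in for a proper FIR filter.
    kernel = np.ones(decimate) / decimate
    filtered = np.convolve(baseband, kernel, mode="same")
    return filtered[::decimate]
```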

Real analog-to-digital converters lack the dynamic range to pick up sub-microvolt, nanowatt-power radio signals produced by an antenna. Therefore, a low-noise amplifier must precede the conversion step and this device introduces its own problems. For example, if spurious signals are present (which is typical), these compete with the desired signals within the amplifier’s dynamic range. They may introduce distortion in the desired signals, or may block them completely. The standard solution is to put band-pass filters between the antenna and the amplifier, but these reduce the radio’s flexibility. Real software radios often have two or three analog channel filters with different bandwidths that are switched in and out.
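
The dynamic-range limitation can be quantified with the standard rule of thumb for an ideal N-bit converter, SNR ≈ 6.02·N + 1.76 dB; the short snippet below simply evaluates that formula for a few common resolutions.

```python
def ideal_adc_snr_db(bits):
    """Rule-of-thumb SNR of an ideal N-bit ADC for a full-scale sine wave."""
    return 6.02 * bits + 1.76

# A 16-bit converter gives roughly 98 dB; a 12-bit converter roughly 74 dB.
# Sub-microvolt signals sitting next to strong broadcast carriers can span
# more than that, which is why analog filtering and gain control in front
# of the ADC still matter.
for bits in (8, 12, 14, 16):
    print(bits, "bits ->", round(ideal_adc_snr_db(bits), 1), "dB")
```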

History

The term “digital receiver” was coined in 1970 by a researcher at a United States Department of Defense laboratory. A laboratory called the Gold Room at TRW in California created a software baseband analysis tool called Midas, which had its operation defined in software.

The term “software radio” was coined in 1984 by a team at the Garland, Texas, Division of E-Systems Inc. (now Raytheon) to refer to a digital baseband receiver and published in their E-Team company newsletter. A ‘Software Radio Proof-of-Concept’ laboratory was developed by the E-Systems team that popularized Software Radio within various government agencies. This 1984 Software Radio was a digital baseband receiver that provided programmable interference cancellation and demodulation for broadband signals, typically with thousands of adaptive filter taps, using multiple array processors accessing shared memory.

While working under a US Department of Defense contract at RCA in 1982, Ulrich L. Rohde’s department developed the first SDR, which used the COSMAC (Complementary Symmetry Monolithic Array Computer) chip. Rohde was the first to present on this topic with his highly classified February 1984 talk, “Digital HF Radio: A Sampling of Techniques” at the Third International Conference on HF Communication Systems and Techniques in London.

In 1991, Joe Mitola independently reinvented the term software radio for a plan to build a GSM base station that would combine Ferdensi’s digital receiver with E-Systems Melpar’s digitally controlled communications jammers for a true software-based transceiver. E-Systems Melpar sold the software radio idea to the US Air Force. Melpar built a prototype commanders’ tactical terminal in 1990–1991 that employed Texas Instruments TMS320C30 processors and Harris digital receiver chip sets with digitally synthesized transmission. The Melpar prototype didn’t last long because when E-Systems ECI Division manufactured the first limited production units, they decided to “throw out those useless C30 boards,” replacing them with conventional RF filtering on transmit and receive, reverting to a digital baseband radio instead of the SpeakEasy-like IF ADC/DACs of Mitola’s prototype. The Air Force would not let Mitola publish the technical details of that prototype, nor would they let Diane Wasserman publish related software life cycle lessons learned, because they regarded it as a “USAF competitive advantage.” So instead, with USAF permission, in 1991 Mitola described the architecture principles without implementation details in a paper, “Software Radio: Survey, Critical Analysis and Future Directions”, which became the first IEEE publication to employ the term in 1992. When Mitola presented the paper at the conference, Bob Prill of GEC Marconi began his presentation following Mitola with “Joe is absolutely right about the theory of a software radio and we are building one.” Prill gave a GEC Marconi paper on PAVE PILLAR, a SpeakEasy precursor. SpeakEasy, the military software radio, was formulated by Wayne Bonser, then of Rome Air Development Center (RADC), now Rome Labs; by Alan Margulies of MITRE Rome, NY; by then-Lt Beth Kaspar, the original DARPA SpeakEasy project manager; and by others at Rome including Don Upmal. Although Mitola’s IEEE publications resulted in the largest global footprint for software radio, Mitola privately credits that DoD lab of the 1970s, and its leaders Carl, Dave, and John, with inventing the digital receiver technology on which he based software radio once it became possible to transmit via software.

A few months after the 1992 National Telesystems Conference, in an E-Systems corporate program review, a vice-president of E-Systems Garland Division objected to Melpar’s (Mitola’s) use of the term “software radio” without credit to Garland. Alan Jackson, Melpar VP of marketing at that time, asked the Garland VP if their laboratory or devices included transmitters. The Garland VP said “No, of course not — ours is a software radio receiver”. Al replied, “Then it’s a digital receiver, but without a transmitter, it’s not a software radio.” Corporate leadership agreed with Al, so the publication stood. Many amateur radio operators and HF radio engineers had realized the value of digitizing HF at RF and of processing it with Texas Instruments C30 digital signal processors (DSPs) and their precursors during the 1980s and early 1990s. Radio engineers at Roke Manor in the UK and at an organization in Germany had recognized the benefits of ADC at the RF in parallel, so success has many fathers. Mitola’s publication of software radio in the IEEE opened the concept to the broad community of radio engineers. His May 1995 special issue of the IEEE Communications Magazine with the cover “Software Radio” was regarded as a watershed event with thousands of academic citations. Mitola was introduced by Joao da Silva in 1997 at the First International Conference on Software Radio as “godfather” of software radio, in no small part for his willingness to share such a valuable technology “in the public interest.”

Perhaps the first software-based radio transceiver was designed and implemented by Peter Hoeher and Helmuth Lang at the German Aerospace Research Establishment (DLR, formerly DFVLR) in Oberpfaffenhofen, Germany, in 1988. Both transmitter and receiver of an adaptive digital satellite modem were implemented according to the principles of a software radio, and a flexible hardware periphery was proposed.

The term “software defined radio” was coined in 1995 by Stephen Blust, who published a request for information from Bell South Wireless at the first meeting of the Modular Multifunction Information Transfer Systems (MMITS) forum in 1996, organized by the USAF and DARPA around the commercialization of their SpeakEasy II program. Mitola objected to Blust’s term, but finally accepted it as a pragmatic pathway towards the ideal software radio. Although the concept was first implemented with an IF ADC in the early 1990s, software-defined radios have their origins in the U.S. and European defense sectors of the late 1970s (for example, Walter Tuttlebee described a VLF radio that used an ADC and an 8085 microprocessor), about a year after the First International Conference in Brussels. One of the first public software radio initiatives was the U.S. DARPA-Air Force military project named SpeakEasy. The primary goal of the SpeakEasy project was to use programmable processing to emulate more than 10 existing military radios, operating in frequency bands between 2 and 2000 MHz. Another SpeakEasy design goal was to be able to easily incorporate new coding and modulation standards in the future, so that military communications could keep pace with advances in coding and modulation techniques.

In 1997, Blaupunkt introduced the term “DigiCeiver” for their new range of DSP-based tuners with Sharx in car radios such as the Modena & Lausanne RD 148.

SpeakEasy phase I

From 1990 to 1995, the goal of the SpeakEasy program was to demonstrate a radio for the U.S. Air Force tactical ground air control party that could operate from 2 MHz to 2 GHz, and thus could interoperate with ground force radios (frequency-agile VHF, FM, and SINCGARS), Air Force radios (VHF AM), Naval Radios (VHF AM and HF SSB teleprinters) and satellites (microwave QAM). Some particular goals were to provide a new signal format in two weeks from a standing start, and demonstrate a radio into which multiple contractors could plug parts and software.

The project was demonstrated at the TF-XXI Advanced War-fighting Exercise, and demonstrated all of these goals in a non-production radio. There was some discontent with these early software radios’ failure to adequately filter out-of-band emissions, with their support for only the simplest interoperable modes of the existing radios, and with their tendency to lose connectivity or crash unexpectedly. The cryptographic processor could not change context fast enough to keep several radio conversations on the air at once. The software architecture, though practical enough, bore no resemblance to any other. The SpeakEasy architecture was refined at the MMITS Forum between 1996 and 1999 and inspired the DoD integrated process team (IPT) for programmable modular communications systems (PMCS) to proceed with what became the Joint Tactical Radio System (JTRS).

The basic arrangement of the radio receiver used an antenna feeding an amplifier and down-converter (see Frequency mixer) feeding an automatic gain control, which fed an analog-to-digital converter that was on a computer VME bus with a lot of digital signal processors (Texas Instruments C40s). The transmitter had digital to analog converters on the PCI bus feeding an up converter (mixer) that led to a power amplifier and antenna. The very wide frequency range was divided into a few sub-bands with different analog radio technologies feeding the same analog to digital converters. This has since become a standard design scheme for wideband software radios.
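
The sub-band arrangement can be pictured with the toy Python sketch below; the band edges and front-end names are invented for illustration and are not SpeakEasy's actual band plan.

```python
# Hypothetical illustration of the sub-band scheme described above: the
# wide tuning range is split across a few analog front ends, and software
# picks which one feeds the shared ADC based on the requested frequency.
SUB_BANDS = [
    # (low_hz, high_hz, front_end_name) -- an invented band plan.
    (2e6,    30e6,   "HF front end"),
    (30e6,   400e6,  "VHF/UHF front end"),
    (400e6,  2000e6, "microwave front end"),
]

def select_front_end(freq_hz):
    """Return the name of the analog front end covering `freq_hz`."""
    for low, high, name in SUB_BANDS:
        if low <= freq_hz < high:
            return name
    raise ValueError(f"{freq_hz / 1e6:.1f} MHz is outside the tuning range")

assert select_front_end(7.2e6) == "HF front end"
assert select_front_end(225e6) == "VHF/UHF front end"
```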

SpeakEasy phase II

The goal was to get a more quickly reconfigurable architecture, i.e., several conversations at once, in an open software architecture, with cross-channel connectivity (the radio can “bridge” different radio protocols). The secondary goals were to make it smaller, cheaper, and lighter.

The project produced a demonstration radio only fifteen months into a three-year research project. This demonstration was so successful that further development was halted, and the radio went into production with only a 4 MHz to 400 MHz range.

The software architecture identified standard interfaces for the different modules of the radio:

  • “radio frequency control” to manage the analog parts of the radio;
  • “modem control” to manage resources for modulation and demodulation schemes (FM, AM, SSB, QAM, etc.);
  • “waveform processing” modules to actually perform the modem functions;
  • “key processing” and “cryptographic processing” to manage the cryptographic functions;
  • a “multimedia” module for voice processing;
  • a “human interface” providing local or remote controls;
  • a “routing” module for network services; and
  • a “control” module to keep it all straight.

The modules are said to communicate without a central operating system. Instead, they send messages over the PCI computer bus to each other with a layered protocol.
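
A toy Python sketch of that message-passing style is shown below; the module and topic names merely echo the list above, and the in-process bus is a stand-in for the layered protocol running over the PCI bus, not a reconstruction of the actual SpeakEasy software.

```python
# Named modules exchange messages over a shared bus rather than calling
# each other directly; there is no central operating system coordinating them.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = MessageBus()

# "modem control" asks "waveform processing" to load a new waveform,
# and "radio frequency control" is told to retune the analog hardware.
bus.subscribe("waveform.load", lambda p: print("waveform processing: loading", p))
bus.subscribe("rf.retune", lambda p: print("radio frequency control: tuning to", p, "Hz"))

bus.publish("waveform.load", "FM voice")
bus.publish("rf.retune", 52_000_000)
```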

As a military project, the radio strongly distinguished “red” (unsecured secret data) and “black” (cryptographically secured data).

The project was the first known to use FPGAs (field programmable gate arrays) for digital processing of radio data. The time to reprogram these was an issue limiting application of the radio. Today, the time to write a program for an FPGA is still significant, but the time to download a stored FPGA program is around 20 milliseconds. This means an SDR could change transmission protocols and frequencies in one fiftieth of a second, probably not an intolerable interruption for that task.

2000s

In 1994, the SpeakEasy SDR system used a Texas Instruments TMS320C30 CMOS digital signal processor (DSP), along with several hundred integrated circuit chips, with the radio filling the back of a truck. By the late 2000s, the emergence of RF CMOS technology made it practical to scale down an entire SDR system onto a single mixed-signal system-on-a-chip, which Broadcom demonstrated with the BCM21551 processor in 2007. The Broadcom BCM21551 has practical commercial applications, for use in 3G mobile phones.

Military usage

United States

The Joint Tactical Radio System (JTRS) was a program of the US military to produce radios that provide flexible and interoperable communications. Examples of radio terminals that require support include hand-held, vehicular, airborne and dismounted radios, as well as base-stations (fixed and maritime).

This goal is achieved through the use of SDR systems based on an internationally endorsed open Software Communications Architecture (SCA). This standard uses CORBA on POSIX operating systems to coordinate various software modules.

The program is providing a flexible new approach to meet diverse soldier communications needs through software-programmable radio technology. All functionality and expandability are built upon the SCA.

The flexibility of SDRs, however, results in expensive complexity, an inability to optimize for any one task, slower adoption of the latest technology, and rarely a direct tactical user need (since all users must pick and stay with the same radio if they are to communicate).

The SCA, despite its military origin, is under evaluation by commercial radio vendors for applicability in their domains. The adoption of general-purpose SDR frameworks outside of military, intelligence, experimental and amateur uses, however, is inherently hampered by the fact that civilian users can more easily settle for a fixed architecture, optimized for a specific function, and as such more economical in mass-market applications. Still, software defined radio’s inherent flexibility can yield substantial benefits in the longer run, once the fixed costs of implementing it have fallen enough to overtake the cost of iterated redesign of purpose-built systems. This explains the increasing commercial interest in the technology.

SCA-based infrastructure software and rapid development tools for SDR education and research are provided by the Open Source SCA Implementation – Embedded (OSSIE) project. The Wireless Innovation Forum funded the SCA Reference Implementation (SCARI) project, an open-source implementation of the SCA specification, which can be downloaded for free.

Amateur and home use

Microtelecom Perseus – an HF SDR for the amateur radio market

A typical amateur software radio uses a direct conversion receiver. Unlike direct conversion receivers of the more distant past, the mixer technologies used are based on the quadrature sampling detector and the quadrature sampling exciter.

The receiver performance of this line of SDRs is directly related to the dynamic range of the analog-to-digital converters (ADCs) utilized. Radio frequency signals are down-converted to the audio frequency band, which is sampled by a high-performance audio frequency ADC. First-generation SDRs used a 44.1 kHz PC sound card to provide ADC functionality. Newer software defined radios use embedded high-performance ADCs that provide higher dynamic range and are more resistant to noise and RF interference.
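
A minimal sketch of this receive path is shown below (plain Python/NumPy, not any particular product's code): the two sound-card channels carry I and Q from the quadrature sampling detector, and software shifts, filters, and takes the real part to recover SSB audio; the tuning offset and filter are illustrative.

```python
import numpy as np

def ssb_demodulate(i_channel, q_channel, sample_rate, tune_hz):
    """Sketch of how a QSD-based amateur SDR recovers SSB audio.

    i_channel, q_channel : the two sound-card channels carrying I and Q
                           from the quadrature sampling detector (assumed).
    sample_rate          : sound-card sample rate in Hz (e.g. 48 kHz).
    tune_hz              : offset of the wanted signal within the audio band.
    """
    iq = i_channel + 1j * q_channel
    n = np.arange(len(iq))
    # Shift the wanted signal to 0 Hz, exactly as a hardware mixer would.
    shifted = iq * np.exp(-2j * np.pi * tune_hz * n / sample_rate)
    # Simple low-pass (moving average) standing in for a proper SSB filter;
    # the real part of the result is the recovered audio.
    kernel = np.ones(16) / 16
    return np.real(np.convolve(shifted, kernel, mode="same"))
```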

A fast PC performs the digital signal processing (DSP) operations using software specific for the radio hardware. Several software radio implementations use the open source SDR library DttSP.

The SDR software performs all of the demodulation, filtering (both radio frequency and audio frequency), and signal enhancement (equalization and binaural presentation). Uses include every common amateur modulation: Morse code, single-sideband modulation, frequency modulation, amplitude modulation, and a variety of digital modes such as radioteletype, slow-scan television, and packet radio. Amateurs also experiment with new modulation methods: for instance, the DREAM open-source project decodes the COFDM technique used by Digital Radio Mondiale.

There is a broad range of hardware solutions for radio amateurs and home use. There are professional-grade transceiver solutions, e.g. the Zeus ZS-1 or the Flex Radio; home-brew solutions, e.g. the PICaSTAR transceiver and the SoftRock SDR kit; and starter or professional receiver solutions, e.g. the FiFi SDR (Fichten Field Day Radio) kit for shortwave, or the Quadrus SDR phase-coherent multi-channel receiver for shortwave or VHF/UHF in direct digital mode of operation.

RTL-SDR

Internals of a low-cost DVB-T USB dongle that uses Realtek RTL2832U (square IC on the right) as the controller and Rafael Micro R820T (square IC on the left) as the tuner.

Eric Fry discovered that some common low-cost DVB-T USB dongles built around the Realtek RTL2832U controller and a compatible tuner, e.g. the Elonics E4000 or the Rafael Micro R820T, can be used as a wide-band (3 MHz) SDR receiver. Experiments proved the capability of this setup to analyze the Perseids meteor shower using Graves radar signals. The project is maintained at Osmocom.
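
A minimal capture sketch is shown below; it assumes the third-party pyrtlsdr Python package and the Osmocom rtl-sdr drivers are installed, and the frequency and sample rate are arbitrary illustrative values.

```python
# Tune an RTL2832U dongle to the FM broadcast band and read raw I/Q samples.
# pyrtlsdr is an assumption here -- it is a separate Python binding, not part
# of the Osmocom rtl-sdr tools themselves.
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6      # Hz, within the dongle's usable range
sdr.center_freq = 100.0e6      # Hz, an arbitrary broadcast FM frequency
sdr.gain = "auto"

samples = sdr.read_samples(256 * 1024)   # complex I/Q samples
sdr.close()
# `samples` can now be fed to any software demodulator, such as the FM sketch earlier.
```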

USRP

More recently, GNU Radio, used primarily with the Universal Software Radio Peripheral (USRP), combines a USB 2.0 interface, an FPGA, and a high-speed set of analog-to-digital and digital-to-analog converters with reconfigurable free software. Its sampling and synthesis bandwidth (30-120 MHz) is a thousand times that of PC sound cards, which enables wideband operation.
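
A minimal GNU Radio flowgraph might look like the sketch below; a simulated signal source stands in for a USRP so it runs without hardware, and the rates and file name are illustrative.

```python
# A minimal GNU Radio flowgraph: a simulated complex tone is written to a
# file. With UHD installed, the signal source could be replaced by a USRP
# source block; the blocks used here are standard GNU Radio blocks.
from gnuradio import gr, analog, blocks

class ToneFlowgraph(gr.top_block):
    def __init__(self, samp_rate=1e6):
        gr.top_block.__init__(self, "tone_flowgraph")
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 1.0)
        head = blocks.head(gr.sizeof_gr_complex, int(samp_rate))  # stop after 1 s
        sink = blocks.file_sink(gr.sizeof_gr_complex, "tone.iq")
        self.connect(src, head, sink)

if __name__ == "__main__":
    tb = ToneFlowgraph()
    tb.run()   # writes one second of complex samples to tone.iq
```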

HPSDR

The HPSDR (High Performance Software Defined Radio) project uses a 16-bit 135 MSPS analog-to-digital converter that provides performance over the range 0 to 55 MHz comparable to that of a conventional analogue HF radio. The receiver will also operate in the VHF and UHF range using either mixer image or alias responses. Interface to a PC is provided by a USB 2.0 interface, although Ethernet could be used as well. The project is modular and comprises a backplane onto which other boards plug in. This allows experimentation with new techniques and devices without the need to replace the entire set of boards. An exciter provides 1/2 W of RF over the same range or into the VHF and UHF range using image or alias outputs.
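
The VHF/UHF reception via alias responses relies on the standard Nyquist-zone folding relation; the short sketch below computes where an undersampled signal lands, using the 135 MSPS figure quoted above.

```python
def alias_frequency(f_signal_hz, sample_rate_hz):
    """Frequency at which an undersampled signal appears after the ADC.

    Standard folding (Nyquist-zone) relation; the 135 MSPS figure below
    simply follows the ADC rate quoted for the HPSDR project above.
    """
    f = f_signal_hz % sample_rate_hz
    return f if f <= sample_rate_hz / 2 else sample_rate_hz - f

fs = 135e6
# A 144.2 MHz (2 m band) signal folds down into the converter's first
# Nyquist zone, where the existing 0-55 MHz processing chain can handle it.
print(alias_frequency(144.2e6, fs) / 1e6, "MHz")   # about 9.2 MHz
```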

WebSDR

WebSDR is a project initiated by Pieter-Tjerk de Boer providing access via a web browser to multiple SDR receivers worldwide, together covering the complete shortwave spectrum. More recently, he has analyzed chirp transmitter signals using the coupled system of receivers.

Other applications

On account of its increasing accessibility, with lower cost hardware, more software tools and documentation, the applications of SDR have expanded past their primary and historic use cases. SDR is now being used in areas such as wildlife tracking, radio astronomy, medical imaging research, and art.
