Speed Matters: How Ethernet Went From 3 Mbps to 100 Gbps… and Beyond

Archive for November, 2011

Alcatel-Lucent

We’ve seen the future and it’s (still) copper

Alcatel-Lucent is promoting a commercial broadband-over-copper solution. Its new equipment design delivers better broadband speeds using standard VDSL2 (Very-high-speed Digital Subscriber Line 2) plus vectoring, a crosstalk-cancellation technique that Alcatel-Lucent says boosts speeds significantly. The telecom giant's message to communications service providers is that the future is still copper, but now a future with better data speeds and capacity, capable of broadband speeds of 100 Mbps and beyond.

Alcatel-Lucent fourth quarter 2010 earnings

Alcatel-Lucent delivering on its 3-year transformation journey

Further strong market & company improvement expected in 2011

Key numbers for the year 2010

  • Revenues of Euro 15.996 billion, up 5.5% year-over-year
  • Adjusted gross profit of Euro 5.572 billion or 34.8% of revenues
  • Adjusted operating income of Euro 288 million or 1.8% of revenues
  • Operating cash flow of Euro 851 million
  • Net (debt)/cash of Euro 377 million as of December 31, 2010

Interference

While high-speed copper technology can theoretically reach transfer rates exceeding 100 Mb/s, it has not been broadly implemented in the access network because of the limitations of copper as a medium. Copper suffers from electromagnetic interference, both from ambient environmental sources and from the signals transmitted over the other wires bundled in a shared cable (crosstalk). This interference dramatically reduces signal quality and the practical distance a faster signal can travel.

Research Reports

Analyses from several technical research reports are discussed below.

  • SAN JOSE, Calif. — For making possible the global phone network — and with it 976 numbers, home-delivered Chinese food and “Larry King Live” — the world owes copper wire a debt of gratitude. Copper wire, though inexpensive and ubiquitous, has had a klutzy reputation as a relatively crude transmission medium. It was expected to give way to snazzy, speedy and expensive fiber optics for the next communications revolution — that of high-speed data exchanges between computers.
  • “Use of copper in last mile communication cannot be completely removed in telecommunication but as technologies such as GEPON, FTTH gain popularity fiber would gain far bigger share as preferred medium in last mile connectivity,” concluded Tamhane.
  • Mylaraiah JN, country technical manager at Tyco Electronics, has also commented on the relative prices of copper and fiber cables.

Market Analysis of Copper Cables

Shielded Twisted Pair (STP) Cables Projected to Have the Highest Growth of All Copper Cables Used in Structured Cabling Systems (SCS), 12.05.2011

HUMMELSTOWN, PENNSYLVANIA — FTM Consulting, Inc. announced that its latest study, “U.S. Structured Cabling System Copper Cable Forecast”, examines and forecasts the three major types of copper cables: UTP, STP and coax. Frank Murawski, President, said, “STP cables are expected to have the highest growth, at 26.4%, over the next five years. The total copper cable market for SCS is forecast to grow from $4 billion in 2011, at a 20.8% annual rate, to more than $10 billion by 2016. Most of this growth is driven by existing installations upgrading from early Cat 5 UTP cabling, plus the need for copper cable in new networking applications such as VoIP and data centers.” The study includes a chart showing the distribution of the three copper cable types for 2011 and 2016.

The study provides quantitative data on the following:

  • How large will the shielded cable market be compared to UTP cables?
  • Cat 6 versus Cat 6a — what will be the larger market in the future?

Five-year forecasts to 2016 include:

  • UTP and shielded cables, by plenum and non-plenum, by category;
  • Coax cable, by plenum and non-plenum, by cable type;
  • Future technology outlook: copper cabling beyond 100 Gbps, at 400 and 1000 Gbps.

UTP cables are expected to continue to dominate the market, with a 92.6% share in 2011, increasing to a 95.6% share in 2016. STP cables, even with their high growth rate, are expected to capture only a minor share, growing from 2.2% in 2011 to 2.8% by 2016; this includes cables built to the anticipated TIA standard for Cat 7 and Cat 7a STP. STP cables are viewed as a niche product for smaller installations that need higher bandwidth than UTP cables provide but, for financial reasons, are reluctant to upgrade to fiber cabling. Coax cable is projected to decline as its primary cabling application, security video camera networks, evolves from coax to high-performance UTP cable capable of supporting the video signals.

Detailed forecasts for copper cable can be found in the study, including projections for usage beyond 100 Gbps, where multiple lanes of 10 Gbps or 25 Gbps copper cabling would support 400 and 1000 Gbps over limited distances in the future.

Copper Infrastructure

The copper infrastructure does not need to be replaced when the technology running over it is upgraded, and copper-based alternatives to fiber are being developed for commercial use within the next year or so. Most of the world’s existing wire-line access infrastructure is still copper-based. By exploiting that copper infrastructure, network operators might soon turn to these developing technologies to provide residential customers with the bandwidth they need. The alternative, fiber optics, is available to only about 20 percent of U.S. businesses, according to Carl Grivner, president of XO Communications, while the existing copper infrastructure is available nationwide.

  •  “Advances in copper technology deliver speeds many times faster and at lower cost than ever envisioned during the early 2000s when fiber was considered the only mechanism for broadband access,” said Grivner in a letter published Monday in The Hill, a political newspaper.
  • “Companies like mine deploy Ethernet over Copper, delivering speeds up to 45 Mbps where we have access to this vital–and existing–infrastructure.”

FPGA Design

FPGA design and programming

To define the behavior of the FPGA, the user provides a hardware description language (HDL) or a schematic design. The HDL form is more suited to work with large structures because it’s possible to just specify them numerically rather than having to draw every piece by hand. However, schematic entry can allow for easier visualisation of a design.

Then, using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fitted to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company’s proprietary place-and-route software. The user will validate the map, place and route results via timing analysis, simulation, and other verification methodologies. Once the design and validation process is complete, the binary file generated (also using the FPGA company’s proprietary software) is used to (re)configure the FPGA. This file is transferred to the FPGA/CPLD via a serial interface (JTAG) or to an external memory device like an EEPROM.

The most common HDLs are VHDL and Verilog. In an attempt to reduce the complexity of designing in HDLs, which have been compared to assembly languages, there are moves to raise the abstraction level through the introduction of alternative languages. National Instruments’ LabVIEW graphical programming language (sometimes referred to as “G”) has an FPGA add-in module available to target and program FPGA hardware.

To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called IP cores, and are available from FPGA vendors and third-party IP suppliers (rarely free, and typically released under proprietary licenses). Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license), and other sources.

In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially, the RTL description in VHDL or Verilog is simulated using test benches that stimulate the system and let the designer observe the results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description, where simulation is repeated to confirm that synthesis proceeded without errors. Finally, the design is laid out in the FPGA, at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.
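
To make the first of these stages concrete, here is a minimal, self-contained sketch of a self-checking test bench of the kind used for RTL simulation. The design under test (a 2-to-1 multiplexer) and all module and signal names are illustrative, not taken from any particular vendor flow.

```verilog
// A hypothetical design under test: a 2-to-1 multiplexer.
`timescale 1ns / 1ps

module mux2 (input wire a, b, sel, output wire y);
    assign y = sel ? b : a;
endmodule

// A self-checking test bench: drive stimulus, then compare outputs.
module tb_mux2;
    reg  a, b, sel;
    wire y;

    // Instantiate the design under test.
    mux2 dut (.a(a), .b(b), .sel(sel), .y(y));

    initial begin
        a = 0; b = 1;
        sel = 0; #10;                          // apply inputs, wait 10 ns
        if (y !== a) $display("FAIL: sel=0, y=%b", y);
        sel = 1; #10;
        if (y !== b) $display("FAIL: sel=1, y=%b", y);
        $display("test complete");
        $finish;
    end
endmodule
```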

 

FPGA Applications

Applications in High Performance Computing

  • FPGAs are increasingly used in conventional high-performance computing applications, where computational kernels (fast Fourier transform, convolution, etc.) are performed on the FPGA instead of on a microprocessor.
  • FPGA implementations of these kernels can offer order-of-magnitude performance improvements over microprocessors.
  • There are also benefits in power consumption: an FPGA implementation of an FFT or a convolution is expected to consume less power than a microprocessor.
  • The low power usage is due to the lower clock rate and the absence of wasted cycles for instruction fetch/decode in FPGAs.
  • The inherent parallelism of the logic resources on an FPGA allows for considerable computational throughput even at low clock rates.
  • The flexibility of the FPGA allows for even higher performance by trading off precision and range in the number format for an increased number of parallel arithmetic units (see the sketch after this list).
  • For example, a floating-point adder consumes far more FPGA resources (LUTs and flip-flops) than a fixed-point adder.
  • However, the latest Xilinx Virtex-6 FPGAs have as many as 2048 DSP blocks, allowing hundreds of floating-point adders/multipliers.
  • Example applications: an AES encryption circuit implemented on a Xilinx Virtex-5 FPGA running at 100 MHz may be 10 times faster than a highly optimized AES implementation running on a recent CPU. Similar improvements (an order of magnitude or more) may be obtained for other computationally intensive applications such as N-body simulation, image processing and manipulation, and image registration.
  • The adoption of FPGAs in high-performance computing is currently limited by the complexity of FPGA design compared to conventional software development and by the turn-around times of current design tools; place-and-route for a complex design may take a day to complete.
  • FPGAs are especially well suited to any area or algorithm that can exploit the massive parallelism offered by their architecture.
  • One such area is code breaking, in particular brute-force attacks on cryptographic algorithms.
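
As a concrete illustration of the precision-versus-parallelism trade-off mentioned above, here is a minimal Verilog sketch of a bank of narrow fixed-point multiply-accumulate lanes, all operating in the same clock cycle. The module name, operand widths, and lane count are illustrative assumptions, not taken from any particular design.

```verilog
// LANES narrow fixed-point multiply-accumulate units in parallel.
// Trading floating point for 8-bit fixed point lets many lanes fit
// in the same FPGA fabric and update every clock cycle.
module parallel_mac #(
    parameter LANES = 8
) (
    input  wire                clk,
    input  wire                rst,
    input  wire [LANES*8-1:0]  a,    // LANES packed 8-bit operands
    input  wire [LANES*8-1:0]  b,
    output wire [LANES*20-1:0] acc   // one 20-bit accumulator per lane
);
    genvar i;
    generate
        for (i = 0; i < LANES; i = i + 1) begin : lane
            reg [19:0] sum;          // per-lane accumulator register
            always @(posedge clk)
                if (rst) sum <= 20'd0;
                else     sum <= sum + a[i*8 +: 8] * b[i*8 +: 8];
            assign acc[i*20 +: 20] = sum;
        end
    endgenerate
endmodule
```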

Applications of FPGAs

  • Applications of FPGAs include digital signal processing, software-defined radio, aerospace and defense systems, ASIC prototyping, medical imaging, computer vision, speech recognition, cryptography, bioinformatics, computer hardware emulation, radio astronomy, metal detection and a growing range of other areas.
  • FPGAs originally began as competitors to CPLDs and competed in a similar space, that of glue logic for PCBs.
  • As their size, capabilities, and speed increased, they began to take over larger and larger functions, to the point where some are now marketed as full systems-on-chip (SoCs).
  • Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications that had traditionally been the sole preserve of DSPs began to incorporate FPGAs instead.
  • Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small.
  • For these low-volume applications, the premium that companies pay in hardware costs per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC for a low-volume application.
  • Today, new cost and performance dynamics have broadened the range of viable applications.

 

What is Verilog?

In the semiconductor and electronic design industry, Verilog is a hardware description language (HDL) used to model electronic systems. Verilog HDL, not to be confused with VHDL (a competing language), is most commonly used in the design, verification, and implementation of digital logic chips at the register transfer level (RTL) of abstraction. It is also used in the verification of analog and mixed-signal circuits.

Overview

Hardware description languages such as Verilog differ from software programming languages because they include ways of describing the propagation of time and signal dependencies (sensitivity). There are two assignment operators: a blocking assignment (=) and a non-blocking assignment (<=). The non-blocking assignment allows designers to describe a state-machine update without needing to declare and use temporary storage variables, which a general-purpose programming language would require to hold intermediate operand values. Because these concepts are part of Verilog’s language semantics, designers could quickly write descriptions of large circuits in a relatively compact and concise form. At the time of Verilog’s introduction (1984), it represented a tremendous productivity improvement for circuit designers who were already using graphical schematic-capture software and specially written programs to document and simulate electronic circuits.
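
A minimal sketch of the two assignment operators in a clocked block; the module and signal names are illustrative. With non-blocking assignments, every right-hand side is sampled before any register updates, so two registers can be swapped without a temporary variable:

```verilog
// Contrasting blocking (=) and non-blocking (<=) assignment: with
// non-blocking assignment, a and b are updated with the values they
// held *before* the clock edge, so the swap needs no temporary.
module swap_demo (
    input  wire       clk,
    input  wire       load,
    input  wire [7:0] init_a,
    input  wire [7:0] init_b,
    output reg  [7:0] a,
    output reg  [7:0] b
);
    always @(posedge clk) begin
        if (load) begin
            a <= init_a;
            b <= init_b;
        end else begin
            a <= b;   // both right-hand sides are sampled before
            b <= a;   // any update occurs, so a and b swap cleanly
        end
    end
endmodule
```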

The designers of Verilog wanted a language with syntax similar to the C programming language, which was already widely used in engineering software development. Verilog is case-sensitive, has a basic preprocessor (though less sophisticated than that of ANSI C/C++), equivalent control-flow keywords (if/else, for, while, case, etc.), and compatible operator precedence. Syntactic differences include variable declaration (Verilog requires bit-widths on net/reg types), demarcation of procedural blocks (begin/end instead of curly braces {}), and many other minor differences.

A Verilog design consists of a hierarchy of modules. Modules encapsulate design hierarchy, and communicate with other modules through a set of declared input, output, and bidirectional ports. Internally, a module can contain any combination of the following: net/variable declarations (wire, reg, integer, etc.), concurrent and sequential statement blocks, and instances of other modules (sub-hierarchies). Sequential statements are placed inside a begin/end block and executed in sequential order within the block. But the blocks themselves are executed concurrently, qualifying Verilog as a dataflow language.
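
A minimal sketch of this hierarchy, with illustrative module names: a full adder built from two instances of a half-adder sub-module, communicating through declared ports.

```verilog
// A sub-module: one-bit half adder.
module half_adder (
    input  wire a,
    input  wire b,
    output wire sum,
    output wire carry
);
    assign sum   = a ^ b;
    assign carry = a & b;
endmodule

// A parent module instantiating two half adders to form a full adder.
module full_adder (
    input  wire a,
    input  wire b,
    input  wire cin,
    output wire sum,
    output wire cout
);
    wire s1, c1, c2;

    half_adder ha0 (.a(a),  .b(b),   .sum(s1),  .carry(c1));
    half_adder ha1 (.a(s1), .b(cin), .sum(sum), .carry(c2));

    assign cout = c1 | c2;
endmodule
```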

Verilog’s concept of a ‘wire’ consists of both signal values (4-state: 1, 0, floating, undefined) and strengths (strong, weak, etc.). This system allows abstract modeling of shared signal lines, where multiple sources drive a common net. When a wire has multiple drivers, the wire’s (readable) value is resolved by a function of the source drivers and their strengths.
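
A minimal sketch of multiple drivers resolving on one net; the names are illustrative. Each continuous assignment either drives the shared wire or releases it to high impedance (z); if both drivers are enabled with conflicting values, the wire resolves to x (undefined):

```verilog
// Two drivers sharing one net. Disabled drivers release the bus to
// high impedance (z); simultaneous conflicting drives resolve to x.
module shared_bus_demo (
    input  wire drv_a_en, drv_a_val,
    input  wire drv_b_en, drv_b_val,
    output wire bus
);
    assign bus = drv_a_en ? drv_a_val : 1'bz;
    assign bus = drv_b_en ? drv_b_val : 1'bz;  // second driver, same net
endmodule
```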

A subset of statements in the Verilog language is synthesizable. Verilog modules that conform to a synthesizable coding style, known as RTL (register transfer level), can be physically realized by synthesis software. Synthesis software algorithmically transforms the (abstract) Verilog source into a netlist, a logically equivalent description consisting only of elementary logic primitives (AND, OR, NOT, flip-flops, etc.) that are available in a specific FPGA or VLSI technology. Further manipulations of the netlist ultimately lead to a circuit fabrication blueprint (such as a photomask set for an ASIC, or a bitstream file for an FPGA).
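
A minimal sketch of synthesizable RTL, with illustrative names: an 8-bit counter with synchronous reset, which a synthesis tool can map directly onto flip-flops and an adder. Simulation-only constructs such as delays (#10) or $display would not belong in such a block.

```verilog
// Synthesizable RTL: an 8-bit counter with synchronous reset.
// This coding style maps onto eight flip-flops plus an incrementer.
module counter8 (
    input  wire       clk,
    input  wire       rst,
    output reg  [7:0] count
);
    always @(posedge clk) begin
        if (rst)
            count <= 8'd0;
        else
            count <= count + 8'd1;
    end
endmodule
```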

History

Beginning

Verilog was invented by Phil Moorby and Prabhu Goel during the winter of 1983/1984 at Automated Integrated Design Systems (renamed Gateway Design Automation in 1985) as a hardware modeling language. Gateway Design Automation was purchased by Cadence Design Systems in 1990. Cadence now has full proprietary rights to Gateway’s Verilog and the Verilog-XL logic simulator. Originally, Verilog was intended only to describe designs and allow simulation; support for synthesis was added later.

Verilog-95

With the increasing success of VHDL at the time, Cadence decided to make the language available for open standardization. Cadence transferred Verilog into the public domain under the Open Verilog International (OVI) organization (now known as Accellera). Verilog was later submitted to the IEEE and became IEEE Standard 1364-1995, commonly referred to as Verilog-95.

In the same time frame, Cadence initiated the creation of Verilog-A to put standards support behind its analog simulator, Spectre. Verilog-A was never intended to be a standalone language; it is a subset of Verilog-AMS, which encompasses Verilog-95.

Verilog 2001

Extensions to Verilog-95 were submitted back to IEEE to cover the deficiencies that users had found in the original Verilog standard. These extensions became IEEE Standard 1364-2001 known as Verilog-2001.

Verilog-2001 is a significant upgrade from Verilog-95. First, it adds explicit support for (two’s complement) signed nets and variables. Previously, code authors had to perform signed operations using awkward bit-level manipulations (for example, the carry-out bit of a simple 8-bit addition required an explicit description of the Boolean algebra that determines its correct value). The same function in Verilog-2001 can be described more succinctly with the built-in operators +, -, /, *, and >>>. A generate/endgenerate construct (similar to VHDL’s generate statement) allows Verilog-2001 to control instance and statement instantiation through normal decision operators (case/if/else). Using generate/endgenerate, Verilog-2001 can instantiate an array of instances, with control over the connectivity of the individual instances. File I/O was improved by several new system tasks. Finally, a few syntax additions were introduced to improve code readability (e.g. always @*, named parameter override, C-style function/task/module header declarations).
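
A minimal sketch combining two of these Verilog-2001 features, built-in signed arithmetic and a generate loop; the module names, widths, and parameter value are illustrative.

```verilog
// Verilog-2001 features: signed arithmetic and a generate loop.
module gen_demo #(
    parameter N = 4
) (
    input  wire              clk,
    input  wire signed [7:0] a, b,
    output wire signed [8:0] sum,
    input  wire [N-1:0]      d,
    output wire [N-1:0]      q
);
    // Signed addition with the built-in operator; no manual
    // sign/carry handling is needed, unlike in Verilog-95.
    assign sum = a + b;

    // Instantiate N flip-flops, one per bit, via a generate loop.
    genvar i;
    generate
        for (i = 0; i < N; i = i + 1) begin : bit_regs
            dff u_dff (.clk(clk), .d(d[i]), .q(q[i]));
        end
    endgenerate
endmodule

module dff (input wire clk, d, output reg q);
    always @(posedge clk) q <= d;
endmodule
```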

Verilog-2001 is the dominant flavor of Verilog supported by the majority of commercial EDA software packages.

Verilog 2005

Not to be confused with SystemVerilog, Verilog 2005 (IEEE Standard 1364-2005) consists of minor corrections, spec clarifications, and a few new language features (such as the uwire keyword).

A separate part of the Verilog standard, Verilog-AMS, attempts to integrate analog and mixed signal modeling with traditional Verilog.

SystemVerilog

SystemVerilog is a superset of Verilog-2005, with many new features and capabilities to aid design-verification and design-modeling. As of 2009, the SystemVerilog and Verilog language standards were merged into SystemVerilog 2009 (IEEE Standard 1800-2009).

The advent of hardware verification languages such as OpenVera and Verisity’s e language encouraged the development of Superlog by Co-Design Automation, Inc., which was later purchased by Synopsys. The foundations of Superlog and Vera were donated to Accellera, and this work later became IEEE Standard 1800-2005: SystemVerilog.

Verilog

Verilog is a hardware description language (HDL). A hardware description language is a language used to describe a digital system: for example, a network switch, a microprocessor, a memory, or a simple flip-flop. This means that, by using an HDL, one can describe any digital hardware at any level of abstraction.
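
A minimal sketch of that idea, using illustrative names: storage elements described at two different levels, behaviorally (a D flip-flop) and at the gate level (a D latch built from primitive gates).

```verilog
// Behavioral description: a D flip-flop with asynchronous reset.
module dff_behavioral (
    input  wire clk, rst_n, d,
    output reg  q
);
    always @(posedge clk or negedge rst_n)
        if (!rst_n) q <= 1'b0;
        else        q <= d;
endmodule

// Gate-level description: a gated D latch built from NAND/NOT
// primitives, with the classic cross-coupled NAND pair as storage.
module d_latch_gates (
    input  wire en, d,
    output wire q, q_n
);
    wire d_n, s, r;
    not  g0 (d_n, d);
    nand g1 (s, d,   en);
    nand g2 (r, d_n, en);
    nand g3 (q,   s, q_n);
    nand g4 (q_n, r, q);
endmodule
```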

Design Styles

Verilog, like any other hardware description language, permits a design to be created in either a bottom-up or a top-down methodology.

Bottom-Up Design

The traditional method of electronic design is bottom-up. Each design is performed at the gate level using standard gates (refer to the Digital Section for more details). With the increasing complexity of new designs, this approach is nearly impossible to maintain. New systems consist of ASICs or microprocessors with a complexity of thousands of transistors. These traditional bottom-up designs have to give way to new structural, hierarchical design methods; without them, it would be impossible to handle the new complexity.

Top-Down Design

The design style desired by all designers is top-down. A true top-down design allows early testing, easy substitution of different technologies, and a structured system design, and it offers many other advantages. But a pure top-down design is very difficult to follow; because of this, most designs are a mix of both methods, implementing key elements of each design style.
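
A minimal sketch of top-down partitioning in Verilog, with all module and signal names illustrative: the top level fixes the interfaces between blocks first, and each block can then be refined, tested, or re-targeted independently.

```verilog
// Top-down partitioning: the top level fixes block interfaces first.
module system_top (
    input  wire       clk,
    input  wire       rst,
    input  wire [7:0] rx_data,
    output wire [7:0] tx_data
);
    wire [7:0] processed;

    // Sub-blocks are instantiated against agreed interfaces before
    // their internals are finalized.
    input_stage  u_in  (.clk(clk), .rst(rst), .d(rx_data),   .q(processed));
    output_stage u_out (.clk(clk), .rst(rst), .d(processed), .q(tx_data));
endmodule

// Placeholder implementations, to be refined later in the flow.
module input_stage (input wire clk, rst, input wire [7:0] d, output reg [7:0] q);
    always @(posedge clk) q <= rst ? 8'd0 : d;
endmodule

module output_stage (input wire clk, rst, input wire [7:0] d, output reg [7:0] q);
    always @(posedge clk) q <= rst ? 8'd0 : d;
endmodule
```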