
Xilinx’ SDNet: where software defined networks truly begin
By Julien Happich

FPGAs are increasingly competing with ASICs and ASSPs, even in large-volume applications, mainly because they allow easy bug fixing during development and on-the-fly updates of hardware functions once deployed. The trade-off is mostly end-product cost versus design flexibility. Not only are FPGAs getting more cost-competitive with every new node, Xilinx is now promoting them as the only viable solution for what the company calls “Softly” Defined Networks, a step in flexibility that goes far beyond Software Defined Networks (SDN) as they are conceived today.

In contrast to traditional SDN architectures, which employ fixed data plane hardware with a narrow southbound API connection to the control plane, Softly Defined Networks are based upon a programmable data plane with content intelligence and a rich southbound API connection to the control plane. By using FPGA fabric instead of ASICs or ASSPs for the data plane, Softly Defined Networks are flexible on both the software and hardware counts, while enabling reconfigurable content-aware data pre-processing at wire speed. For network OEMs, carriers and multiple systems operators (MSOs), this translates into new services that can be provisioned on a per-flow basis, with in-service network upgrades while operating at 100 percent line rate.

Key to these “Softly” Defined Networks is Xilinx’ Software Defined Specification Environment, SDNet. Combined with the company’s All Programmable FPGAs, SDNet allows system architects to specify and deploy exact application services without requiring an understanding of the underlying device architecture or a complex programming language (see figure 3). Compared to ASIC- or ASSP-based line cards, the “softly” defined line card envisaged by Xilinx would drastically reduce OpEx (no need to go and physically change hardware) while extending the reach of CapEx, with line cards staying longer in the network infrastructure.

The FPGA vendor has already shipped its SDNet framework to tier-one customers, who are currently evaluating the new packet processing and data forwarding scenarios that the added flexibility gives them. “Network OEMs are still working to understand what will be the actual savings made over CapEx and OpEx, so it is difficult to put hard numbers,” said Gilles Garcia, Director of Marketing & Corporate Global Account Manager for Wired Communications at Xilinx, “but they can expect a 10-fold increase in productivity to develop a new line card,” he added.

“There is fierce competition between server players, and when they are using ASSPs, only software is the differentiator at the control plane,” explained Garcia. “For future networks, we need to transition from 1Gbit to 10 or even 100Gbit line cards, but none of the server farms natively support these speeds, so there is a need for accelerators, for packet processing, for data pre-processing to optimize network integration with specific algorithms, multiple SSD controllers,” Garcia added.

Those driving this transition to software defined networks are mostly telco providers and data centres. Garcia gave us some examples of new services enabled by softly defined networks. These could include specific packet searches and data filtering, say to provision premium services for Netflix video-on-demand with a set Quality of Service (QoS).
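To make the per-flow idea concrete, the following is a minimal sketch in plain Python of the kind of match-and-mark rule such a data plane could be reprogrammed to apply. It is illustrative only: this is not SDNet’s specification language, and the field names, the CDN address range and the DSCP value are assumptions made for the example.

    # Illustrative sketch only, not SDNet syntax: a per-flow match/action rule
    # that marks premium video-on-demand traffic with a preferential DSCP value.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        src_port: int
        dst_ip: str
        dst_port: int
        dscp: int = 0              # DiffServ code point carried in the IP header

    PREMIUM_VOD_SUBNET = "203.0.113."   # hypothetical CDN address range

    def classify(pkt: Packet) -> Packet:
        # Downstream packets from the premium CDN get assured-forwarding video
        # treatment; everything else stays best effort.
        if pkt.src_port == 443 and pkt.src_ip.startswith(PREMIUM_VOD_SUBNET):
            pkt.dscp = 34          # AF41, commonly used for streaming video
        else:
            pkt.dscp = 0           # default, best-effort forwarding
        return pkt

    if __name__ == "__main__":
        flows = [
            Packet("203.0.113.7", 443, "198.51.100.20", 51324),  # premium VoD stream
            Packet("192.0.2.15", 80, "198.51.100.20", 51990),    # ordinary web traffic
        ]
        for p in flows:
            print(classify(p))

In a softly defined line card, an equivalent rule would be compiled into the FPGA data plane and applied at wire speed rather than executed in host software.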
In the future, this flexible approach to designing reconfigurable networks could enable smarter network boxes, perhaps designed with more cache capacity so that more decisions could be made on the data at the line card level rather than the data having to travel all the way to a data centre for processing. This could play in favour of so-called Fog Computing, where data generated by IoT devices and gateways at the edges of the network could be pre-processed and re-routed according to a set of application-specific rules, all configured using the same SDNet software interface.

Other trends facilitated by SDNet include server and storage virtualization, whereby bulk storage capacity made up of pure memory racks could be used across a data centre as several levels of cache, linked to pure computing racks. This is where very high-speed switches would be needed to route the data between storage pools and processing pools, transparently to the end-user. Looking at its ultimate line card implementation, one could conclude that Xilinx aims at displacing ASICs out of the network.
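As a rough illustration of the application-specific edge rules mentioned in the fog-computing scenario above, the short Python sketch below shows a gateway-level rule that aggregates raw IoT readings locally and only forwards a summary upstream when a threshold is crossed. The sensor names, threshold and message format are invented for the example and do not come from Xilinx.

    # Illustrative sketch only, not SDNet syntax: an assumed edge rule that
    # pre-processes sensor readings at the gateway and decides whether the
    # result needs to travel on to the data centre at all.
    from statistics import mean

    TEMP_ALERT_THRESHOLD = 75.0      # hypothetical application-specific rule

    def edge_rule(sensor_id, readings):
        """Aggregate raw readings locally; forward only what the data centre needs."""
        summary = {"sensor": sensor_id, "avg": mean(readings), "max": max(readings)}
        # Route the summary upstream only when the rule fires; otherwise keep it local.
        summary["forward_to_datacentre"] = summary["max"] > TEMP_ALERT_THRESHOLD
        return summary

    if __name__ == "__main__":
        print(edge_rule("gateway-7/temp-3", [68.2, 71.5, 77.9]))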

