Wednesday, March 24, 2010

Study of Infiniband Technology

When I first saw InfiniBand, it felt a bit like reading the ATM spec back in the day: "this should become the mainstream technology of the future and replace Ethernet." Yet ATM never became mainstream, and today it is used mostly in WAN applications. That said, InfiniBand's goals are actually quite different from ATM's.
On paper, it does not look that hard for a new technology to supersede an existing one. The truly hard part is attracting more vendors to participate and offering enough advantages to win over users, so that the technology can really take off.
These were my first impressions of InfiniBand.

InfiniBand is a switched fabric communications link primarily used in high-performance computing. Its features include quality of service and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high performance I/O nodes such as storage devices.

InfiniBand forms a superset of the Virtual Interface Architecture.


Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand offers point-to-point bidirectional serial links intended for the connection of processors with high-speed peripherals such as disks. It supports several signalling rates and, as with PCI Express, links can be bonded together for additional bandwidth.

Signaling rate


The serial connection's signalling rate is 2.5 gigabit per second (Gbit/s) in each direction per connection. InfiniBand supports double (DDR) and quad data rate (QDR) speeds, for 5 Gbit/s or 10 Gbit/s respectively, at the same data-clock rate.

Links use 8B/10B encoding — every 10 bits sent carry 8 bits of data — making the useful data transmission rate four-fifths the raw rate. Thus single, double, and quad data rates carry 2, 4, or 8 Gbit/s of useful data respectively.

Implementers can aggregate links in units of 4 or 12, called 4X or 12X. A quad-rate 12X link therefore carries 120 Gbit/s raw, or 96 Gbit/s of useful data. As of 2009, most systems use either a 4X 10 Gbit/s (SDR), 20 Gbit/s (DDR) or 40 Gbit/s (QDR) connection. Larger systems with 12X links are typically used for cluster and supercomputer interconnects and for inter-switch connections.
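To make the rate arithmetic above concrete, here is a small C sketch (the helper name and the printed table are my own illustration, not from the spec) that derives the useful data rate from the per-lane raw signaling rate and the link width, assuming 8B/10B encoding:

#include <stdio.h>

/* Illustrative helper: useful InfiniBand data rate for a bonded link,
 * assuming 8B/10B encoding (SDR/DDR/QDR), i.e. 8/10 of the raw rate. */
static double useful_gbps(double lane_gbps, int lanes)
{
    double raw = lane_gbps * lanes;   /* aggregate raw signaling rate */
    return raw * 8.0 / 10.0;          /* strip 8B/10B encoding overhead */
}

int main(void)
{
    const char *names[]   = { "SDR", "DDR", "QDR" };
    double lane_rates[]   = { 2.5, 5.0, 10.0 };   /* Gbit/s per lane, raw */
    int    widths[]       = { 1, 4, 12 };         /* 1X, 4X, 12X */

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            printf("%s %2dX: %6.1f Gbit/s raw, %5.1f Gbit/s useful\n",
                   names[i], widths[j],
                   lane_rates[i] * widths[j],
                   useful_gbps(lane_rates[i], widths[j]));
    return 0;
}

Running it reproduces the figures quoted above, e.g. a 12X QDR link at 120 Gbit/s raw and 96 Gbit/s useful.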

Latency

The single data rate switch chips have a latency of 200 nanoseconds, and DDR switch chips have a latency of 140 nanoseconds. The end-to-end latency ranges from 1.07 microseconds MPI latency (Mellanox ConnectX HCAs) to 1.29 microseconds MPI latency (QLogic InfiniPath HTX HCAs) to 2.6 microseconds (Mellanox InfiniHost III HCAs). As of 2009, various InfiniBand host channel adapters (HCAs) exist on the market, each with different latency and bandwidth characteristics. InfiniBand also provides RDMA capabilities for low CPU overhead. The latency for RDMA operations is less than 1 microsecond (Mellanox ConnectX HCAs).

Topology

InfiniBand uses a switched fabric topology, as opposed to a hierarchical switched network like Ethernet.

As in the channel model used in most mainframe computers, all transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service.
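As a concrete illustration of the channel-adapter model, the sketch below enumerates the HCAs visible to a host and queries their port state. It assumes a Linux machine with the OpenFabrics libibverbs library installed (build with something like gcc list_hca.c -libverbs); it is only a minimal sketch of the verbs API, not part of the InfiniBand specification itself.

#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);  /* all local HCAs */
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            printf("HCA %s: %d physical port(s)\n",
                   ibv_get_device_name(devs[i]), dev_attr.phys_port_cnt);

            for (uint8_t p = 1; p <= dev_attr.phys_port_cnt; p++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, p, &port_attr) == 0)
                    printf("  port %u: LID 0x%04x, state %d\n",
                           (unsigned)p, port_attr.lid, (int)port_attr.state);
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}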

Messages

InfiniBand transmits data in packets of up to 4 kB that are taken together to form a message. A message can be:

a direct memory access read from, or write to, a remote node (RDMA);
a channel send or receive;
a transaction-based operation (that can be reversed);
a multicast transmission; or
an atomic operation.
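To show how one of these message types looks in practice, here is a hedged sketch of posting an RDMA write as a single work request through libibverbs. The function name is mine, and it assumes the caller has already created and connected a reliable-connected queue pair, registered the local buffer as a memory region, and learned the peer's buffer address and rkey out of band; it is a sketch under those assumptions, not a complete program.

#include <string.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/* Sketch: post a one-sided RDMA write on an already-connected QP. */
int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local source buffer */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,               /* local protection key */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;                  /* returned in the completion */
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write, no remote CPU */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;        /* peer buffer, exchanged out of band */
    wr.wr.rdma.rkey        = rkey;               /* peer's remote key */

    return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
}

Because the write completes without involving the remote CPU, this is the kind of operation behind the low-overhead, sub-microsecond RDMA latencies mentioned earlier.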

Reference:

http://en.wikipedia.org/wiki/InfiniBand

http://www.infinibandta.org/index.php
