Wednesday, March 24, 2010

Study of InfiniBand Technology

When I first looked at InfiniBand, it felt much like seeing the ATM spec back in the day: "this should become the mainstream technology of the future and replace Ethernet." Yet ATM never became mainstream, and today it survives mainly in WAN applications. To be fair, though, InfiniBand's goals are not the same as ATM's.

Superseding an existing technology on paper does not look all that hard. The genuinely hard part is attracting more vendors to participate and offering enough advantages to win over users, so that the technology can really take off.

These were my first impressions of InfiniBand.

InfiniBand is a switched fabric communications link primarily used in high-performance computing. Its features include quality of service and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high performance I/O nodes such as storage devices.

InfiniBand forms a superset of the Virtual Interface Architecture.

 

Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand offers point-to-point bidirectional serial links intended for the connection of processors with high-speed peripherals such as disks. It supports several signalling rates and, as with PCI Express, links can be bonded together for additional bandwidth.

Signaling rate


The serial connection's signalling rate is 2.5 gigabits per second (Gbit/s) in each direction per connection. InfiniBand supports double (DDR) and quad data rate (QDR) speeds, for 5 Gbit/s or 10 Gbit/s respectively, at the same data-clock rate.

Links use 8B/10B encoding (every 10 bits sent carry 8 bits of data), making the useful data transmission rate four-fifths the raw rate. Thus single, double, and quad data rates carry 2, 4, or 8 Gbit/s of useful data respectively.

Implementers can aggregate links in units of 4 or 12, called 4X or 12X. A quad-rate 12X link therefore carries 120 Gbit/s raw, or 96 Gbit/s of useful data. As of 2009, most systems use either a 4X 10 Gbit/s (SDR), 20 Gbit/s (DDR), or 40 Gbit/s (QDR) connection. Larger systems with 12X links are typically used for cluster and supercomputer interconnects and for inter-switch connections.
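
As a quick check on these figures, the useful rate is simply the raw rate times 8/10:

4X DDR: 4 lanes x 5 Gbit/s = 20 Gbit/s raw; 20 x 8/10 = 16 Gbit/s useful
12X QDR: 12 lanes x 10 Gbit/s = 120 Gbit/s raw; 120 x 8/10 = 96 Gbit/s useful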

Latency

The single data rate switch chips have a latency of 200 nanoseconds, and DDR switch chips have a latency of 140 nanoseconds. The end-to-end MPI latency ranges from 1.07 microseconds (Mellanox ConnectX HCAs) to 1.29 microseconds (Qlogic InfiniPath HTX HCAs) to 2.6 microseconds (Mellanox InfiniHost III HCAs). As of 2009, various InfiniBand host channel adapters (HCAs) exist on the market, each with different latency and bandwidth characteristics. InfiniBand also provides RDMA capabilities for low CPU overhead. The latency for RDMA operations is less than 1 microsecond (Mellanox ConnectX HCAs).

Topology

InfiniBand uses a switched fabric topology, as opposed to a hierarchical switched network like Ethernet.

As in the channel model used in most mainframe computers, all transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service.

Messages

InfiniBand transmits data in packets of up to 4 kB that are taken together to form a message. A message can be:

a direct memory access read from, or write to, a remote node (RDMA)

a channel send or receive

a transaction-based operation (that can be reversed)

a multicast transmission

an atomic operation

Reference:

http://en.wikipedia.org/wiki/InfiniBand

http://www.infinibandta.org/index.php

Wednesday, March 17, 2010

Understanding Layer 2 Trunk Failover

Layer 2 trunk failover, also known as link-state tracking, is a feature that provides Layer 2 redundancy in the network when used with server NIC adapter teaming. When the server network adapters are configured in a primary or secondary relationship known as teaming, if the link is lost on the primary interface, connectivity is transparently switched to the secondary interface.

When you enable Layer 2 trunk failover on the switch, the link states of the internal downstream ports are bound to the link states of one or more of the external upstream ports. An internal downstream port is an interface that is connected to the server. An external upstream port is an interface that is connected to the external network. When you associate a set of downstream ports with a set of upstream ports, if all of the upstream ports become unavailable, trunk failover automatically puts all of the associated downstream ports in an error-disabled state. This causes the server's primary interface to fail over to the secondary interface.

When Layer 2 trunk failover is not enabled, if the upstream interfaces lose connectivity (the external switch or router goes down, the cables are disconnected, or link is lost), the link states of the downstream interfaces remain unchanged. The server is not aware that external connectivity has been lost and does not fail over to the secondary interface.

An interface can be an aggregation of ports (an EtherChannel), or a single physical port in access or trunk mode. Each downstream interface can be associated with one or more upstream interfaces. Upstream interfaces can be bundled together, and each downstream interface can be associated with a single group consisting of multiple upstream interfaces. These groups are referred to as link-state groups.

In a link-state group, the link states of the downstream interfaces are dependent on the link states of the upstream interfaces. If all of the upstream interfaces in a link-state group are in the link-down state, the associated downstream interfaces are forced into the link-down state. If any one of the upstream interfaces in the link-state group is in a link-up state, the associated downstream interfaces can change to or remain in the link-up state.

Figure 28-4 Typical Layer 2 Trunk Failover Configuration


In Figure 28-4, downstream interfaces 1, 3, and 5 are defined in link-state group 1 with upstream interfaces 19 and 20. Similarly, downstream interfaces 2, 4, and 6 are defined in link-state group 2 with upstream interfaces 21 and 22.

If link is lost on upstream interface 19, the link states of downstream interfaces 1, 3, and 5 do not change. If upstream interface 20 also loses link, downstream interfaces 1, 3, and 5 go into a link-down state. Downstream interfaces 2, 4, and 6 do not change states.

You can recover a downstream interface link-down condition by removing the failed downstream port from the link-state group. To recover multiple downstream interfaces, disable the link-state group.
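
A minimal sketch of the two recovery options just described, assuming the failed downstream port is gi0/1 in link-state group 1:

! Remove the failed downstream port from the link-state group
Switch(config)# interface gigabitethernet0/1
Switch(config-if)# no link state group 1 downstream
! Or, to recover multiple downstream interfaces, disable the whole group
Switch(config)# no link state track 1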

Configuring Layer 2 Trunk Failover

These sections describe how to configure trunk failover ports:

Default Layer 2 Trunk Failover Configuration

Layer 2 Trunk Failover Configuration Guidelines

Configuring Layer 2 Trunk Failover

Default Layer 2 Trunk Failover Configuration

There are no link-state groups defined, and trunk failover is not enabled for any group.

Layer 2 Trunk Failover Configuration Guidelines

Follow these guidelines to avoid configuration problems:

Do not configure a cross-connect interface (gi0/23 or gi0/24) as a member of a link-state group.

Do not configure an EtherChannel as a downstream interface.

Only interfaces gi0/1 through gi0/16 can be configured as downstream ports in a specific link-state group.

Only interfaces gi0/17 through gi0/24 can be configured as upstream ports in a specific link-state group.

An interface that is defined as an upstream interface cannot also be defined as a downstream interface in the same or a different link-state group. The reverse is also true.

An interface cannot be a member of more than one link-state group.

You can configure only two link-state groups per switch.

Configuring Layer 2 Trunk Failover

Beginning in privileged EXEC mode, follow these steps to configure a link-state group and to assign an interface to a group:

This example shows how to create a link-state group and configure the interfaces:

Switch# configure terminal
Switch(config)# link state track 1
Switch(config)# interface range gigabitethernet0/21 - 22
Switch(config-if-range)# link state group 1 upstream
Switch(config-if-range)# interface gigabitethernet0/1
Switch(config-if)# link state group 1 downstream
Switch(config-if)# interface gigabitethernet0/3
Switch(config-if)# link state group 1 downstream
Switch(config-if)# interface gigabitethernet0/5
Switch(config-if)# link state group 1 downstream
Switch(config-if)# end

Note: If the interfaces are part of an EtherChannel, you must specify the port-channel name as part of the link-state group, not the individual port members.

This example shows how to create a link-state group using ports in an EtherChannel:

Switch# configure terminal
Switch(config)# link state track 1
Switch(config)# interface Po1
Switch(config-if)# link state group 1 upstream
Switch(config-if)# interface range gigabitethernet0/1, gigabitethernet0/3, gigabitethernet0/5
Switch(config-if-range)# link state group 1 downstream
Switch(config-if-range)# end

To disable a link-state group, use the no link state track number global configuration command.
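
For example, to disable link-state group 1 using that command:

Switch(config)# no link state track 1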

Displaying Layer 2 Trunk Failover Status

Use the show link state group command to display the link-state group information. Enter this command without keywords to display information about all link-state groups. Enter the group number to display information specific to the group. Enter the detail keyword to display detailed information about the group.

This is an example of output from the show link state group 1 command:

Switch> show link state group 1

Link State Group: 1      Status: Enabled, Up

This is an example of output from the show link state group detail command:

Switch> show link state group detail

Link State Group: 1      Status: Enabled, Up
Upstream Interfaces   : Po1(Up)
Downstream Interfaces : Gi0/3(Up) Gi0/4(Up)

Link State Group: 2      Status: Disabled, Down
Upstream Interfaces   :
Downstream Interfaces :

(Up):Interface up   (Dwn):Interface Down   (Dis):Interface disabled

This feature can be used in a loop-free topology to avoid the traffic black-holing caused when the access layer loses its uplinks.

Reference:

http://www.cisco.com/en/US/docs/switches/blades/3020/software/release/12.2_25_sef1/configuration/guide/swethchl.html#wp1346176

Wednesday, March 3, 2010

Using “vpn-filter” to restrict access from RA VPN or EZVPN client

 

By default, the VPN server (PIX/ASA) allows all traffic that is tunneled to it to exit the tunnel. To restrict traffic going through the tunnel, you can use the vpn-filter value acl command under the group-policy.
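
For example, a minimal sketch of attaching a filter ACL to a group policy (the group-policy name RA-POLICY is a placeholder; ACL 103 is the one used in the example below):

group-policy RA-POLICY attributes
 vpn-filter value 103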

Be careful: this ACL restricts traffic not only in the direction from client to server but also from server to client.

 

See the diagram below for details. Because the ACL applies in both directions, use specific IPs/subnets as the source within the ACL. If the range is too broad, unexpected traffic might also be filtered.

Suppose you want to permit traffic from the client only to the internal DNS server (172.16.1.1) and configure the ACL for vpn-filter as below:

access-list 103 extended permit udp any host 172.16.1.1 eq 53

This will not only restrict traffic from the client to the DNS server on udp/53 but also the return traffic from the server to the client. The client will never get the response from the server, because the vpn-filter also blocks it in the reverse direction.

[Diagram: remote VPN client tunneled to the PIX/ASA, with the internal DNS server 172.16.1.1 behind it]
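
Following the advice above to use specific sources, a hedged sketch that scopes the rule to the client subnet (the VPN client pool 192.168.10.0/24 is an assumption):

access-list 103 extended permit udp 192.168.10.0 255.255.255.0 host 172.16.1.1 eq 53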

Notes:
When I tested this with the MS L2TP VPN client, the ACL did not take effect as expected until I re-established the VPN tunnel. In the worst case, I even needed to reboot the client.

Reference:

http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_configuration_example09186a00808c9a87.shtml