Oracle RAC over InfiniBand
The benefits of running Oracle RAC over InfiniBand, with details of the current implementation of this solution
Date: 06 March 2007, 16:00
The performance and scalability of applications in today’s typical data center depend on an efficient communication facility.
• Increases throughput—Standard servers connect to the InfiniBand network through a PCI-X or PCI-E host channel adapter (HCA). Each HCA has two 10-Gbps ports, for an aggregate throughput of 20 Gbps—a 20x bandwidth improvement over a typical Gigabit Ethernet card.
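The bandwidth comparison above can be checked with a few lines of arithmetic. This is purely an illustration of the figures quoted in the bullet; the variable names are ours, not part of any InfiniBand API.

```python
# Hypothetical illustration of the HCA vs. Gigabit Ethernet comparison above.
hca_port_gbps = 10           # each InfiniBand HCA port runs at 10 Gbps
hca_ports = 2                # the HCA has two such ports
gige_gbps = 1                # a typical Gigabit Ethernet NIC

aggregate_gbps = hca_port_gbps * hca_ports   # 20 Gbps across both ports
improvement = aggregate_gbps / gige_gbps     # 20x vs. a single GigE link

print(f"Aggregate HCA throughput: {aggregate_gbps} Gbps ({improvement:.0f}x GigE)")
```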
• Reduces CPU load—InfiniBand uses RDMA, which passes send and receive buffers directly to the application. This bypasses the operating system kernel, eliminates CPU-intensive memory-copy operations, and leaves cycles free for other work. As a result, application performance improves on existing database and application servers.
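The copy-avoidance idea behind RDMA can be sketched in plain Python. This is a loose analogy, not actual RDMA code: a kernel-mediated receive hands the application its own copy of the data, while RDMA-style delivery exposes the original buffer directly, with no intermediate copy whose cost grows with message size.

```python
# Loose analogy only (not InfiniBand/RDMA code): contrast a copied buffer
# with a zero-copy view over the same memory.
data = bytearray(b"payload-from-the-wire")

# Copy path: the application receives an independent copy; the CPU spends
# time moving every byte.
copied = bytes(data)

# Zero-copy path: a memoryview aliases the same memory; no bytes are moved.
view = memoryview(data)

# Later writes into the underlying buffer are visible through the view,
# but not through the copy.
data[0:7] = b"UPDATED"
print(bytes(view[:7]))   # b'UPDATED' -- the view aliases the buffer
print(copied[:7])        # b'payload' -- the copy is a separate allocation
```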
• Uses efficient protocols—By using the IP over InfiniBand (IPoIB) stack on an InfiniBand-attached server, a customer can achieve throughput of 200 Mbps or more between servers. In a clustered Oracle grid environment, this makes InfiniBand a compelling technology because it provides an easy-to-implement means of improving Cache Fusion performance.
An Oracle architect can estimate the expected performance improvements with benchmarks such as netperf, netio, and IOmeter. However, because most applications are customer-specific, the target environment should be evaluated before taking an Oracle grid into production, to better understand how the actual application workload scales.
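The kind of measurement tools like netperf perform can be sketched as a minimal loopback microbenchmark: stream a known payload over a TCP socket and divide bytes received by elapsed time. This is a hypothetical illustration, not a replacement for the tools named above; the function names and the loopback setup are our own assumptions, and a real evaluation would run between two servers over the actual interconnect.

```python
# Minimal, hypothetical sketch of a netperf-style throughput measurement,
# run entirely over the local loopback interface.
import socket
import threading
import time


def _sink(server: socket.socket, total: list) -> None:
    """Accept one connection and count every byte received."""
    conn, _ = server.accept()
    with conn:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            total[0] += len(chunk)


def loopback_throughput(payload_mb: int = 32) -> float:
    """Stream payload_mb megabytes over loopback TCP; return MB/s achieved."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))        # any free port
    server.listen(1)
    total = [0]
    receiver = threading.Thread(target=_sink, args=(server, total))
    receiver.start()

    buf = b"x" * (1 << 20)               # 1 MB send buffer
    start = time.perf_counter()
    with socket.create_connection(server.getsockname()) as client:
        for _ in range(payload_mb):
            client.sendall(buf)
    receiver.join()                      # wait until all bytes are drained
    server.close()
    elapsed = time.perf_counter() - start
    return (total[0] / (1 << 20)) / elapsed


if __name__ == "__main__":
    print(f"{loopback_throughput():.0f} MB/s over loopback")
```

Loopback numbers only show the software overhead of the socket path; the same measurement between two InfiniBand-attached hosts is what reveals the interconnect's contribution.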