HPI-DC'05 Panel: The Future (of) RDMA

RDMA technology is becoming more widespread and increasingly available in network interconnects. The low latencies achievable through RDMA are very desirable. For adoption in high-end parallel computing, however, issues related to buffer management, scalability, and standards need to be considered. We seek each panelist's opinions and ideas on the following questions:

1. Buffer management

Receiver-side buffer management has been shown to be successful and appropriate for high-performance computing: the receiving application specifies where it wants incoming data to land. At the same time, zero-copy, OS bypass, and independent progress of the application are important too. How does RDMA fit into this world? Specifically, how can MPI get all the benefits offered by RDMA and achieve zero-copy, OS bypass, and independent progress? Is this possible now, in light of receives that may not have been posted yet, or applications doing long computations without calling the MPI library? Would changes in RDMA technology or APIs make this integration easier and more efficient?

2. Scalability

With systems of 10,000 nodes and more, scalability becomes an important issue. The number and size of buffers needed on a node for RDMA should not grow linearly with the number of nodes in the system. Similarly, the amount of state (for RDMA keys, for example) should not grow linearly with the number of nodes either. RDMA technologies and APIs designed for clusters of a few hundred nodes may not scale to tens and hundreds of thousands of nodes. What can be done to make RDMA a viable player in networks for these high-end machines?

3. Future of RDMA (standards)

Where is RDMA going, and how can it benefit HPC? Will there be standards that can be easily adopted by traditional programming models, e.g. MPI? Will the mass market dictate standards and technology choices that are detrimental to HPC?