High Performance / High Throughput Computing

The terms HPC/HTC cover a range of architectures, all aimed at one goal: getting more “computing” done in less time. The most common HPC architecture uses a number of compute servers (usually called “compute nodes”) together with a software layer that distributes jobs across them. Compute jobs fall into two main categories: “task parallel” and “data parallel”.

Task-parallel jobs simply distribute independent tasks across the compute nodes; each task uses the computing resources of a single node. The distribution itself is usually handled by Distributed Resource Management (DRM) software.

Data-parallel jobs allow a single large data model to be processed by many compute nodes at once. This usually requires the programmer to develop the code with a specific data-parallel toolkit. There are several kinds of such toolkits (rarely labeled that way), each solving a different class of data-parallel problem, with data sizes ranging from gigabytes to petabytes. The contrast between the two patterns is sketched below.
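A minimal sketch of the two patterns, using Python's standard process pool on one machine as a stand-in for a real DRM or data-parallel toolkit (the worker functions and the eight-task/eight-chunk sizes are illustrative, not from any particular system):

    from concurrent.futures import ProcessPoolExecutor

    # Task parallel: a self-contained job a DRM would place on one node.
    def render_frame(frame_id):
        return f"frame {frame_id} done"

    # Data parallel: each worker handles one slice of a shared data set.
    def process_chunk(chunk):
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # Task parallel: 8 unrelated jobs, scheduled independently.
            print(list(pool.map(render_frame, range(8))))

            # Data parallel: one data set, partitioned into 8 chunks,
            # processed cooperatively and reduced to a single result.
            data = list(range(1_000_000))
            chunks = [data[i::8] for i in range(8)]
            print(sum(pool.map(process_chunk, chunks)))

In the task-parallel case each job is independent and needs no communication; in the data-parallel case the final reduction is the point where the partial results must meet, which is exactly where inter-node communication costs appear on a real cluster.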

Different HPC/HTC problems are sensitive to different things: the latency or the bandwidth of the networking and/or storage subsystems. For instance, a data-parallel task that requires a large amount of inter-node communication is usually sensitive to network latency. Tasks that process a very large model held on a shared distributed storage system are sensitive to storage bandwidth, and some workloads are also sensitive to storage latency.
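The latency/bandwidth split can be reasoned about with the usual first-order cost model, transfer time ≈ latency + size / bandwidth. A rough sketch (the two link profiles below are illustrative numbers, not measurements of any specific product):

    def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
        # First-order model: fixed latency plus serialization time.
        return latency_s + size_bytes / bandwidth_bytes_per_s

    # Illustrative links: commodity Ethernet vs. a low-latency fabric.
    links = {"commodity (50us, 1.25GB/s)": (50e-6, 1.25e9),
             "fabric    (2us,  4GB/s)":    (2e-6, 4e9)}

    for size in (1_000, 1_000_000, 1_000_000_000):  # 1KB, 1MB, 1GB
        for name, (lat, bw) in links.items():
            t = transfer_time(size, lat, bw)
            print(f"{size:>13,} B over {name}: {t * 1e6:>12,.1f} us")

Running this shows the crossover: small messages are dominated by the fixed latency term, while large transfers are dominated by bandwidth, which is why chatty data-parallel codes care about latency and bulk model I/O cares about bandwidth.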

Network Latency and Bandwidth

For workloads where “standard” networking technologies are not enough, a high-bandwidth / low-latency networking technology may be required. The key to truly low latency is always proper use of Remote Direct Memory Access (RDMA). The basic idea is to take the server processor out of the data-transfer path and let the network adapter move data directly between the memory of the communicating hosts. As data rates grow, RDMA becomes increasingly important for high-performance networking. There are numerous technologies, but three of them are standards-based with more than one manufacturer.

10Gb Ethernet (and 40Gb Ethernet)

Faster Ethernet with standard TCP/IP is fast enough for many workloads. It is very common and competitively priced. RDMA over Ethernet exists (e.g. iWARP and RoCE) but is not very common with 10GbE and 40GbE.

Infiniband

While Ethernet has recently approached the bandwidth of Infiniband, 40GbE is still very rare on the compute node, and RDMA-based protocols are more common and more mature on Infiniband. Infiniband is also very competitively priced for its high bandwidth and low latency. At the compute-node level, the most common data rate is QDR, providing about 40Gb/s (32Gb/s of useful data after line encoding) with switch latencies on the order of 100 nanoseconds. A 12x EDR link may reach about 300Gb/s of useful data.
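The quoted figures follow directly from the per-lane signaling rates and line encodings defined by the Infiniband standard (QDR: 10Gb/s per lane with 8b/10b encoding; EDR: 25.78125Gb/s per lane with 64b/66b encoding). The helper below is just the arithmetic:

    def useful_rate(lanes, gbaud_per_lane, payload_bits, total_bits):
        # Useful data rate after line-encoding overhead, in Gb/s.
        return lanes * gbaud_per_lane * payload_bits / total_bits

    # QDR: 4 lanes x 10Gb/s signaling, 8b/10b -> 32Gb/s useful.
    print(useful_rate(4, 10.0, 8, 10))        # 32.0

    # EDR: 12 lanes x 25.78125Gb/s, 64b/66b -> ~300Gb/s useful.
    print(useful_rate(12, 25.78125, 64, 66))  # 300.0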

PCI-Express 3.0

At its core, PCI-Express has always been a switched network protocol. PCIe 3.0 adds optical cabling to the standard, enabling a fast, standards-based way to connect remote devices that in the past required a separate network.

High Performance Storage

There are many solutions for high-performance storage, providing either very high throughput (many GB/s) or very low latency (down to 20-100 microseconds). With storage it is more difficult to get both at once, especially when a shared pool of data is needed, as is common for HPC/HTC applications. High-bandwidth technologies include high-density storage systems and shared distributed file systems. Consistently low latency usually requires memory-based storage systems (RAM-based, flash-based, or a combination).
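A crude way to see the two axes on any mounted file system is to time small random reads (latency) and one large sequential pass (throughput). This is a minimal sketch, not a rigorous benchmark: it probes a freshly written temporary file, so the page cache will dominate the results unless it is pointed at the real shared storage with caches dropped first. It is also Unix-only (it uses os.pread).

    import os, time, tempfile

    # Create a 64MB test file (illustrative size only).
    size = 64 * 1024 * 1024
    path = os.path.join(tempfile.gettempdir(), "io_probe.bin")
    with open(path, "wb") as f:
        f.write(os.urandom(size))

    fd = os.open(path, os.O_RDONLY)

    # Latency: time many small (4KB) reads at scattered offsets.
    n = 1000
    t0 = time.perf_counter()
    for i in range(n):
        os.pread(fd, 4096, (i * 104729 * 4096) % (size - 4096))
    lat = (time.perf_counter() - t0) / n
    print(f"avg small-read latency: {lat * 1e6:.1f} us")

    # Throughput: one pass of large (8MB) sequential reads.
    t0 = time.perf_counter()
    off = 0
    while off < size:
        off += len(os.pread(fd, 8 * 1024 * 1024, off))
    bw = size / (time.perf_counter() - t0)
    print(f"sequential read throughput: {bw / 1e9:.2f} GB/s")

    os.close(fd)
    os.remove(path)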
