Samsung HBM-PIM Memory Accelerates Artificial Intelligence

Samsung Electronics today announced the development of the industry's first high bandwidth memory with embedded processing circuits: HBM-PIM (High Bandwidth Memory with Processing-In-Memory). The memory is said to accelerate artificial intelligence workloads in data centers, supercomputers, and mobile devices.


The core of most modern computing systems is the von Neumann architecture, in which data storage and processing are separated. This approach requires constant movement of data, which leads to system slowdown, especially noticeable when processing large amounts of data.
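The bottleneck described above can be sketched with a toy throughput model. All numbers below are illustrative assumptions chosen for the example, not Samsung figures: the point is only that for workloads with low arithmetic intensity, the time spent moving data across the memory bus dwarfs the time spent computing on it.

```python
# Toy model of the von Neumann bottleneck: total runtime splits into a
# data-movement term and a compute term. For memory-bound workloads the
# movement term dominates. All parameters are illustrative assumptions.

def run_times_s(data_bytes: float, bandwidth_bytes_per_s: float,
                flops: float, flops_per_s: float):
    """Return (transfer_time, compute_time) in seconds."""
    transfer = data_bytes / bandwidth_bytes_per_s  # moving data to the CPU
    compute = flops / flops_per_s                  # arithmetic on the data
    return transfer, compute

# Assumed example: 1 GiB of data, a 400 GB/s HBM-class link,
# 1 FLOP per byte of data, and a 10 TFLOP/s processor.
transfer, compute = run_times_s(2**30, 400e9, 2**30, 10e12)

print(f"transfer: {transfer*1e3:.2f} ms, compute: {compute*1e3:.3f} ms")
```

Under these assumed numbers the transfer term is more than an order of magnitude larger than the compute term, which is exactly the gap that moving computation into the memory itself is meant to close.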

In HBM-PIM, the compute resource sits directly at the storage location: a DRAM-optimized "AI engine" is placed in every memory bank, enabling parallel processing and minimizing data movement. Compared to Samsung's HBM2 Aquabolt memory, the new architecture is said to more than double system performance while reducing power consumption by more than 70%. At the same time, HBM-PIM requires no changes to existing hardware or software, which should ease its integration into current systems.
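The two claims above can be combined into a back-of-the-envelope energy estimate, since energy per task is power multiplied by runtime. This sketch assumes the figures hold exactly (a flat 2x speedup and a flat 70% power reduction); the baseline power and runtime values are hypothetical placeholders.

```python
# Back-of-the-envelope energy-per-task comparison based on the article's
# claims. Assumptions: exactly 2x speedup, exactly 70% lower power.
baseline_power_w = 10.0   # hypothetical baseline power draw
baseline_time_s = 1.0     # hypothetical baseline runtime per task

pim_power_w = baseline_power_w * (1 - 0.70)  # 70% power reduction
pim_time_s = baseline_time_s / 2.0           # 2x performance

baseline_energy_j = baseline_power_w * baseline_time_s
pim_energy_j = pim_power_w * pim_time_s

print(pim_energy_j / baseline_energy_j)  # 0.15 of baseline energy
```

Taken at face value, the claims compound: halving the runtime while cutting power by 70% would leave roughly 15% of the baseline energy per task, an ~85% reduction.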

The new memory is currently being tested in artificial intelligence accelerators by leading Samsung partners. Testing is expected to be completed within the first half of the year.

