Samsung Electronics today announced the development of the industry's first high bandwidth memory with integrated processing circuits, HBM-PIM (High Bandwidth Memory with Processing-In-Memory). The new memory is designed to accelerate artificial intelligence workloads in data centers, supercomputers, and mobile devices.

The core of most modern computing systems is the von Neumann architecture, in which data storage and data processing are separated. This approach requires data to be moved back and forth constantly, which slows the system down, a bottleneck that becomes especially noticeable when processing large volumes of data.
In HBM-PIM, computing capability sits directly where the data is stored: a DRAM-optimized "AI engine" is built into each memory bank, enabling parallel processing and reducing the need to move data. Compared with Samsung's HBM2 Aquabolt memory, the new architecture can more than double system performance while cutting energy consumption by more than 70%. At the same time, HBM-PIM requires no changes to hardware or software, which makes it easier to integrate into existing systems.
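To illustrate the idea in the abstract, the toy sketch below contrasts a conventional flow, in which every word of data is shipped across the memory bus to the host before being reduced, with a PIM-style flow in which each bank computes a partial result locally and only that result crosses the bus. The bank count, data sizes, and the summation workload are illustrative assumptions for the sketch, not details of Samsung's actual design.

```python
# Toy model of the data-movement difference between a conventional
# (von Neumann) flow and a PIM-style flow. All numbers are illustrative.

NUM_BANKS = 16          # assumed number of memory banks (illustrative)
WORDS_PER_BANK = 1024   # assumed words stored in each bank (illustrative)

# Data resident "in memory": one list of words per bank.
banks = [[i * WORDS_PER_BANK + j for j in range(WORDS_PER_BANK)]
         for i in range(NUM_BANKS)]

def conventional_sum(banks):
    """Host fetches every word over the bus, then reduces."""
    words_moved = 0
    total = 0
    for bank in banks:
        for word in bank:          # each word crosses the memory bus
            words_moved += 1
            total += word
    return total, words_moved

def pim_style_sum(banks):
    """Each bank reduces its own data; only partial sums cross the bus."""
    words_moved = 0
    total = 0
    for bank in banks:
        partial = sum(bank)        # reduction happens "inside" the bank
        words_moved += 1           # only the partial result is transferred
        total += partial
    return total, words_moved

if __name__ == "__main__":
    total_a, moved_a = conventional_sum(banks)
    total_b, moved_b = pim_style_sum(banks)
    assert total_a == total_b
    print(f"conventional: {moved_a} words moved over the bus")  # 16384
    print(f"pim-style:    {moved_b} words moved over the bus")  # 16
```

Both flows produce the same result, but the PIM-style flow transfers three orders of magnitude fewer words in this toy setup, which is the intuition behind placing compute next to the memory banks.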
The new memory is currently being tested in artificial intelligence accelerators by leading Samsung partners, and testing is expected to be completed within the first half of the year.
