Samsung has just introduced its latest step in HBM technology: HBM-PIM, with the 'PIM' standing for processing-in-memory. But what exactly does that mean?
The new HBM-PIM technology allows for some programmability within the memory layer, thanks to an embedded "DRAM-optimized" AI engine inside the memory banks. This AI engine is called a PCU, or Programmable Compute Unit, and it can handle data between your CPU and memory in a parallelized way.
This matters because most computing systems today are based on the von Neumann architecture, which separates the processor and memory units, so data has to shuttle back and forth for millions of individual processing tasks. As you can imagine, that link can get congested, so HBM-PIM instead places processing power directly inside each memory bank via the DRAM-optimized AI engine.
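The data-movement saving can be sketched with a toy Python model. This is purely illustrative and not Samsung's actual design: it just contrasts the von Neumann pattern, where every value crosses the CPU-memory bus, with a PIM-style pattern, where each bank's compute unit works locally and only per-bank results cross the bus.

```python
# Toy model of the bus-traffic difference between CPU-side compute
# (von Neumann) and per-bank in-memory compute (PIM-style).
# "banks" stands in for DRAM banks; the transfer counts are the point.

def von_neumann_sum(banks):
    """CPU-side reduction: every element travels over the memory bus."""
    transferred = 0
    total = 0
    for bank in banks:
        for value in bank:       # each value crosses the CPU<->memory bus
            transferred += 1
            total += value
    return total, transferred

def pim_sum(banks):
    """PIM-style reduction: each bank's compute unit sums locally,
    so only one partial result per bank crosses the bus."""
    transferred = 0
    total = 0
    for bank in banks:
        partial = sum(bank)      # computed "inside" the bank
        transferred += 1         # only the partial result is transferred
        total += partial
    return total, transferred

banks = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
print(von_neumann_sum(banks))  # (78, 12): same answer, 12 values moved
print(pim_sum(banks))          # (78, 3): same answer, 3 partial sums moved
```

Same result either way; the PIM-style version simply moves far fewer values across the congested processor-memory link, which is where the bandwidth and power savings come from.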
In testing of Samsung's new HBM-PIM on its existing Aquabolt HBM2 memory, the company saw performance double while power consumption dropped by over 70%, which is impressive for early testing.
Kwangil Park, senior vice president of Memory Product Planning at Samsung Electronics, said: "Our groundbreaking HBM-PIM is the industry's first programmable PIM solution tailored for diverse AI-driven workloads such as HPC, training and inference. We plan to build upon this breakthrough by further collaborating with AI solution providers for even more advanced PIM-powered applications."
Rick Stevens, Argonne's Associate Laboratory Director for Computing, Environment and Life Sciences, said: "I'm delighted to see that Samsung is addressing the memory bandwidth/power challenges for HPC and AI computing. HBM-PIM design has demonstrated impressive performance and power gains on important classes of AI applications, so we look forward to working together to evaluate its performance on additional problems of interest to Argonne National Laboratory."