IBM unveils Telum processor made for AI tasks and fraud detection – Computer – News

I'm not 100% sure, but back when I was doing GPU compute (around the GTX 680 era), core count and clock speed were not the problem; the bottleneck was getting enough data to the cores:

– Each GPU core had absurdly little cache, so you couldn't feed it complex data without an absurd number of extra reads/writes, or without working over a slow bus.
– The GPU cores were far away from the CPU: getting from the CPU to PCIe already took quite a few steps (cache misses), after which you needed even more steps to reach DRAM or storage. Witness this GPU: https://www.anandtech.com…fiji-with-m2-ssds-onboard, an SSD mounted on the GPU just to skip a few of those steps. Nowadays host controller functionality lets a GPU on the same PCH do this without involving the CPU itself; this is also what DirectStorage does.
– Even on the GPU itself, frame-buffer bandwidth is significantly lower than cache bandwidth.
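The bottleneck described above can be sketched with a back-of-envelope calculation: compare the time to push a working set over the bus against the time the GPU needs for the actual arithmetic. All numbers below are rough assumptions for illustration, not measurements of any specific card.

```python
# Back-of-envelope: does PCIe transfer time dwarf compute time?
# Both constants are illustrative assumptions, not measured values.

PCIE3_X16_BPS = 16e9   # ~16 GB/s usable on PCIe 3.0 x16 (assumption)
GPU_FLOPS = 3e12       # ~3 TFLOPS for a GTX 680-class card (assumption)

def transfer_seconds(bytes_moved: float) -> float:
    """Time to push the data over the PCIe bus."""
    return bytes_moved / PCIE3_X16_BPS

def compute_seconds(flops_needed: float) -> float:
    """Time the GPU needs for the raw arithmetic."""
    return flops_needed / GPU_FLOPS

# Example: a 1 GB working set that needs only 10 FLOPs per byte.
data_bytes = 1e9
t_bus = transfer_seconds(data_bytes)        # time spent just moving data
t_gpu = compute_seconds(10 * data_bytes)    # time spent on actual math
print(t_bus / t_gpu)  # bus time dominates by well over an order of magnitude
```

With these (assumed) numbers the bus is slower than the arithmetic by roughly 19x, which is exactly why on-chip cache and fewer hops to memory matter so much.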

These AI accelerators seem to have a lot of cache, plus AI-accelerated prefetch of their own. I can't find whether they run software themselves (the term used is accelerator, but stand-alone also appears), but they seem to be fairly stackable: at 22 billion transistors versus the 28 billion of an RTX 3090 (note: this Telum has a lot of cache compared to a 3090, and cache means many transistors) it is also a smaller chip, so if it is all-in-one you'll be able to pack a lot into a rack. The focus on memory-on-chip (i.e. a useful working set) + bandwidth + smart prefetch makes this an interesting chip, especially when you see which tools (PyTorch among others, I believe) are supported out of the box.
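The "cache = many transistors" point is easy to check with rough arithmetic: a classic 6T SRAM bit cell costs six transistors per bit, so a sizeable on-chip cache eats a large slice of the transistor budget. The cell type and the 32 MB example size below are illustrative assumptions, and the estimate ignores tag arrays and control logic.

```python
# Rough transistor cost of on-chip SRAM cache, assuming classic 6T cells.
# Ignores tag arrays, ECC, and control logic, so this is a lower bound.

BITS_PER_BYTE = 8
TRANSISTORS_PER_SRAM_CELL = 6  # six transistors per stored bit (6T cell)

def cache_transistors(megabytes: float) -> float:
    """Estimate transistors needed for the data array of a cache."""
    total_bits = megabytes * 1024 * 1024 * BITS_PER_BYTE
    return total_bits * TRANSISTORS_PER_SRAM_CELL

# A 32 MB cache slice (hypothetical size) already costs on the order of
# 1.6 billion transistors before any supporting logic.
print(cache_transistors(32) / 1e9)
```

Against a total budget of 22 billion transistors, a few such cache slices explain a big share of the chip, which is why a cache-heavy design can still be smaller than a 28-billion-transistor GPU.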
