• EmmaGoldman [she/her, comrade/them]@hexbear.net · 5 months ago

    The big problems with wafer-scale chips are that they're HUGE, insanely expensive, and each one draws a crazy amount of power: not a problem for server farms. Cerebras's chips cost a few million dollars and draw something on the order of 20–25 kW apiece, but the advantage is that a single unit has roughly the computing power of an entire conventional server room while using far less total energy for the same work, and it takes up only about 15 rack units to do it. That's an insane amount of space savings, under 10% of the footprint of conventional servers (rough numbers below).
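    A quick back-of-envelope on the space claim, in plain Python. The 15U and sub-10% figures are from the comment; 42U per rack is the common full-rack standard; the four-rack conventional baseline is purely an assumption picked for illustration:

    ```python
    # Back-of-envelope for the space savings claim.
    # 15U and "under 10%" come from the comment above; the baseline
    # deployment size is a hypothetical assumption, not a real spec.
    WAFER_SCALE_U = 15                 # rack units for one wafer-scale system
    RACK_U = 42                        # standard full-height rack
    baseline_racks = 4                 # ASSUMED conventional deployment
    baseline_u = baseline_racks * RACK_U   # 168U of conventional servers

    ratio = WAFER_SCALE_U / baseline_u
    print(f"{WAFER_SCALE_U}U vs {baseline_u}U -> {ratio:.0%} of the space")
    # -> 15U vs 168U -> 9% of the space, consistent with "under 10%"
    ```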

• ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml · 5 months ago

    This could actually work really well based on what we've seen with Apple's M-series chips. RISC instructions are fixed-size, so you can fetch a whole batch of instructions at once, check which ones depend on each other, and run the independent ones in parallel. So if you had 1,600 cores, you could process up to 1,600 instructions at a time, and in the best-case scenario execute all of them in parallel. In practice you're only going to be executing some portion of the instructions in parallel at any given moment, but you still get a massive speedup basically for free. It also turns out this design is a lot more energy efficient, so you get longer battery life on mobile devices and lower energy use in server farms. There's a rough sketch of the batching idea below.
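    A minimal toy sketch of that batching idea in plain Python (all names are mine; this is a conceptual model, not how any real dispatcher is implemented): take a fetched batch and group it into "waves" of instructions with no register dependencies between them, so each wave could in principle issue in parallel.

    ```python
    # Toy model: partition a batch of instructions into waves of
    # mutually independent instructions that could issue together.
    from dataclasses import dataclass

    @dataclass
    class Instr:
        op: str
        dst: str       # register written
        srcs: tuple    # registers read

    def schedule(batch):
        """Group a batch into waves; each wave has no internal dependencies."""
        waves = []
        produced_in = {}  # register -> index of the wave that last wrote it
        for ins in batch:
            # Must issue after the wave that writes each source (RAW),
            # and after any earlier write to the same dest (WAW).
            earliest = 0
            for r in ins.srcs:
                earliest = max(earliest, produced_in.get(r, -1) + 1)
            earliest = max(earliest, produced_in.get(ins.dst, -1) + 1)
            while len(waves) <= earliest:
                waves.append([])
            waves[earliest].append(ins)
            produced_in[ins.dst] = earliest
        return waves

    batch = [
        Instr("add", "r1", ("r2", "r3")),
        Instr("mul", "r4", ("r5", "r6")),  # independent of the add
        Instr("sub", "r7", ("r1", "r4")),  # needs both results above
    ]
    for i, wave in enumerate(schedule(batch)):
        print(f"wave {i}: {[ins.op for ins in wave]}")
    # wave 0: ['add', 'mul']   <- issued in parallel
    # wave 1: ['sub']          <- waits for its inputs
    ```

    Real out-of-order cores do the equivalent in silicon with register renaming and reorder buffers (which also removes the write-after-write stalls this toy version keeps), but the grouping logic is the same basic idea.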