Intel kicked off its two-day Innovation 2023 event today, and most of the focus looks to be on artificial intelligence (AI), which is hardly surprising given the technology's potential.
Among other things, the company demoed ChatGPT running in the cloud against a 7-billion-parameter model running locally on a single Xeon system. The system in question is powered by a single 5th Gen Xeon (codenamed "Emerald Rapids") processor. While we are not sure, the 7-billion-parameter model Intel refers to here might be the Falcon LLM.
While GPT and other large language models (LLMs) are very useful and fun to play around with, they are also very demanding in terms of hardware and resources. For example, recent research suggested that ChatGPT "drinks" around half a liter of water for every 20 or so prompts. Financially, a report from earlier this year estimated that running ChatGPT could cost nearly $700,000 per day. Naturally, hardware vendors like Intel, AMD, and Nvidia see an opportunity here, which is why they are designing next-gen solutions with AI acceleration in mind.
Aside from the 5th Gen Xeon demo, Intel also teased some of the performance we can expect from the next-gen 6th Gen "Granite Rapids". The company claims a 2-3x improvement, partly thanks to a memory subsystem upgrade: 6th Gen Xeon moves to 12 channels with support for up to DDR5-8800 MCR DIMMs, compared to 8-channel DDR5-8000 on the 5th Gen. Granite Rapids is scheduled for a 2024 release, while 5th Gen parts are already sampling to customers.
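To put those memory figures in perspective, here is a rough back-of-envelope calculation of theoretical peak bandwidth per socket, using the channel counts and transfer rates above. This is a sketch that assumes standard 64-bit (8-byte) DDR5 channels; real-world sustained bandwidth, and the specifics of how MCR DIMMs reach their effective data rates, will differ.

```python
# Back-of-envelope theoretical peak memory bandwidth per socket.
# Assumption: standard 64-bit (8-byte wide) DDR5 channels; sustained
# bandwidth in practice will be lower than these peak figures.

def peak_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return channels * mt_per_s * 1e6 * bytes_per_transfer / 1e9

gen5 = peak_bandwidth_gbs(8, 8000)    # 5th Gen Xeon: 8-channel DDR5-8000
gen6 = peak_bandwidth_gbs(12, 8800)   # 6th Gen Xeon: 12-channel DDR5-8800

print(f"5th Gen: {gen5:.1f} GB/s")    # 512.0 GB/s
print(f"6th Gen: {gen6:.1f} GB/s")    # 844.8 GB/s
print(f"Uplift:  {gen6 / gen5:.2f}x") # 1.65x
```

By this estimate, the memory upgrade alone accounts for roughly a 1.65x uplift, so the rest of Intel's claimed 2-3x would have to come from core count, architecture, and other changes.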