Around mid-February, the author of the CapFrameX software published results of a test in the game Hogwarts Legacy. According to them, the Core i9-13900K achieved tens of percent higher FPS than the Ryzen 9 7950X, and the GeForce RTX 4090 even several times the performance of the Radeon RX 7900 XTX. This, of course, attracted attention from a number of hardware-focused sites.
It could be overlooked that a number of them missed the fact that the Core i9-13900K was not running in its factory configuration. Harder to overlook is that some of them also missed CapFrameX's subsequent clarification that the originally published numbers were wrong because something was not working correctly on the test system. The results remeasured on a new setup, where the differences between the hardware were significantly smaller, were apparently no longer of sufficient media interest to be worth reporting.
Now CapFrameX has published a result according to which the Ryzen 9 7950X paired with the Radeon RX 7900 XTX achieves performance comparable to the Core i9-13900K combined with the GeForce RTX 4090, i.e. a significantly more expensive graphics card:
However, the result is not so surprising if we take into account the effort AMD devoted to reducing the dependence of graphics card performance on the processor: first during the implementation and tuning of Smart Access Memory, and later during the development of RDNA 3, where it again tried to reduce the GPU's dependence on the CPU.
However, the results now measured by CapFrameX call for two fundamental addenda. The first, once again, is that the processor settings were not factory defaults. CapFrameX previously found that in Hogwarts Legacy on the Core i9-13900K, turning off the small (E) cores resulted in lower power consumption and higher performance, so the processor was tested with only the large (P) cores active. The Ryzen 9 7950X, on the other hand, ran with all cores active and with PBO (Precision Boost Overdrive) enabled. This feature raises core clocks under multi-core workloads, which is not a situation most games benefit from. Moreover, except for some boards that offer a superior implementation, PBO often increases power consumption more than performance, so I would consider its use in a gaming benchmark more or less unnecessary from a performance perspective, and more (rather than less) unnecessary from a consumption perspective.
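As an aside, disabling the small cores does not necessarily require a trip to the BIOS; a similar effect can be approximated by restricting a game's CPU affinity to the large cores. Below is a minimal Python sketch using the psutil library. The logical-CPU numbering and the executable name are assumptions for illustration, and this is not the procedure CapFrameX actually used.

```python
import psutil

# Assumed layout on a Core i9-13900K: the 8 P-cores with Hyper-Threading
# enumerate as logical CPUs 0-15, the 16 E-cores as 16-31. Verify the
# numbering on your own system before using this.
P_CORE_LOGICAL_CPUS = list(range(16))

def pin_to_p_cores(process_name: str) -> None:
    """Restrict every running process with the given name to the P-cores."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            try:
                proc.cpu_affinity(P_CORE_LOGICAL_CPUS)
                print(f"Pinned PID {proc.pid} to P-cores")
            except psutil.AccessDenied:
                print(f"No permission to change PID {proc.pid}")

# "HogwartsLegacy.exe" is a hypothetical executable name for illustration.
pin_to_p_cores("HogwartsLegacy.exe")
```

Note that affinity pinning only keeps the game's threads off the E-cores; unlike disabling them in the BIOS, the small cores remain online for background processes.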
The second addendum is that although the configuration built on AMD hardware achieves the performance of a setup with the significantly more expensive GeForce RTX 4090, it cannot be ignored that this result concerns one specific game, and it would be inappropriate to generalize it in any way.