A recent article on the utility of multiple cores has been making the rounds. Despite being largely a copy/paste of other articles and graphics, with a smidge of editorial commentary, it is anxiously heralded by dual-core owners as purchase justification in the face of progressing technology.
[As fair disclosure, let me say that I'm about to purchase a quad-core processor based system, and this article and its sources did absolutely nothing to dissuade me from this choice.]
The meat of the article (or rather the articles that are referenced by the article: someone else did the dirty, arduous footwork of benchmarking) consists of a showdown between a 2.4GHz quad-core and a 3.0GHz dual-core, which is reasonable given that they're comparable in price [at writing, the 3.0GHz dual-core E6850 can be had for $384 CDN, while the 2.4GHz quad-core Q6600 is $319 CDN]. Given that many games and applications are effectively single-threaded as a legacy of lowest-common-denominator development, the faster-clocked dual-core processor predictably takes the lead at that price point in such fundamentally synthetic benchmarks.
Aside from the questionable "it's good to have one extra core to allow you to kill bad processes" premise (what if those bad processes are multithreaded? Do you just have to buy bad-process-threads+1 cores? Maybe set the affinity such that you've dedicated a core solely for the task manager? In the real world of modern schedulers, the only time you can't get control of the machine to kill a rogue process is because of some absolutely atrocious elements of the implementation of Windows, and a scheduler that is effectively broken in the face of some situations. Neither is necessarily improved by more cores), what really gets me about the whole exercise is how utterly synthetic it really is, using contrived benchmarks instead of rationally considering how people actually use their PCs, and where their real need for more power comes from.
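The affinity quip isn't entirely hypothetical, for what it's worth: every mainstream OS lets you pin a process to a core. A minimal sketch, assuming a Linux-style scheduler API (Windows exposes the same concept via SetProcessAffinityMask); illustrative, not a recommendation:

```python
import os

# os.sched_setaffinity / sched_getaffinity are Linux-only.
# Process id 0 means "the current process".
original = os.sched_getaffinity(0)

# Pin this process to a single core, so a runaway multithreaded
# process chewing up the other cores can't starve it.
one_core = {min(original)}
os.sched_setaffinity(0, one_core)
assert os.sched_getaffinity(0) == one_core

# Hand the full core set back when done.
os.sched_setaffinity(0, original)
```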
Firstly, it largely focuses on game benchmarks. Even if gaming performance is pertinent to the reader, for the majority of users playing the majority of games, the video card is far more of a bottleneck than the processor (even if that processor is a dated affair). I'm saying this as a long-time computer gamer, one who finds the stuttering framerate on even top-of-the-line game consoles intolerable: unless you've turned every quality setting to low and you're running at 800×600, it's doubtful that you're going to even measure, much less notice, a difference between a modern 2.4GHz core and a 3.0GHz core. Indeed, the very first benchmark I looked at in the referenced article says exactly that: "For this test, we set Oblivion's graphical quality to 'Medium' but with HDR lighting enabled and vsync disabled, at 800×600 resolution". They did that to create a scenario where the differences are measurable.
So if you plan to game in a contrived way for the purpose of demonstrating CPU differences in benchmarks, then you'd better pay attention to core speed.
In the real world of gaming, after you've adjusted the quality and resolution settings appropriately for your video card, the primary slowdowns tend to come about because of external applications rudely stealing your thread quanta: I'm about to toss the grenade into the bunker in Battlefield 2 when Windows Search suddenly decides that this is a good time to rebuild its index corpus, for instance, so instead the grenade falls to the ground and I take out my entire squad. (Seriously, Windows Search guys: when a full-screen DirectX game is running, it probably isn't a good time to decide that the PC is "idle".)
For moments like that, more cores make a huge difference. Dual-cores would be sufficient for that simple scenario, but what if my PC is even more active, as it always is? Perhaps the blog updater is running an update, I'm FTPing some files, a download is happening, and I'm gaming.
Every core works towards the ultimate goal of eliminating the real-world problem of cycle theft from my hardcore gaming.
Presuming that you've passed a reasonable bar (one long behind you when you're talking about a 2.4GHz Core 2), more cores will realistically improve things for gamers enjoying their vice in the real world. One day we might even have a world where we don't have to shut down services and trawl Task Manager, violently killing processes before launching a game, fearful that something will disrupt our immersion.
My second problem with the article is that it doesn't question what people are really waiting for nowadays. Personally, I see almost no difference between virtually any mainstream PCs for the overwhelming majority of day-to-day operations (and this is as a developer): most activities are so fast that the difference is negligible. I just switched laptops from a single-core 1.6GHz Pentium M to one with a Core 2 Duo T7200, a significant improvement, and from a day-to-day perspective I've indeed noticed that the new laptop has a better screen, a faster hard drive, and much better graphics, but the computational difference goes largely unnoticed.
Until, humorously, I do something that is highly parallelizable, such as encoding a video pulled in from the miniDV video camera. In that case the dual-core processor strides to a massive lead over its single-core predecessor. If it were a quad-core, it would storm even further ahead, even with the loss of frequency.
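The encoding case generalizes: when each frame can be processed independently, the work maps naturally onto however many cores you have. A minimal sketch of the shape of that workload, with a hypothetical CPU-bound `encode_frame` stand-in rather than a real codec:

```python
import multiprocessing as mp

def encode_frame(frame):
    # Hypothetical stand-in for a CPU-bound per-frame encode step;
    # a real encoder would be doing DCTs, motion estimation, etc.
    return sum((b * b) % 255 for b in frame)

def encode_video(frames, workers):
    # Frames are independent, so the pool farms them out across
    # workers (typically one per core); wall-clock time shrinks
    # roughly with core count.
    with mp.Pool(processes=workers) as pool:
        return pool.map(encode_frame, frames)

if __name__ == "__main__":
    frames = [bytes(range(256)) * 16 for _ in range(8)]
    quad = encode_video(frames, workers=4)
    serial = [encode_frame(f) for f in frames]
    assert quad == serial  # same answer, just computed in parallel
```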
For something that I'm actually waiting on, more cores = more goodness.
I would definitely choose the quad-core processor for the legacy software reality that we have today, despite the many applications that individually fail to exploit the possibility. My conviction is amplified by the tremendous strides that application developers are making to parallelize their products. Once you've parallelized to 2 cores, it's generally a very small step to parallelize to 4 cores, or n cores for that matter.
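That last claim is easy to see in code: once the work has been decomposed into independent units, the degree of parallelism is usually just a parameter. A toy sketch (the `crunch` workload is hypothetical, not from the article):

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(x):
    # Stand-in for any CPU-bound unit of work.
    return x * x

def run_parallel(items, n_cores):
    # Going from 2-way to 4-way (or n-way) parallelism changes only
    # this argument; the decomposition itself is untouched.
    with ProcessPoolExecutor(max_workers=n_cores) as pool:
        return list(pool.map(crunch, items))

if __name__ == "__main__":
    data = list(range(16))
    # Identical results regardless of how many cores are thrown at it.
    assert run_parallel(data, 2) == run_parallel(data, 4)
```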
Bring on the cores!