Friday, July 3, 2009

Nvidia Ion Platform: Atom gets GeForce

Intel's Atom CPU and the net-product phenomenon it spawned over the last year have been the fresh talk of the industry in an otherwise pretty regular world. It's not often we see a whole new product segment created and explode - we've covered plenty of netbook and mini-ITX products based on it, but while they have a fantastic little low-power, cost-effective CPU, they are ultimately let down by its pairing with an old, hot northbridge and a feature-minimised southbridge.

Intel's Atom now comes in a dual-core variety and, even though it lacks the all-important out-of-order execution found in virtually every modern CPU, it's very inexpensive as a platform and that has made it extremely popular for basic applications.

The 945GC northbridge and its IGP can be described as "basic" at best, the southbridge has only two SATA ports and few USB options, and the whole package doesn't even get a PCI-Express x16 slot - or much modern PCI-Express connectivity at all, to be honest. At $50 for the package of three chips, though, it has made for extremely cheap platforms in all sorts of official products - netbooks, nettops and mini-ITX boards - as well as more exotic home builds: Smoothwall firewalls, file servers, powerful routers, NAS boxes, games servers - basically anything SFF you can think of.

Nvidia is looking to capitalise on this popularity by offering its own Atom-capable "mGPU", which won't necessarily lower power consumption even though it's a single chip rather than a pair, but should offer a wealth of extra features and graphics power - like comparing the Starship Enterprise to a canoe.

Nvidia has dubbed this new pairing the "Ion" in an attempt to stick with the chemically inclined "Atom" naming from Intel. We suppose "molecule" doesn't have quite the same ring to it, even though "Ion" does leave us chemists thinking of something static or incomplete.


The differences between the chipsets are pretty epic. The Nvidia GeForce 9400 mGPU has plenty of front side bus overhead for overclocking and tons of extra memory bandwidth for the graphics portion, which adds more recent DirectX 10 support, four times the shader capacity and several times the clock speed - not to mention the fact that the Nvidia part does hardware vertex shading whereas the Intel GMA 950 does not.

While mobile and SFF parts are limited in what they can offer, six SATA ports and a PCI-Express x16 slot on a mini-ITX board are not unheard of, and that opens up a bevy of new possibilities in the net-product range - not to mention for enthusiasts who want to fashion the kind of cheap and innovative home builds we mentioned above.

Nvidia has shown that it can squeeze seven USB 2.0 ports, two eSATA ports, 7.1-channel surround sound, Gigabit Ethernet, VGA, dual-link DVI and HDMI (with HDCP) into the tiny pico-ITX form factor (600mL capacity)!

Our concerns lie in the fact that the Atom CPU is probably underpowered for such a solution, and outside of mini-ITX motherboards most manufacturers will probably only use a single 1GB DIMM in single channel - although at least it will be DDR2-800 rather than DDR2-533.

This is not only to keep costs in line with the main Atom brand, but also because Microsoft will only license Windows XP for net-products that include a maximum of 1GB of memory.

Nvidia wasn't able to give us a specific TDP for the GeForce 9400 chipset; however, we have read claims of a 12W TDP for the notebook part, although whether the Ion part is the same we're as yet unsure. 12W would be roughly 10W less than what Intel currently offers for the Atom platform as a whole, and should potentially improve battery life in mobile products.


We appreciate that enthusiasts will jump at the chance to explore greater options on a really affordable platform, and companies in a saturated netbook/nettop market will also be glad of the extra breathing room to explore new products - the PCI-Express x16 slot, extra SATA and generally more of everything will see to that. But Intel's Atom is made for very inexpensive, light-usage products: email, internet and basic computing. Can the Atom CPU keep up with the demands of a more complex chipset? Or is this simply the closest Nvidia can get to an x86 mobile product, seeing as its ultra-mobile Tegra is ARM-based?

Nvidia claims full HD decode and display on 10in-plus displays, but is that even "netbook" any more? Desktop Atom products, maybe, because they will be paired with bigger monitors, but not mobile ones. Can it really decode a Blu-ray movie without dropping frames? Nvidia also highlights the advantages of CUDA - but again, do we look at net-products and want to do video encoding on them? Unlikely.


The biggest hurdle Nvidia faces is Intel and the way it controls its products. For starters, at the time of writing - and ever since launch - manufacturers cannot buy the Atom CPU on its own; it can only be purchased with the chipset as well. This is because Intel wants to protect its low-cost Celeron processors and low-end P43 and G31 chipsets, which afford a much greater feature set.

Like other companies, Intel also guarantees marketing funds for manufacturers if they build from a certain list of products, and "an Atom product" has been on this list since launch. Whether that changes to mean the Atom CPU alone or the whole Atom platform remains to be seen, and we'll be sure to enquire in due course.

Without being able to purchase the CPU on its own, Nvidia's new MCP is basically a non-starter, because Nvidia can't make a competitive product in a very price-sensitive market. In many ways Intel's current insistence on selling the whole platform could leave it in an anti-competitive position: it could be argued that while Nvidia has a front side bus licence, Intel is locking it out of the market.

That could well change though - Intel could see selling the CPU alone as a way to shift more Atom CPUs should AMD get in on the ultra-portable action (hint: it will), so it's not an unreasonable choice. We have contacted Intel to ask about its 2009 policy towards its Atom package and as soon as we get a reply, we'll let you know.

Final Thoughts

Nvidia has created some much-needed potential for a very restrictive, yet immensely popular platform. The GeForce 9400 MCP is a good part and we can't wait to test actual hardware to check how viable the Ion platform is and how well it works with an Atom CPU.

Will Nvidia's Ion be price competitive, and will Intel offer its Atom CPUs on their own? These two factors will determine whether the Ion dream becomes reality, or whether we simply forget about it by breakfast.

Saitek Cyborg Gaming Mouse

Manufacturer: Saitek
UK Price (as reviewed): £31.99 (inc. VAT)
US Price (as reviewed): $59.99 (inc. Delivery)

As far as inputs go, it’s hard to get much more fundamental than the mouse and, as far as gaming devices go, it’s hard to get much more important.

Manufacturers proclaim the fact on the backs of their boxes and in all the hype and press releases, and we all roll our eyes and pretend not to be so stupid as to fall for these marketing ploys… but it's true. A good mouse will last you for years and afford you a Zen-like level of comfort that allows you to take your gaming to the next level.

And a bad mouse? Well, using a bad mouse is like playing a Counter-Strike clan match on a Commodore 64 with a brick for a mouse and a soggy cake for a keyboard. And no hands.

Just what constitutes a good gaming mouse is something that’s pretty hard to define, and there are loads of companies out there which have tried to perfect the formula and create The Ultimate Peripheral.


Some have come close to succeeding. Others have drowned under their own incompetence. Now, as Saitek takes another bash at making what it thinks the ultimate input device should be, we take a look at the Saitek Cyborg Gaming Mouse to see if it can measure up to the task…

Cyborg

The Saitek Cyborg is, as you can probably tell, a mouse that doesn’t want to mess around. It wants to make a statement. It wants to grab your attention. It wants to grab you by the cojones and slap you in the face with them because it doesn’t care what you think.

Which, to be honest, is just as well because when you cut through all the outer layers of flesh and bore down to the cold robotic truth of it all, the Saitek Cyborg is actually quite ugly. Like, really.

It’s red and it’s black and it’s angular and so painfully pseudo-futuristic that just looking at it makes me think that this is the type of thing we’ll be seeing as a joke in the PC version of Fallout 3.


Even the name, Cyborg, labours under the impression that gamers love anything even remotely tech-sounding. True, it isn't as silly-sounding as something like the Boomslang or the DeathAdder, but at least Razer has an ongoing theme, and mice that are actually pretty good despite the names. The Saitek Cyborg, on the other hand, still has to prove that it can handle itself in that regard.

In fact, when you really examine it, the Saitek Cyborg isn't just ugly-looking - it's uncomfortable-looking. The thumb rest especially, with that jutting overhang and flat under-pad.

Still, putting all that aside for the moment, the Saitek Cyborg does have some things going for it, especially in the features department. Everything on the Cyborg can be adjusted, from the resistance of the scroll wheel to the very size of the mouse itself.

So, this is a mouse with more buttons than style, right? A typical example of a feature list dominating the product design, to the detriment of the aesthetic? Well, maybe, but that isn't necessarily a bad thing, and just because the mouse is a little lacking in the looks department doesn't mean that it's a bad mouse in the end.

HP Pavilion dv2-1030ea 12in Ultraportable

Manufacturer: HP
UK Price (as reviewed): £599.00 (inc. VAT)
US Price (as reviewed): $699.99 (ex. Tax)

Netbooks have been one of the big crazes (in computing at least) over the past 20 months or so and they were the cause of a massive spike in laptop sales last year. It's amazing to think how far we've come in such a short space of time when you look back at the first generation of these dinky little devices.

Oh yes, we're talking about the iconic Eee PC 701, which made many compromises but hit a very low price point. In the wake of its success, Asus continued to bang the netbook drum while many other manufacturers joined the party. Intel even developed a CPU specifically targeted at netbooks and Mobile Internet Devices, and it wasn't long before there were some particularly attractive netbooks on the market.

The Acer Aspire One was brilliant, as was the Samsung NC10, while Eee PCs such as the S101 certainly caught our eye. But they're all suffering from feature creep and, with Nvidia's Ion platform just around the corner, that feature creep is going to continue.


What this has proven is that what consumers really wanted all along was a cheap, portable and capable laptop. This is where AMD hopes its Yukon platform fits in.

AMD first talked about the new platform at the Consumer Electronics Show in January, and the chip maker made some pretty bold claims at the time. These were naturally treated with some scepticism on our part, and we came away feeling a little underwhelmed given that AMD itself had said the next generation of the platform wouldn't be all that far behind - but that's not here yet.

What we do have here, though, is HP's Pavilion dv2 ultra-portable notebook. It's based on the Yukon platform which, in this instance, sports a single-core Athlon Neo MV-40 processor (1.6GHz, 512KB L2 cache) and the AMD 690T/SB600 chipset, which features an ATI Mobility Radeon HD 3410 integrated graphics processor. There are a number of different processor options available, but the Athlon Neo MV-40 is HP's weapon of choice in the dv2.


According to AMD, the targets for Yukon are devices like the dv2, which are a cut above current netbooks, and it hopes Yukon will fit into form factors typically associated with high-end ultra-portable notebooks like the MacBook Air, ThinkPad X301 and Vaio TZ series. Unlike those high-end machines, however, Yukon-based notebooks promise to shed one of the biggest turn-offs associated with ultra-portables - they'll be affordable.

Rather than just calling these lightweight, affordable laptops 'affordable ultra-portables', though, AMD felt the need to try and introduce a new class of device known as the 'ultra-thin notebook'. From what we can see, there are no hard and fast rules to differentiate it from the typically expensive ultra-portable laptops apart from price, so we really don't understand the need for a new segment in an already oversaturated market.

Specification Summary:

  • AMD Athlon Neo MV-40 processor (1.6GHz, single core, 512KB L2 cache)
  • AMD 690T/SB600 chipset with ATI Mobility Radeon HD 3410 graphics
  • 2GB 640MHz DDR2-SDRAM
  • Glossy 12.1-inch LED-backlit display (1,280 x 800 native resolution)
  • 320GB Western Digital Scorpio Blue 5,400rpm hard drive
  • Three USB 2.0 ports, two 3.5mm audio jacks (headphone and microphone), 10/100 Ethernet, HDMI and D-SUB
  • Integrated 802.11b/g wireless and Bluetooth 2.0
  • Integrated five-in-one media card reader (SD/MS/MS Pro/MMC/XD)
  • Stereo speakers, built-in webcam with microphone
  • Removable four-cell 2,900mAh Lithium-Ion battery
  • Windows Vista Home Premium Service Pack 1

Thursday, July 2, 2009

Radeon HD 4890 vs GeForce GTX 275

ATI Radeon HD 4890 1GB

Manufacturer: AMD
UK Price (as reviewed): Typical price £220 (inc. VAT)
US Price (as reviewed): Typical price $249.99 (ex. Tax)

Nvidia GeForce GTX 275 896MB

Manufacturer: Nvidia
UK Price (as reviewed): £220 (inc. VAT)
US Price (as reviewed): MSRP $249 (ex. Tax)

Introducing the new competition from ATI and Nvidia

It’s been a while since we’ve seen ATI and Nvidia scrap it out with two comparably priced cards launching on the same day, but this month they’re right back at it. The Radeon HD 4890 is best seen as an overclocked 1GB Radeon HD 4870 – the architecture is the same and it’s still built using a 55nm process, but ATI has slightly stretched the design to widen the internal copper interconnects and put a crucial few more atoms of silicon between the transistors. The stretched design reduces transistor power leakage and strengthens signal integrity, which in turn allows the chip to run at higher frequencies.

Stock-clocked Radeon HD 4890s ship with a GPU clocked at 850MHz, which is 100MHz faster than the Radeon HD 4870's engine clock, and the Qimonda GDDR5 memory is now running at 975MHz (3.9GHz effective) rather than 900MHz (3.6GHz effective). ATI claims that the RV790 GPU can be easily overclocked to 1GHz and beyond (with watercooling); however, we're dubious about how much the stock cooler can take.


With the new GeForce GTX 275, Nvidia is sticking to its strategy of modifying its 55nm GT200 GPU to suit whichever price point it fancies hitting. The GTX 275 therefore has 240 stream processors and a 448-bit memory interface. This means that the GTX 275 is very similar to the GPUs used in the GeForce GTX 295, differing only in clock speeds.

The twin GPUs in the GTX 295 operate at 576MHz, compared to 633MHz for the GTX 275; likewise, the stream processors of the GTX 295 run at 1,242MHz rather than the 1,404MHz of the GTX 275. Memory is still GDDR3, and the GTX 275 has 896MB of it running at 1,134MHz (2,268MHz effective) rather than the 999MHz (1,998MHz effective) of the GTX 295. Why is the GTX 275 GPU clocked faster than those of the GTX 295? Because it has a dedicated dual-slot heatsink for just one GPU, two 6-pin power connectors all to itself and more PCB space underneath for better power hardware.
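If you fancy checking the arithmetic, the effective clocks and bandwidth figures come from two simple rules: GDDR3 transfers data twice per base clock (GDDR5 four times), and peak bandwidth is the effective rate multiplied by the bus width. Here's a quick Python sketch of that sum - note the HD 4890's 256-bit bus isn't quoted in this article, so treat that figure as our assumption:

    # Effective memory clock and peak bandwidth arithmetic for these cards.
    # GDDR3 transfers twice per base clock; GDDR5 transfers four times.

    def effective_mhz(base_mhz: float, transfers_per_clock: int) -> float:
        return base_mhz * transfers_per_clock

    def bandwidth_gb_s(eff_mhz: float, bus_width_bits: int) -> float:
        # MHz x bits / 8 gives MB/s; divide by 1,000 for GB/s
        return eff_mhz * bus_width_bits / 8 / 1000

    # GeForce GTX 275: 1,134MHz GDDR3 on a 448-bit bus
    print(effective_mhz(1134, 2))                          # 2,268MHz effective
    print(bandwidth_gb_s(2268, 448))                       # ~127GB/s

    # GeForce GTX 295, per GPU: 999MHz GDDR3, 448-bit bus
    print(bandwidth_gb_s(effective_mhz(999, 2), 448))      # ~112GB/s

    # Radeon HD 4890: 975MHz GDDR5 on a 256-bit bus (our assumption)
    print(bandwidth_gb_s(effective_mhz(975, 4), 256))      # ~125GB/s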

In terms of price, neither of these new cards will break the bank - both cost roughly £220, give or take a few pounds. However, Palit has produced an own-design version of the GTX 275 for £199.99 inc. VAT on launch day. As we were given a reference card to test, none of the following data applies to that Palit card, so any price/performance comparisons we make in this article are based on the prices for reference-design GTX 275 cards.

To see how the two new GPUs stack up in the line-ups of the two graphics companies, we've knocked together the handy specs table above, and we've also got details of a few partner cards over the next two pages. Unfortunately, Nvidia was less "ready" than AMD at this launch, playing the role of the reactionary party: only a reference card was available to us in time. In comparison, no fewer than five AMD partners got us Radeon HD 4890 cards for launch day, although one had an issue with its artwork (no, not a "wardrobe malfunction").

Nvidia GeForce GTX 275

Core Clock: 633MHz
Shader Clock: 1,404MHz
Memory Clock: 2,268MHz (effective)
Memory: 896MB GDDR3

You could easily mistake Nvidia's GeForce GTX 275 for a GeForce GTX 285 without knowing the difference or seeing a sticker. Both use two 6-pin power connectors, both are the same length and both are very black. The only differences are what appears to be an optimised (cost-reduced) PCB and two fewer DRAM chips, because of the smaller memory bus and total footprint.


The cooler has been changed to more closely match that of the GeForce GTS 250 (née 9800 GTX) - a single moulded piece of plastic instead of the metal grill on the outside edge. Like all current GT200 derivatives, sound over HDMI still requires an S/PDIF pass-through cable plugged in near the power connectors.

Early Look: Asus Maximus III Formula

Manufacturer: Asus

Asus' latest addition to its Republic of Gamers line of motherboards has popped its head above the clouds, in the form of the third revision of the Maximus. Having first launched on X38 with some dodgy, mishmashed blue heatsinks, the revised Maximus II Formula on P45 was very much loved here at bit-tech.

The P55-based Maximus III Formula is the newest model in the Republic of Gamers line and follows on from the Maximus II Formula quite closely in design. As we can see from the pictures, the red and black heatsinks make a welcome return.

The southbridge gets a nice fat heatsink, even though the P55 needs next to no cooling (we've seen it running without any at all). The MOSFET heatsinks surrounding the CPU socket are particularly low-profile, with a fat, flat heatpipe circling two sides of the socket into a larger central heatsink, where the northbridge would traditionally sit on other boards. In this case, though, nothing is cooled underneath it - we expect some backlit bling to illuminate the RoG logo, as usual.

The heatpipe doesn't reach the top heatsink, although both heatsinks can be unscrewed and replaced at the discretion of the end user. To be honest, if people do replace anything we'd expect the whole lot to go - the central heatsink will do little but get in the way of more elaborate cooling setups.


The four DDR3 DIMM slots for dual-channel memory get three phases of power regulation, while the CPU has an odd number at 19, although we've yet to confirm that either is a "real" phase count. Instead of a single fat Fujitsu capacitor as on a few previous RoG boards (the Rampage Extreme, for example), Asus has gone for many, many smaller capacitors to smooth out the current flow.

This is especially important considering the tax that PCI-Express (including multi-GPU), a memory controller and four CPU cores could potentially place on this area. We can only guess at which capacitors will be used, but we expect the usual 50,000-hour solid aluminium-capped Fujitsu parts, as on previous RoG boards.

There's space for the Intel Braidwood NAND flash socket under the memory slots - we expect the P57-based Maximus III Extreme to have that, but its usefulness has yet to be determined. Since the two chipsets (P55 and P57) are socket compatible, with only this feature differentiating them, it makes sense to have just one PCB design for both.


Of the three long PCI-Express slots, the two red ones run as either a single x16 or dual x8s at PCIe 2.0 bandwidth, and will be suitable for both SLI and CrossFire multi-GPU setups. With Intel continuing to be stingy with PCI-Express on the P55 - referred to as "ICH10.5R" by some motherboard manufacturers - the bottom slot is only an x4, and we expect using it will disable the two PCI-Express x1 slots above.

Of those PCI-Express x1 slots, the uppermost one, above the primary graphics slot, is both the most useful and the most useless, as it backs right into the central heatsink. In this respect, we can see the central heatsink being removed - sod the heatpipe. That said, this slot will also be used by Asus' (supplied) add-in audio card, about which we have no details at all yet - we can only guess at similar support to the last one: software X-Fi and an ADI chipset, perhaps. Finally, two PCI slots still get squeezed in between as well, making an impressive total of seven expansion slots.

See the MemOK button above? Well, Asus can't test every conceivable memory kit out there, and sometimes you can try a new kit six months down the line and the board just won't boot. The MemOK function is Asus' proprietary way of forcing the board to try all sorts of different memory configurations to achieve a successful POST. It'll sit and cycle for a while as it works through a set of iterative steps, then should finally kick in with compatible settings - a welcome fail-safe feature.
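In spirit, what's being described is a simple fall-back loop. The sketch below is purely our illustration of the idea in Python - the candidate settings and the POST check are invented for the demo, and Asus' actual firmware logic isn't public:

    # A loose illustration of a MemOK-style retry loop - not Asus' firmware.
    # The candidate settings and the POST check are invented for the demo.

    from typing import Optional

    CANDIDATE_SETTINGS = [
        {"freq_mhz": 1600, "cas": 9,  "volts": 1.65},
        {"freq_mhz": 1333, "cas": 9,  "volts": 1.65},
        {"freq_mhz": 1333, "cas": 10, "volts": 1.65},
        {"freq_mhz": 1066, "cas": 8,  "volts": 1.50},  # most conservative
    ]

    def try_post(settings: dict) -> bool:
        # Stand-in for a real POST attempt; pretend only relaxed timings work.
        return settings["freq_mhz"] <= 1333 and settings["cas"] >= 10

    def mem_ok() -> Optional[dict]:
        # Step through progressively safer configurations until one POSTs.
        for settings in CANDIDATE_SETTINGS:
            if try_post(settings):
                return settings
        return None  # nothing compatible found

    print(mem_ok())  # -> {'freq_mhz': 1333, 'cas': 10, 'volts': 1.65}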

DDR3: Kingston and OCZ at 1333MHz

We had a first look at DDR3 performance back in May with an engineering sample of Corsair's DDR3 DHX memory. However, as some of the first DDR3 modules off the production line, Corsair's were rated at CAS-9 latency, which is pretty high in comparison to the lower-latency CAS-7 modules we have on test today.

All right, let's not start whining about "latencies going up" or "how is seven low?", because it doesn't work quite like that. DDR transferred two bits of data per clock cycle, DDR2 transferred four per clock and now DDR3 transfers eight.

In addition to the change of topology making access take longer, as the signal now goes through every bank, the extra clocks of latency are largely a response to more data being sent per clock and frequencies being scaled up.

The latency criticism is true to a certain extent: 1,333MHz equates to 0.75ns per clock, which makes seven CAS clocks 5.25ns, whereas 800MHz equates to a longer 1.25ns per clock but just three CAS clocks, or 3.75ns. You have to drop the 1,333MHz CAS latency down to CAS-5 to reach the same 3.75ns, but you've also got nearly twice as much available bandwidth: 11,000 versus 6,400MB/sec.
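That arithmetic is easy to check yourself. Here's a short Python sketch using the same effective-clock convention as the paragraph above (the 11,000MB/sec figure corresponds to the 1,375MHz rating of PC3-11000 modules rather than the JEDEC 1,333MHz):

    # Convert quoted CAS latencies (in clocks) into absolute nanoseconds,
    # using the effective data-rate convention used in the article.

    def cas_ns(effective_mhz: float, cas_clocks: int) -> float:
        # Absolute CAS latency in nanoseconds.
        return 1000.0 / effective_mhz * cas_clocks

    def peak_mb_s(effective_mhz: float) -> float:
        # Peak bandwidth of a 64-bit (8-byte) channel in MB/s.
        return effective_mhz * 8

    print(cas_ns(1333, 7))   # ~5.25ns - DDR3-1333 at CAS-7
    print(cas_ns(800, 3))    # 3.75ns  - DDR2-800 at CAS-3
    print(cas_ns(1333, 5))   # ~3.75ns - DDR3 needs CAS-5 to match
    print(peak_mb_s(800))    # 6,400MB/s
    print(peak_mb_s(1375))   # 11,000MB/s - PC3-11000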

This means DDR3 should work better in a high-throughput, longer-latency scenario. It's somewhat ironic, then, that this is exactly the kind of memory system that suited the long pipeline of the old Pentium 4; from experience, the Core 2 architecture instead prefers short, sharp accesses in a low-latency memory environment rather than needing a ton of bandwidth. The Core 2 has a ton of cache and far better pre-fetchers than anything on the P4, which are meant to "hide" memory access latency - but those pre-fetchers aren't perfect.

Today, we've got modules from Kingston and OCZ on the test bench and without further ado, let's have a look at the modules...

Kingston HyperX KHX11000D3LLK2/2G

Manufacturer: Kingston
UK Price (as reviewed): £302.68 (inc. VAT)
US Price (as reviewed): $452 (ex. Tax)

The string of letters and numbers in Kingston's model name translates to a PC3-11000 "low-latency" 2GB DDR3 kit. It's strange: even though the industry continues to use PC-bandwidth numbers to represent module speed, fewer and fewer consumers and businesses pay attention to them, because we're not bandwidth limited in this day and age. PC3-11000 translates to a memory clock of 1,375MHz, rather than the official JEDEC speed of 1,333MHz - Kingston either feels that 10,666 doesn't quite have the same ring to it, or it wanted a slight clock edge over its rivals.


Kingston does have an ultra-low-latency (UL) pair of 1,375MHz modules rated at an eye-watering 5-7-5-15, instead of the 7-7-7-20 of the modules we have here - but what's to stop us simply overvolting and overclocking these LL sticks and saving a ton of cash instead? In actual fact, during testing we managed to drop the timings down to 5-6-5-15-1T at 1.85V, fully stable, but only at 1,066MHz rather than 1,333MHz.

Kingston HyperX KHX11000D3LLK2/2G Details:

Kit: 2 x 240-pin DDR3 Double Sided DIMM
Module Size: 2GB Dual Channel Kit (2 x 1GB)
Module Code: Kingston HyperX KHX11000D3LLK2/2G
Rated Speed: 1,375MHz DDR3
Rated Timings: 7-7-7-20 (CAS-tRCD-tRP-tRAS)
Rated Voltage: 1.7V
Memory Chips: Elpida


Kingston uses the same style of heat-spreader that it sells across the entire HyperX series, simply swapping the DDR2 branding for DDR3. There's a new, futuristic style of sleek heat-spreader shown on its website, but it's disappointingly not used on these modules.

Elpida DRAM chips are used on these modules, which seems a common choice given that many other companies use either these or Qimonda chips for 1,066MHz and 1,333MHz modules. DDR3 Micron D9s seem to be the preferred choice for enthusiast modules like Corsair's Dominator and DHX ranges, so we can only assume there might not be much headroom in these modules. However, low latency might be the more appropriate choice for Core 2 CPUs, as it is with DDR2 (since Intel is the only one supporting DDR3 until sometime next year).

Memory: Is more always better?

Have you ever wondered what all of the fuss is about when it comes to memory? Memory manufacturers are going out of their way to sell you what they consider to be the best memory on the market for gamers, while also trying to push you into spending more money on more memory because - according to the memory makers - 2GB of memory is now becoming the industry standard for gaming systems.

The question is whether doubling your current 1GB configuration to 2GB will benefit your gaming experience or not. There are several routes to take, too. Should you keep your current 512MB DIMMs and add another two for a total of 2GB, accepting the drawbacks (if there are any) of the 2T command rate that often comes with four modules?

Or, should you attempt to sell your current modules and purchase a pair of 1GB DIMMs?

Alternatively, could you get away with just sticking with your existing setup?

There are so many options on the market today, and we're going to attempt to answer these questions over the next few pages, while deconstructing some of the confusion that surrounds memory timings.

What does all of the jargon mean?

In the current computer market, where comprehending the intricacies of the latest generation of video cards requires a great deal of stamina, it's easy to be deceived by memory and its many names - it's actually one of the simplest components in your system, and understanding what's best is easy.

Fred's driving along the road in his car when he spots something in the road and has to stop in a hurry - how long does it take him to stop? There are two important components to this problem. The first is the most obvious: assuming something about his brakes, tyres and road surface, his speed will determine how far he travels once he presses the brake pedal.

The second, more subtle component is his reaction time: how long it takes him to press that pedal. Actual experienced memory bandwidth is determined by two analogous factors: how quickly data can be transferred from the memory (memory speed), and how long it takes that transfer to start (memory latency).

Here's a quick primer in memory jargon:

Memory Speed:

The link between the CPU and memory is called the memory bus. Often, it runs at the same speed as the Front Side Bus (FSB), which regulates communication between the CPU and many other system components. The newer Intel Pentium processors seem to run better with the memory on an asynchronous 4:5 memory divider, meaning that the memory runs faster than the front side bus. Bus speeds are measured in MHz - millions of clock cycles per second.

Modern processors transfer eight bytes of data on every clock cycle, and all Athlon 64 and older Socket 478 Pentium 4 CPUs run with a 200MHz memory bus. Newer Intel Pentium CPUs using the LGA775 socket have either a 266MHz or 333MHz memory bus, depending on whether they're an Extreme Edition or not: standard Pentium CPUs use a 266MHz memory bus, while Extreme Editions use a 333MHz bus.

Multiply 400 (200 times two, as Double Data Rate (DDR) memory transfers data at twice the clock speed) by eight, and you get a theoretical maximum of 3,200MB/s - hence the PC3200 speed rating found on the label of most new sticks of DDR memory. With the newer Pentium CPUs, you will see modules labelled PC2-4200 (DDR2-533), PC2-5400 (DDR2-667) and all the way up to PC2-8000 (DDR2-1000).
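In other words, the number on the label is just the bus clock, the DDR multiplier and the eight-byte bus width multiplied together. A trivial Python helper (the function name is ours, purely for illustration) makes the pattern obvious:

    # PC speed-rating arithmetic: effective clock (MHz) x 8 bytes per transfer.

    def pc_rating_mb_s(bus_mhz: float, ddr_multiplier: int = 2) -> float:
        # Theoretical peak transfer rate in MB/s for a 64-bit memory bus.
        effective_mhz = bus_mhz * ddr_multiplier
        return effective_mhz * 8

    print(pc_rating_mb_s(200))   # 3,200  -> PC3200   (DDR400)
    print(pc_rating_mb_s(266))   # ~4,256 -> PC2-4200 (DDR2-533)
    print(pc_rating_mb_s(333))   # ~5,328 -> PC2-5400 (DDR2-667)
    print(pc_rating_mb_s(500))   # 8,000  -> PC2-8000 (DDR2-1000)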

Memory Latency:

Addressing memory is much like reading from a large, multi-page spreadsheet. It doesn't matter how quickly you can read: before you can start, you have to find the page the data you want is on (this is related to tRAS), work your way to the row and column where the data is stored (tRCD), wait a moment once you've found the cell before you can start reading (CAS), and when you get to the end of a row, switch to the next one, which also takes time (tRP).

tRAS is the time required between the bank active command and the precharge command - or, in simpler terms, how long the module must wait before the next memory access can start. It doesn't have a great impact on performance, but it can affect system stability if set incorrectly. The optimal setting ultimately depends on your platform - the best thing to do is to run Memtest86 with various tRAS settings to find the fastest stable setting for your system.

The tRCD timing is the number of clock cycles taken between the issuing of the active command and the read/write command. In this time, the internal row signal settles enough for the charge sensor to amplify it. The lower this is set the better - the optimal setting is usually 2 or 3, depending on how capable your memory is. As with any other memory timing, setting this too low for your memory can cause system instability.

CAS latency is the delay, in clock cycles, between sending a READ command and the moment the first piece of data is available on the outputs. Achieving CAS 2.0 seems to be the holy grail for memory manufacturers, but the trade-off between tight timings and high memory bus speeds is an argument that we hope to settle over the course of this article.

The tRP timing is the number of clock cycles taken between the issuing of a precharge command and the active command - it could also be described as the delay required between deactivating the current row and selecting the next. Combined with tRCD, it determines the time required to switch banks (or rows) and then select the next cell for reading, writing or refreshing.

The command rate is another timing that matters to maximum theoretical memory bandwidth. It's the time needed between a chip being selected and commands being issued to it. Typically this is either 1 or 2 clocks, depending on a number of factors, including the number of memory modules installed, the number of banks and the quality of the modules you've purchased. The majority of memory available today is claimed to run at the faster 1T command rate.

Memory latencies are normally quoted in the format CAS-tRCD-tRP-tRAS plus command rate, an example being 3.0-4-4-8 1T, with the numbers corresponding to the individual latencies quoted in clock cycles. Lower numbers are better, though as a rule of thumb tRAS should be CAS latency plus tRCD plus 2.
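To make the notation concrete, here's a small Python sketch that parses a timing string in that format and applies the rule of thumb - the parsing helper and its names are ours, not any standard tool:

    # Parse a "CAS-tRCD-tRP-tRAS CommandRate" string and sanity-check tRAS
    # against the rule of thumb: tRAS >= CAS + tRCD + 2 (all in clocks).

    def parse_timings(spec: str) -> dict:
        clocks, cmd_rate = spec.split()
        cas, trcd, trp, tras = (float(x) for x in clocks.split("-"))
        return {"cas": cas, "trcd": trcd, "trp": trp,
                "tras": tras, "cmd_rate": cmd_rate}

    def tras_rule_of_thumb(timings: dict) -> bool:
        return timings["tras"] >= timings["cas"] + timings["trcd"] + 2

    t = parse_timings("3.0-4-4-8 1T")
    print(t)                      # {'cas': 3.0, 'trcd': 4.0, 'trp': 4.0, ...}
    print(tras_rule_of_thumb(t))  # False - 8 is a touch under 3+4+2 = 9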


The differences between 2x1GB and 4x512MB - essentially the differences between using a 1T command rate and the slower 2T timing - are relatively small in a selection of today's most popular games. However, there were instances where we found we could play with less frame rate lag and hitching as a result of having 2GB of memory at the 1T command rate.

However, the bottom line is that you will not see the same 10% performance increase in real games that we saw in SiSoft Sandra.

Memory timings make similarly subtle performance differences, so it's a question of whether you can afford the faster modules that are capable of tighter timings. The choice will ultimately depend on what memory you're currently using in your system and how much you're willing to spend.

We'd recommend making the upgrade from 2x512MB to 4x512MB, even with the slight drawbacks we experienced in one of the four games we tested. That's because there are many other uses for your computer (aside from gaming) where you'll see the benefit of 2GB of memory. The general desktop experience is improved by 2GB of RAM, and we're sure you'll never look back if you make the jump.

If you're willing to take a bit of a gamble (dependent on whether you'll be able to sell your current memory), we'd recommend swapping out your current modules for 2GB - it's just a case of whether you choose the cheaper modules with looser timings, or memory capable of reasonably tight timings at DDR400. That will ultimately come down to whether you're planning to overclock or not.