Why you run out of memory
Published on September 2, 2007 By Draginol In GalCiv Journals

32-bit gaming is going to come to an end.  Not today. Not tomorrow, but a lot sooner than most people think.

That's because no matter how much memory your PC has, no matter how much virtual memory you have, a given process on a 32-bit Windows machine only gets 2 gigabytes of memory (if the OS had been better designed, it would have been 4 gigs but that's another story).

Occasionally you run into people in the forums who say "I got an out of memory error".  And for months we couldn't figure it out.  We don't have any memory leaks that we know of and the people who reported it had plenty of virtual memory.  So what was the deal?

The problem was a basic misunderstanding of how memory in Windows is managed.  We (myself included) thought that each process in Windows might only get 2 gigabytes of memory, but that if it ran out, it would simply swap to the disk drive.  Thus, if a user had a large enough page file, no problem.  But that's not how it works.  Once a process hits 2 gigabytes, the system simply won't allocate it any more memory.  The allocation fails and you end up with a crashed game.
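
To make the failure mode concrete, here is a minimal sketch (illustrative only, not GalCiv code) of what happens on 32-bit Windows: allocations start returning NULL near the 2 gigabyte mark no matter how big the page file is.

    #include <windows.h>
    #include <cstdio>

    int main() {
        size_t total = 0;
        for (;;) {
            // Reserve and commit 16 MB chunks until the address space runs out.
            void* p = VirtualAlloc(NULL, 16 * 1024 * 1024,
                                   MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (!p) break;  // VirtualAlloc returns NULL near the ~2 GB mark
            total += 16 * 1024 * 1024;
        }
        printf("Allocated about %u MB before failure\n",
               (unsigned)(total / (1024 * 1024)));
        return 0;
    }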

This is a very significant problem.  In Galactic Civilizations II v1.7, we'll at least be able to address this with more aggressive deallocation routines (which I really hate having to do; I prefer to keep something around once it's been loaded, for performance's sake -- I've always been a proponent of performance over memory use).  But we'll be able to do it here without any noticeable effect on performance.

No, the real problem is in future games. If 2 gigabytes is the limit and a relatively low impact game like Galactic Civilizations is already running into it (and it's no memory hog), what's coming up next?  How about this -- when your video card runs out of memory for textures, it goes to system memory. And I think (but haven't tested this) that the memory it grabs belongs to the game process. 

Console game developers would simply laugh at our complaints and say that we just need to get better at de-allocating memory.  But that's only a short-term solution.  Gamers, particularly PC gamers, want to see their games get better looking and more sophisticated. 

So at some point in the next few years, high-end games are going to start requiring 64-bit machines, and serious gamers are going to need them.  It'll probably be several years before that becomes common, but it's coming. 

The good short-term news for GalCiv players is that we'll be able to have larger sized galaxies in GalCiv II: Twilight of the Arnor without running out of memory and users of GalCiv II will be able to avoid running out of memory once they get v1.7.


Comments (Page 2)
on Sep 03, 2007
64-bit has its own problems on the hardware side, though. How many people here have tried routing a 64-bit bus line on a microprocessor before? This isn't like the "lack of vision" problem Bill Gates had with the 640K memory thing. You pay a significant performance, price, and power penalty when you go 64-bit on the hardware. That's why things like the Intel Itanium were outrageously over-priced, YEARS behind schedule, and found little market.


I tend to disagree. 64bit CPUs have been around for ages (since 1961), and the technology has been well-tested. Lots of supercomputers and other big non-consumer hardware already uses 64bit computing. Only recently has 64bit entered the consumer world.

Note that the Itanium still isn't consumer hardware, the Itanium has been designed for servers and high performance computing. The reason it failed isn't due to 64bit, it's due to its overall architecture:
"Itanium's architecture differs dramatically from the x86 and x86-64 architectures used in other Intel processors. The architecture is based on explicit instruction-level parallelism, with the compiler making the decisions about which instructions to execute in parallel."

And that's the problem: You need a special compiler and you need to write your code in a special way so that the compiler can easily autoparallelize it.

This gets even more difficult on other architectures, like the Cell as used in the PS3. Still, most console titles only use a fraction of the Cell's capability, mainly because most console games are ported to every single console there is. And since these games are made by companies interested in profit rather than raw performance, they don't optimize their code for one platform.

Now back to 64bit in the consumer world: The current 64bit consumer level CPUs are a dream. Fully compatible with old 32bit code and a 64bit instruction set that's very similar to the 32bit one (which makes it relatively easy for compiler makers regarding optimization).

Greatly lifted memory limits, virtualization support, multiple cores -- those are the key features of the new consumer level CPUs in my opinion. To my knowledge, the next version of Windows will no longer be available in a 32bit edition, so in five years or so I think no new computers will ship with a 32bit-only CPU. Skip forward a few more years and CPU vendors will start to throw the legacy x86 stuff overboard.

It's a slow process, but be glad it's a smooth transition; nobody is forced to throw away all their old software and start from scratch from one day to the next.
on Sep 03, 2007
How about this -- when your video card runs out of memory for textures, it goes to system memory. And I think (but haven't tested this) that the memory it grabs belongs to the game process.


My understanding is that however much texture memory you're using on the video card is automatically duplicated within the game's process already (either the game does it itself, or it's handled by DirectX). So for people (like myself) with 512MB graphics cards, if the game uses up 500MB (or so) of memory for textures, it's going to use up 500MB of that process' virtual address space to store a duplicate of everything stored in video RAM. That is, unless it's running DirectX 10, which apparently doesn't have to do that. Now if you run out of texture memory, it starts allocating system memory (via the chipset GART for AGP cards or the onboard GART for PCI Express cards).

Microsoft issued a patch for Vista just recently that addresses something related to this, though I don't understand the specifics, I just know that it cut down my overall memory usage as well as the memory used for my games.

I'm curious. Is GC2 large address-aware? Most games these days should be.
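
For the curious, "large address aware" is just a flag the linker sets with /LARGEADDRESSAWARE (IMAGE_FILE_LARGE_ADDRESS_AWARE in the PE header). A minimal sketch -- my own illustration, not anything from GalCiv -- of a Win32 program checking its own header for it:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // GetModuleHandle(NULL) returns the base address of our own .exe,
        // which starts with the DOS header, which points at the NT headers.
        HMODULE self = GetModuleHandle(NULL);
        IMAGE_DOS_HEADER* dos = (IMAGE_DOS_HEADER*)self;
        IMAGE_NT_HEADERS* nt  = (IMAGE_NT_HEADERS*)((BYTE*)self + dos->e_lfanew);
        bool laa = (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE) != 0;
        printf("Large address aware: %s\n", laa ? "yes" : "no");
        return 0;
    }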
on Sep 03, 2007

As I said, there isn't a memory leak in GalCiv.  What happens is that through the course of the game, more and more ships, ship designs, etc. get created, which uses more and more memory (especially on larger galaxies).

What can be done (and is being done) is to more aggressively deallocate memory for things that aren't on screen, but there is a price to pay - performance.  The difference, hopefully, will be negligible.
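
In spirit, the idea is something like this (a hypothetical sketch, not actual GalCiv code): throw away the heavy graphical data for anything off screen and rebuild it on demand.

    struct RenderData { /* meshes, textures, ... */ };

    struct Ship {
        RenderData* renderData;   // NULL while the ship is off screen

        Ship() : renderData(0) {}

        void SetOnScreen(bool onScreen) {
            if (!onScreen && renderData) {
                delete renderData;            // give the memory back right away
                renderData = 0;
            } else if (onScreen && !renderData) {
                renderData = new RenderData;  // small rebuild cost when it scrolls back into view
            }
        }
    };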

People really need to quit assuming that because an application or game consumes memory through the course of running that it has some sort of "leak".  Because when people are so quick to yell "leak" it may distract developers from the report of a legitimate leak.

on Sep 03, 2007
In IC design we are drawing millions and millions of shapes, and we are staying comfortably below 1 Gig. of memory. The performance optimization we do is that if you zoom out, most of the smaller shapes don't need to be drawn. But they're still resident in memory. I have to wonder what exactly in GalCiv is hogging the memory. I would expect the galaxy itself to be a sparse matrix--you just have an array of pointers, most of which are nil. And then the ships & planet textures I would expect to be templates. If you're flattening the data structure and storing the graphical information for each & every instance of the object, that would do it.

Maybe what Stardock is doing already is kind of similar to what we do: you assume the user doesn't zoom in that much, and you don't store all that intricate data in RAM. Because if you store ALL of the graphical data for a gigantic galaxy, as if the user were zoomed in on all of it, that's a killer. But the user is not going to zoom in on an entire galaxy. Even then, I would expect the planet & ship textures to be stored in their own objects, not on every instance.
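
The "templates" idea would look roughly like this (a hypothetical sketch, not actual GalCiv data structures): thousands of cheap instances all pointing at one shared design, so the big texture/mesh data exists once per design rather than once per ship.

    #include <vector>

    struct ShipDesign {                          // shared: loaded once per design
        std::vector<unsigned char> textureData;  // the big stuff lives here
        // ... mesh data, stats, etc.
    };

    struct ShipInstance {                        // cheap: thousands of these
        const ShipDesign* design;                // points at the shared template
        float x, y;                              // per-instance state only
        int   hitPoints;
    };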


P.S. Contrary to what Intel and AMD marketing say, their microprocessor architectures are not truly 64-bit native. They'll fetch 64-bit assembly instructions and then decode them into 32-bit micro-ops (or uOps). It's emulation, the same way you would do it in software. The Itanium and PowerPC64, by contrast, are native. If you compare benchmarks of 32-bit software vs. 64-bit on PowerPC64, you will find the 32-bit applications are slightly slower (that's because you have 64-bit hardware emulating 32-bit). But on the Pentium 4 and Hammer architectures, it's the other way around: you take a hit running 64-bit software on their "64-bit machines". We are trying all sorts of tricks to reduce the performance penalty of running 64-bit, but if companies sank equal man-hours into 32-bit and 64-bit processor designs, the 32-bit design would always outperform the 64-bit one. Maybe that's a sacrifice people are willing to make in exchange for the larger memory space, but really you're better off staying 32-bit for as long as you can.
on Sep 03, 2007

but really you're better off staying 32-bit for as long as you can.

But even if GalCiv fully optimized their code and made it nice and neat, there is still the limitation problem that we are already running into in non-gaming apps, for the simple reason that Microsoft does not optimize code, period.  So running Office and IE can kill you in 2gb.  Or even 4 if the "bug" were eliminated.  It may not be time to go to 64 bit, but it is time to start planning to move to it in the near future.

on Sep 03, 2007
If IE is hitting a 2 Gig. limit, we are in a sad state indeed. That means you're transmitting millions of shapes over the net.

It would be nice if the industry would accept 48-bit instead of 64. That would mean a 50% area penalty on our chips instead of 100%. I can't imagine what application requires that the number of bits be a power of 2.
on Sep 03, 2007
Any word on memory fragmentation? (Actually fragmentation of the logical address space assigned to the application)

If plenty of space is still free, fragmentation won't (ever) be an issue. But if the memory limit gets reached, fragmentation can make things worse.
on Sep 07, 2007
I'm not a programmer, but I'm pretty sure that if you set a large address aware flag (or something; I don't remember its exact name) the x64 versions of Windows would be able to give the game 3 or 4 GiB of address space. This could be particularly useful for Vista, since some changes to how graphics memory is handled in Vista make the process use even more virtual memory than in XP. There have been a few articles on anandtech dealing with this issue (Part 1, Part 2 and Part 3). If you're really going to be getting close to the limit on XP, then people with modern graphics cards on Vista may get in trouble, but apparently this can be fixed, at least for the x64 versions, if you set the appropriate flag.

I'm sure you know all this much better than I do, but just thought I'd mention it anyway

edit: I see Pyrion mentioned this earlier. As you can see in the anandtech articles, a very recent game that is actually crashing on large maps because of this issue (Supreme Commander) is not large address aware. The developers seem to have really dropped the ball on that one, as it should apparently not be very difficult to support this, and it would have saved a lot of players from a lot of crashes.
on Sep 16, 2007
What advice do I get out of this article?

Buy a new comp with Windows Vista 64-bit and plug in 4 GB of RAM?

Or stick with Windows XP 32-bit and 2 GB of RAM?
on Sep 16, 2007
What advice do I get out of this article?

users of GalCiv II will be able to avoid running out of memory once they get v1.7.
on Sep 16, 2007
Sorry kyro, I meant in general, not specifically for GalCiv2 -- what would the advice be then, considering near-future game development?
on Sep 16, 2007
Regarding memory fragmentation - this is something we are aware of and will be attempting to address in 1.7. There are various things that can be done, such as allocating in bulk and using the Low Fragmentation Heap, though ultimately there is no permanent solution short of some form of garbage collection. Do not worry, the spirit of memory-pinching is still alive!
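
For reference, opting a heap into the Low Fragmentation Heap is a one-call affair on XP/Vista (a minimal sketch, not GalCiv's actual code; whether it helps depends entirely on the allocation pattern):

    #include <windows.h>

    void EnableLowFragmentationHeap() {
        // HeapCompatibilityInformation with a value of 2 requests the LFH
        // for the given heap (here, the default process heap).
        ULONG lfh = 2;
        HeapSetInformation(GetProcessHeap(),
                           HeapCompatibilityInformation,
                           &lfh, sizeof(lfh));
    }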

Large Address Aware only helps if the user has also enabled the USERVA option, so we will probably be providing instructions on that (for both XP and Vista). There is a tradeoff: the kernel isn't just wasting that virtual address space, and increasing the space for user mode may cause problems elsewhere - but not for most people.
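
A quick way to see how much user-mode address space a process actually ended up with (roughly 2 GB by default, more with the USERVA option or on x64) is GlobalMemoryStatusEx -- a minimal sketch, my own illustration:

    #include <windows.h>
    #include <cstdio>

    int main() {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (GlobalMemoryStatusEx(&ms)) {
            // ullTotalVirtual is the size of this process's user-mode
            // virtual address space, which reflects the /3GB and USERVA
            // boot options and the large-address-aware flag.
            printf("User address space: %I64u MB\n",
                   ms.ullTotalVirtual / (1024 * 1024));
        }
        return 0;
    }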
on Sep 16, 2007
GreenReaper: Won't x64 versions of Windows be able to use more memory with "large address aware" without any further settings?

Also, can I take it from your comment that the large address aware flag is set (or is going to be set in the next expansion)? That would certainly put my mind at ease, since the new GFX memory management stuff in Vista coupled with 1 GiB graphics cards and BIG maps in GC2 could potentially be a problem (if I'm reading you guys correctly) if the flag isn't set.
on Sep 16, 2007
Regarding memory fragmentation - this is something we are aware of and will be attempting to address in 1.7. There are various things that can be done, such as allocating in bulk and using the Low Fragmentation Heap


Sounds promising.

though ultimately there is no permanent solution short of some form of garbage collection.


I don't think garbage collection is the ultimate solution. Surely it helps in most cases; however, even with GC there exist usage patterns which trigger some sort of fragmentation.
The only true fix (at the cost of some performance) would be to use double pointers, i.e. the program doesn't hold a pointer to a (logical) memory location, but instead a pointer to a pointer to the memory location. That way a background thread can move things around. This is in some way similar to the MMU, which does it in hardware however.
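
A bare-bones sketch of that double-pointer (handle) idea, purely for illustration: callers keep a handle, the handle holds the real pointer, and a compactor is free to move the block as long as it updates the slot.

    #include <cstddef>
    #include <cstdlib>

    struct Slot { void* ptr; };
    typedef Slot* Handle;

    Handle Alloc(std::size_t bytes) {
        Slot* s = new Slot;
        s->ptr = std::malloc(bytes);
        return s;                  // callers keep the handle, never the raw pointer
    }

    void* Deref(Handle h) {
        return h->ptr;             // every access goes through the slot
    }

    // A background compactor can relocate the block and simply write the
    // new address into h->ptr without the callers ever noticing.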

In general, though, that's pure overkill.

On a sidenote, I am amazed at the shortsightedness of the architecture designers. From 32 bit to 64 bit? History has shown us that all previous limits have been exceeded very quickly due to the exponential growth rate of technology development. Personally I would have opted for 128bit systems.

Do not worry, the spirit of memory-pinching is still alive!


Stop the build scripts! We cannot release it yet. There's still a critical patch to reduce memory consumption by two bytes! Thousands of gamers are awaiting the new version, but won't someone please think of the memory?
on Sep 16, 2007
People really need to quit assuming that because an application or game consumes memory through the course of running that it has some sort of "leak". Because when people are so quick to yell "leak" it may distract developers from the report of a legitimate leak.


on the rare occasion i've reported bugs i stick to a phenomenological approach: which is to say, i describe what i see and how it is a problem for me. i don't presume a thing about the underlying process; i know it's way over my head. it's unfortunate we have a culture that encourages people to want to sound like they know what they're talking about at all times.

as far as the 64 bit issue more broadly goes... it just depresses me, the whole subject. i love gaming, but i hate Windows, but i hate Apple more. it's not some fancy technological argument i have. since the original iMacs, Apple has designed its products (both hardware casing and GUI) like its consumers were kindergartners. but now Windows is going that way too, and i hate it. i miss the days of the DOS prompt. i'm just whining, i guess.