Sunday, February 21, 2010

Editorial: What took you so long?

It took three days, countless idiotic comments (some too obscene for us to approve), and more than a little patience, but finally somebody bothered to do what anybody with half a clue could have done all along. SirBruce, one of our new favorite readers, actually took the time to fire up the only performance monitoring tool that matters (ironically, called “Performance Monitor”), and start logging the Committed Bytes counter.

What he found was that, as we tried to explain in various other posts here, Committed Bytes does not count cache and/or superfetch-related memory allocations. Rather, it parallels (though does not exactly follow – it’s still a separate counter) the “In Use” value from the Resource Monitor utility that everyone in the blogosphere keeps parroting. All of which is important because, as SirBruce noted in his comment, such a result just might prove we were right all along.

You see, just as SirBruce so elegantly demonstrated with “Perfmon,” we, too, query the Committed Bytes counter directly – in the case of our Tracker Agent, via the Performance Data Helper (PDH) libraries. We see what “Perfmon” sees, and those data points show that Committed Bytes is indeed a fairly close approximation of the real physical memory use in the system. This is true regardless of whether SuperFetch is enabled or disabled – or even exists – on the system being monitored. Committed Bytes and SuperFetch are two completely different entities, and as Mr. Kipling put it, “never the twain shall meet.”
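
For anyone who wants to verify this independently from code, here is a minimal sketch of reading the same counter through the PDH API. To be clear, this is not our Tracker Agent's production code; it just shows the general pattern, with error handling trimmed to the bare minimum:

    #include <windows.h>
    #include <pdh.h>
    #include <iostream>

    #pragma comment(lib, "pdh.lib")

    int main()
    {
        PDH_HQUERY query = NULL;
        PDH_HCOUNTER counter = NULL;

        // Open a real-time query and add the system-wide Committed Bytes counter.
        if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS)
            return 1;
        if (PdhAddEnglishCounterW(query, L"\\Memory\\Committed Bytes", 0, &counter) != ERROR_SUCCESS)
            return 1;

        // Collect one sample and format it as a 64-bit integer.
        if (PdhCollectQueryData(query) != ERROR_SUCCESS)
            return 1;

        PDH_FMT_COUNTERVALUE value;
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_LARGE, NULL, &value) != ERROR_SUCCESS)
            return 1;

        std::cout << "Committed Bytes: " << value.largeValue << std::endl;

        PdhCloseQuery(query);
        return 0;
    }

Run it side by side with “Perfmon” and the two readings should track each other, since they come from the same counter.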

For example, on my own development system – a quad-core workstation with 8GB of RAM – the Committed Bytes counter in “Perfmon” typically hovers within a few hundred MB of what Resource Monitor is reporting as “In Use” memory. If I fire up a bunch of memory-intensive tasks (my personal favorite is VMware Workstation with a few 1-2GB VMs and the memory configuration set to keep everything in RAM), Committed Bytes will increase virtually in lock-step with the “In Use” value in RM.

Likewise, if I start closing VMs, Committed Bytes drops, again in virtual lock-step with “In Use.” And, if I really push the system, so that Committed Bytes/“In Use” memory is pegged at or near the 8GB mark (i.e. my PC’s total RAM size), the other critical “Perfmon” counters we record with our agent – Memory\Pages In/sec and Paging File\% Usage – start to climb rather quickly.

Which is why we factor all three of the above counters into our final Peak Memory Pressure Index calculations. Because when these three counters climb above the thresholds we’ve defined for the WCPI calculation process, it means that your PC really is running out of memory.

Folks, this isn’t rocket science. Anyone with any real experience monitoring Windows performance in the real world – and no, playing with Task Manager or Resource Monitor in your mom’s basement doesn’t qualify – knows we’re right. And now one of our fine readers has done the honor of vindicating us.

Bravo, SirBruce, for not sticking your head in the sand and actually bothering to think before feeding into the frenzy of idiocy that has taken over the blogosphere on this issue.

51 comments:

jonas said...

The source of their confusion is this:

It is common knowledge that this counter (Memory\Committed Bytes), more than any other, provides the most accurate picture of physical memory use under Windows

Actually, committed bytes is one of the worst possible metrics for physical memory usage. You can have a system that is completely out of physical memory, yet has plenty of available commit, and vice versa. As other people mentioned, Available Memory is a much better counter to use when you are interested in physical memory usage.
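
For what it's worth, you don't even need Perfmon to check this. A rough sketch like the one below (plain Win32, not anybody's product code) reads available physical memory directly via GlobalMemoryStatusEx:

    #include <windows.h>
    #include <iostream>

    int main()
    {
        // Ask Windows for a snapshot of physical memory and overall memory load.
        MEMORYSTATUSEX status;
        status.dwLength = sizeof(status);
        if (!GlobalMemoryStatusEx(&status))
            return 1;

        std::cout << "Total physical RAM:     " << status.ullTotalPhys / (1024 * 1024) << " MB\n";
        std::cout << "Available physical RAM: " << status.ullAvailPhys / (1024 * 1024) << " MB\n";
        std::cout << "Memory load:            " << status.dwMemoryLoad << " %\n";
        return 0;
    }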

By the way, Tom's version of what Committed Bytes are is also not quite correct:

The Committed Bytes metric, for instance, is the amount of bytes in memory with stuff in it

Initially, when a page of virtual memory is committed there is no "stuff" in it. Physical storage for the page will be allocated only when it is accessed for the first time. Until then the page doesn't really exist anywhere - all you did by committing it is essentially tell the memory manager "I may decide to write some data to this page later; please make sure you'll have enough RAM or pagefile space to store this data in case I actually decide to write it."

This may seem like nitpicking but it's important to understand that when you see for example 1 GB of commit charge in task manager it doesn't mean there's 1 GB of stuff sitting in the RAM and/or pagefile. It means that if every committed page in the system was written to, you'd have 1 GB of data that would have to be backed by either physical memory or the pagefile.

Research Staff said...

@Mainuddin,

The problem with these naysayers is that our methodology is based both on the documented Microsoft descriptions of how this counter relates to memory exhaustion (just search on Committed Bytes at TechNet) and also our own experiences tracking this, and related metrics, in the field for over a decade.

We didn't just cook this stuff up yesterday. We've been observing these metrics for a long, long time, in some of the most demanding computing environments in the world.

We're correct to flag this counter when it meets or exceeds physical memory, because in almost every instance where it does, excessive VMM paging to/from disk is the net result (a fact that is exposed in the other counters we monitor along with Committed Bytes).

And nothing will ruin your day like a thrashing hard disk full of code and data that's been paged-out to make room because there just wasn't enough space available in RAM.

Unknown said...

Okay, I'm not trying to pick a side here. I was all ready to believe that their program was wrong because of superfetch. However, if they are actually using the Committed Memory counter, then perhaps they are right.


Right about what? Certainly not right about being low on memory.

Windows doesn't overcommit (unlike Linux), so every page allocated has to be charged against the total commit limit of the machine. The commit limit is the (instantaneous) maximum "memory" Windows has: it is the sum of usable physical memory and the current pagefiles. Even if a page is never accessed after being allocated, it still has to be accounted for, and so gets charged against the commit limit. That's what committed bytes is counting.
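
Incidentally, both of those numbers can be read directly from code. Here's a rough sketch using GetPerformanceInfo; note that it reports values in pages, hence the scaling by the page size:

    #include <windows.h>
    #include <psapi.h>
    #include <iostream>

    #pragma comment(lib, "psapi.lib")

    int main()
    {
        PERFORMANCE_INFORMATION info;
        info.cb = sizeof(info);
        if (!GetPerformanceInfo(&info, sizeof(info)))
            return 1;

        // Everything below is reported in pages, so scale to MB via the page size.
        const unsigned long long pageSize = info.PageSize;
        const unsigned long long mb = 1024 * 1024;
        std::cout << "Commit charge:      " << info.CommitTotal * pageSize / mb << " MB\n";
        std::cout << "Commit limit:       " << info.CommitLimit * pageSize / mb << " MB\n";
        std::cout << "Physical total:     " << info.PhysicalTotal * pageSize / mb << " MB\n";
        std::cout << "Physical available: " << info.PhysicalAvailable * pageSize / mb << " MB\n";
        return 0;
    }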

Let's imagine a program commits 1 GiB memory, but never touches it. The commit charge will increase by 1 GiB, as expected. But there won't be 1 GiB of pagefile I/O, nor will available memory drop by 1 GiB. This is actually really easy to test:

#include <windows.h>
#include <iostream>

int main()
{
    // Commit 1 GiB of virtual memory, but never touch it. The commit charge
    // rises by 1 GiB, yet no physical pages are allocated to back it.
    void* memory = ::VirtualAlloc(0, 1024 * 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);

    // Wait for a keypress so the effect can be observed in Task Manager.
    char ch;
    std::cin >> ch;
    return 0;
}

Run that program and then take a look in Task Manager. You'll see that the program's Commit Size column is a little over 1 GiB, and that the system-wide Commit (MB) count leaps by the same amount. But you'll also notice that available memory has decreased only by a handful of MB, and further that the program has only had a few page faults (perhaps 500 or so).

Are we getting a clearer picture now? Committed bytes shows the amount of memory committed, but it doesn't give any indication of how much physical memory is being used (notice how available physical memory barely changes, even though we just made committed bytes shoot up by 1 GiB), nor even of how much pagefile I/O there is. Yes, Windows marks off an extra GiB of memory as used, because Windows doesn't overcommit, but it doesn't cause any performance impact.

If we change the program thus:
#include <windows.h>
#include <iostream>

int main()
{
    // Commit 1 GiB as before...
    void* memory = ::VirtualAlloc(0, 1024 * 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);

    // ...but this time touch one byte in every 4 KiB page, forcing the memory
    // manager to back each page with physical storage (one page fault apiece).
    for (int i = 0; i < 1024 * 1024 * 1024; i += 4096)
    {
        *((unsigned char*)memory + i) = 0;
    }

    char ch;
    std::cin >> ch;
    return 0;
}

Then we see something different. Committed bytes does the same thing. But this time around, available memory starts dropping off a cliff, and the process incurs a lot of extra page faults (about 260,000 of them, in fact). The physical memory usage graph in Task Manager might then undergo a decline, as other things are pushed out to disk.

Committed bytes tells us virtually nothing about physical memory usage and availability. If the system has ample physical memory available (as indicated by the available bytes counter) then it's absurd to claim it is low on memory. I can allocate gigabytes of "committed bytes" without using any memory or causing any paging I/O. It is simply not an appropriate counter to use.

I was thrown a little by the graphs going up to 100%. My (committed bytes / commit limit) is nowhere near 100%. My (committed bytes / physical memory) is sometimes near 100%, and sometimes more than 100%. I presumed that they were not in fact measuring some proportion of committed bytes, however, because the proportion just never made sense. The only way you get close to 100% is "committed bytes / physical memory", but that often goes above 100%, which the chart seemed not to do. Hence I assumed it was in fact "(total - free) / total"; that's the only way the 0-100% range makes sense.

Unknown said...

And nothing will ruin your day like a thrashing hard disk full of code and data that's been paged-out to make room because there just wasn't enough space available in RAM.


But my disk isn't thrashing, in spite of your claims that I am out of memory!

A page that is committed but never read/written causes a page to be "used" in the pagefile, but occupies no physical memory and causes no pagefile I/O.

Your use of committed bytes fails to distinguish between committed bytes that are used, and committed bytes that are not used.

I do acknowledge that I was mistaken about SuperFetch. I assumed that cached files were also charged against Committed Bytes (as they occupy physical memory, though never pagefile space), but it appears that this is not so.

My Pages Output/sec remains low, normally zero (the system is rarely having to spill to disk), and my Pages Input/sec is equally low, normally zero, spiking only when starting programs (as expected). Switching between tasks isn't causing Page Inputs, meaning running processes aren't having to be spooled in from disk.

Research Staff said...

Dr. Pizza,

Which is why we also look at Pages In/sec and % Page File Util and factor those counters, as well as the duration of the event during which they exceeded the thresholds, as we generate the Peak Memory Pressure Index.

Again, we're not idiots here. We know that a high Committed Bytes value, absent significant paging activity, isn't cause for immediate alarm. But when all three metrics begin to climb in concert, and are accompanied by a similar climb in counters like Current Disk Queue Length and Disk Bytes/sec (i.e. two of the factors in our Peak I/O Contention Index), then something is definitely going on that's worth investigating.

And that's exactly what we're seeing on the vast majority of Windows 7 boxes we've been monitoring. Go look at the original post again. Notice how a similar percentage of systems that are experiencing memory pressure are also disk bound. The two indices go hand in hand to tell us that there's a problem here.

FWIW, we may tweak our weighting methodology in the future if/when we see the recorded combinations of the above metrics no longer make sense. Right now, Committed Bytes gets the lion's share of the weighting - 50% - since that has traditionally been the leading indicator in our experience. However, we may eventually decide that we're paying it too much attention and should up the weighting for Pages In/sec, % Page File Util or Event Duration.
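
To make the mechanics a bit more concrete, here is a purely illustrative sketch of how a threshold-and-weight index of this kind can be computed. To be absolutely clear: this is not our production formula. The 50% weighting on Committed Bytes matches what we just described; the remaining weights and every threshold value below are placeholders, not the numbers we actually use.

    #include <algorithm>
    #include <iostream>

    // Illustrative only: a threshold-and-weight score built from the three counters
    // plus event duration. Committed Bytes carries 50% of the weight, as described
    // above; the other weights and all threshold values are placeholders.
    double MemoryPressureIndex(double committedToPhysicalRatio, // Committed Bytes / installed RAM
                               double pagesInPerSec,            // Memory\Pages Input/sec
                               double pageFileUtilPercent,      // Paging File\% Usage
                               double eventDurationSec)         // time spent above thresholds
    {
        // Normalize each factor against a placeholder threshold, capped at 1.0.
        double commit   = std::min(committedToPhysicalRatio / 0.90, 1.0);
        double paging   = std::min(pagesInPerSec / 300.0, 1.0);
        double pagefile = std::min(pageFileUtilPercent / 60.0, 1.0);
        double duration = std::min(eventDurationSec / 60.0, 1.0);

        // Weighted sum, scaled to a 0-100 index.
        return 100.0 * (0.50 * commit + 0.20 * paging + 0.20 * pagefile + 0.10 * duration);
    }

    int main()
    {
        // Example: 95% of RAM committed, 350 pages in/sec, 55% pagefile usage,
        // sustained for two minutes.
        std::cout << MemoryPressureIndex(0.95, 350.0, 55.0, 120.0) << std::endl;
        return 0;
    }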

Our methodology to date definitely skews towards "performance" over all other considerations. Perhaps we'll find that we've been overly aggressive and that a more lax set of thresholds/weighting combinations would more accurately reflect the patterns of more typical PCs. Again, our background is mostly with mission critical performance environments - that and lots of lab time at Intel's Desktop Architecture Labs (DAL) trying to bury their latest and greatest CPUs with ever more complex workloads (always fun).

But our collective "gut" tells us to err on the side of conservatism, and if that causes us to sound alarmist and to bring down the wrath of the Windows fan boy community, then so be it. :-)

quux said...

exo.blog said:

"And, if I really push the system, so that Committed Bytes/”In Use” memory is pegged at or near the 8GB mark (i.e. my PC’s total RAM size). the other critical “Perfmon” counters we record with our agent – Memory\Pages In/sec and Paging File\% Usage – start to climb rather quickly.

Which is why we factor all three of the above counters into our final Peak Memory Pressure Index calculations."


Perhaps if you were to fully explain your PMPI formula, there would be a lot more openness, a lot less confusion, and a lot less hostility in the discussion. I'm just saying.

It's so nice that you guys ignored my post on the 19th, asking for some correlation between committed bytes and page-ins - and nominated SirBruce as the only clueful user in the land.

quux said...

Research Staff said:

"and if that causes us to sound alarmist and to bring down the wrath of the Windows fan boy community, then so be it. :-)"

(Emphasis mine.)

Also, just a thought here: derogatory comments like this might be in line as a response to some of the feedback you have been getting, but by no means all of it. A number of people have discussed the issue forthrightly and in technical terms without stooping to such name-calling or pejoratively pigeonholing all disagreement, as you've done so far.

You're passing yourself off as the professionals here. You might try acting like it!

Randall C. Kennedy said...

@quux,

Sorry if we inadvertently rejected your comment. So much vitriol was sent our way in such a short period of time, we were bound to lose track. Feel free to re-submit whatever it was that you wanted to say and I'll be sure to moderate it into the comments section.

And as for your other comment, I'm stating for the record that we will pull no punches in our coverage. If we're besieged by legions of ill-informed zealots parroting equally ignorant "expert" bloggers (Peter Bright, I'm talking to you), we'll call it like we see it.

This is not a popularity contest. This is hard research of a kind and on a scale never attempted before. We simply do not have time to suffer fools who can't be bothered to do a simple keyword search on TechNet.

quux said...

Randall, you didn't lose my comment. It was published in the responses to the 'Rebutting Ars' story.

Anonymous said...

Your three metrics have serious problems:

Committed bytes - As everyone notes, this doesn't necessarily track actual memory usage

Pages In/sec - To quote Microsoft: "this is one of the most misunderstood measures. A high value for this counter does not necessarily imply that your performance bottleneck is shortage of RAM. The operating system uses the paging system for purposes other than swapping pages due to memory over commitment." How do you know that Windows 7 doesn't use the paging system more frequently for purposes other than swapping pages due to memory contention?

% Page File Util - In Windows 7, the page file is smaller by default. You don't seem to adjust for this in any way.

Unknown said...

Which is why we also look at Pages In/sec and % Page File Util and factor those counters, as well as the duration of the event during which they exceeded the thresholds, as we generate the Peak Memory Pressure Index.

Pages In/sec is irrelevant. You get pages in when you start a program, for example, because the executable code is demand faulted in. Or when a memory-mapped file is read. These aren't indications that the system is low on memory; they're indications that the system demand faults every mapped file (including executables).

Pages Out/sec is at least indicative of stuff being dumped to the pagefile. But until you know that that stuff is then faulted back in, it too is pretty irrelevant. Stuff being paged out but never paged back in isn't going to have much performance impact.
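
You can watch this happen with any large file. The sketch below (the path is only an example; substitute any big file you have) maps a file and touches one byte per page. Pages Input/sec climbs while the machine remains nowhere near low on memory:

    #include <windows.h>
    #include <iostream>

    int main()
    {
        // Open and map a large file read-only. Nothing is read from disk yet.
        // (The path is just an example; any big file will do.)
        HANDLE file = ::CreateFileW(L"C:\\Windows\\explorer.exe", GENERIC_READ,
                                    FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                    FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        DWORD size = ::GetFileSize(file, NULL);
        HANDLE mapping = ::CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (mapping == NULL)
            return 1;

        const unsigned char* view =
            (const unsigned char*)::MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view == NULL)
            return 1;

        // Touch one byte in every 4 KiB page. Each first touch is a page fault that
        // pulls the page in from the file (a hard fault if it isn't already cached) -
        // "page ins" with no memory shortage anywhere in sight.
        unsigned long total = 0;
        for (DWORD i = 0; i < size; i += 4096)
            total += view[i];

        std::cout << total << std::endl;

        ::UnmapViewOfFile(view);
        ::CloseHandle(mapping);
        ::CloseHandle(file);
        return 0;
    }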

Unknown said...

Randall C. Kennedy said:

"This is hard research of a kind and on a scale never attempted before."

Seriously? Dude?

Randall C. Kennedy said...

@Slap,

Let's see...

* 23,517 registered users
* Over 230 Million System Metrics Records
* Over 13 Billion Process Metrics Records
* Online 24x7x365 all over the world

Yeah, I'd say we're serious...

Shub said...

Pages Out/sec is at least indicative of stuff being dumped to the pagefile. But until you know that that stuff is then faulted back in, it too is pretty irrelevant. Stuff being paged out but never paged back in isn't going to have much performance impact.

Even this metric is difficult to analyze programmatically. Windows will swap out long-running inactive processes and use that RAM for disk cache to benefit running processes. If you believe, as I do, that this is the correct decision then distinguishing between "bad hard faults" and "good hard faults" gets even trickier.

Unknown said...

The real fundamental question I have is this:

If Windows is saying that a large proportion of memory is still available--that is, near-instantly available to programs without having to hit the disk--then what do any of these other metrics that you are measuring matter?

If Windows is reporting immediately-usable physical memory, what else matters? In what sense can the system be said to be short on memory if there is ample usable physical memory?

We know that the committed bytes and page ins counters cannot be used for the purpose you are using them for. Available bytes can be, and yet available bytes is insistent that there is not a problem.

Randall C. Kennedy said...

@Dr. Pizza,

To quote Microsoft regarding Available Bytes:

"When RAM is in short supply (e.g. Committed Bytes is greater than installed RAM), the operating system will attempt to keep a certain fraction of installed RAM available for immediate use by copying virtual memory pages that are not in active use to the pagefile. For this reason, this counter will not go to zero and is not necessarily a good indication of whether your system is short of RAM."

So, your argument centers around Available Bytes? Really? When it's well known to be one of the *least* accurate measurements of Windows memory?

Your stock just dropped, my friend...

Unknown said...

We're not just talking a "certain fraction" being kept available here. We're talking hundreds of megabytes being consistently free.

And kept available without having to page stuff out, hence the extremely low page outs value.

Please explain to me how high available memory in conjunction with low pageouts means that I am low on memory.

Yes, I know that the OS will page stuff out if available memory is getting dangerously low. But it's not getting dangerously low. It's hundreds of megabytes, if not more. And the OS isn't paging stuff out, hence the extremely low page outs value.

If available memory were low, and page outs were high, I would quite agree with you--I would be short on memory. But it's not. Available memory is high, and page outs are low. Why do you keep skirting this point?

Unknown said...

Oh, and also, in spite of the documentation's "e.g.", merely having committed bytes greater than available RAM does nothing. It does not cause any paging or any change to available bytes. Again, trivially demonstrated by VirtualAlloc()ing a large chunk of memory and then never touching it. Committed bytes goes up, but no paging occurs and available memory remains the same.

The docs are, as ever, imprecise. The thing that triggers the paging out is not growth of committed bytes, it's a drop in available memory.

Randall C. Kennedy said...

Dr. Pizza,

I'm not skirting anything. I'm just trying to figure out what data set we're referring to. Are you talking about your own artificial test configuration? Or some set of metrics from the exo.repository? Have you actually installed our Tracker agent to see what it says about your box?

Because, on our end, we have multiple data points correlating with each other over many days (in excess of 10,000 unique counter samples per week) to paint a very different picture for most of our Windows 7 users than the one you're describing.

Or are you saying that a system with a Committed Bytes to Physical Memory ratio of 90-95%, a steady stream of paging activity at a rate of 300 or more pages per second, and pagefile growth well beyond the 50-60% mark, is not likely to be short on memory?

Unknown said...

I'm not skirting anything. I'm just trying to figure out what data set we're referring to.
My system during the time I ran the DMS software.

Are you talking about your own artificial test configuration?
It's the system I use all the time, hardly "artificial".

Or some set of metrics from the exo.repository? Have you actually installed our Tracker agent to see what it says about your box?
Given that you or a colleague of yours posted my system statistics in this very blog, I would have thought you could answer that question yourself.

Because, on our end, we have multiple data points correlating with each other over many days (in excess of 10,000 unique counter samples per week) to paint a very different picture for most of our Windows 7 users than the one you're describing.
I'm talking about my machine, which you described as low on memory, even though it is not!

Or are you saying that a system with a Committed Bytes to Physical Memory ratio of 90-95%,
This is not indicative of anything in particular. With its current pagefile, my system supports a ratio of fully 200%. What's special about 90-95%?

a steady stream of paging activity at a rate of 300 or more pages per second,
What do you mean by "paging activity"? "Paging activity" covers a wide range of things, including soft faults (which cause no I/O), paging out of executables (which causes no I/O), amongst other things.

Goodness, even reading from PDH itself causes page faults (take a look in Task Manager; reading from HKEY_PERFORMANCE_DATA/PDH causes page faults to be charged against the process, as even the "synthetic" registry hive is demand-paged).

If I average 300 or more page faults per second, but over the same period average zero page ins and zero page outs, why are you so sure that my system is low on memory?

and pagefile growth well beyond the 50-60% mark, is not likely to be short on memory?
Wouldn't you say that that would be rather influenced by the size of the pagefile? 50-60% usage of a 4 GiB pagefile is a very different matter than 50-60% usage of a 200 MB one, no?

Randall C. Kennedy said...

@Dr. Pizza,

BTW, I'm wondering why nobody has taken us on regarding our follow-up post about Windows 7's superior CPU efficiency vs. Windows Vista and XP.

Lots of juicy numbers to challenge there. What, no takers?

Randall C. Kennedy said...

@Dr. Pizza,

Peter! I should have known! We really should chat offline sometime - so much to talk about! Our number is on the web site...

But seriously, now that I know it's you, I'd like to ask your permission to post a screenshot of your system as it appears in the Report Card view of our commercial analysis portal. I ask because, unlike just quoting a few generic Committed Bytes percentages, this screenshot will show a partial process list from your box.

It's important that we bring this data into the discussion because that list shows the total working set values for the processes you ran on your box while Tracker was installed, as well as the private bytes (i.e. pagefile bytes in Perfmon) values of their non-shared code. And in your case, the working set peaks out at over 3.5GB (i.e. 87.5% of your physical RAM), while your private bytes peaks at just over 3.2GB (80% of RAM).
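
(For reference, the per-process working set and private bytes counters we're talking about can be read for any process via GetProcessMemoryInfo. A rough sketch, not our portal code:)

    #include <windows.h>
    #include <psapi.h>
    #include <iostream>

    #pragma comment(lib, "psapi.lib")

    int main()
    {
        // Query the current process; any handle opened with
        // PROCESS_QUERY_INFORMATION | PROCESS_VM_READ works the same way.
        PROCESS_MEMORY_COUNTERS_EX counters;
        counters.cb = sizeof(counters);
        if (!GetProcessMemoryInfo(GetCurrentProcess(),
                                  (PROCESS_MEMORY_COUNTERS*)&counters,
                                  sizeof(counters)))
            return 1;

        std::cout << "Working set:   " << counters.WorkingSetSize / 1024 << " KB\n";
        std::cout << "Private bytes: " << counters.PrivateUsage / 1024 << " KB\n";
        return 0;
    }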

Let me know if you'd be OK with me posting that screenshot link...

Unknown said...

I'm sure you'll do what you like; you seemed to have few qualms about posting my details before, after all. Post it if you want, it doesn't change the fundamental issue that my system isn't thrashing.

But even 500 MB free memory is a lot. I think talking about percentages is frankly misleading.

I mean, let's just imagine, hypothetically, that I had a machine with 16 GiB of RAM. Even at 90% usage I would still have 1.6 GiB free (enough to run many games without so much as a single page out). Can such a system really be described as low on memory? I don't think so.

In essence, I think that with today's large memory systems (4-6 GiB is not uncommon) one can simultaneously have lots of free memory and high memory usage. I mean, in a way it's kind of obvious; that's the point of having lots of physical RAM in the first place. You can do all these things that demand lots of memory and still have plenty left over.

Anyway, I'm sure you'll post pictures of working sets, or whatever else, but again I ask:
since my system has high available bytes but low pageouts (so it is not pushing stuff out to disk in an attempt to maintain a few MB of available memory), how can it be described as short on memory?

Randall C. Kennedy said...

Peter,

I think I know why your box isn't thrashing: We never flagged your system as one of the machines in question.

In fact, you shouldn't have been flagged at all - I just assumed that you were since you posted a screenshot of System Monitor with the red bar. But when I run your system through our much more sophisticated Report Card template, you score a meager 69 on the Peak Memory Pressure Index scale.

Yes, you have a relatively high Committed Bytes value, but this is negated by your lack of significant paging activity - which is exactly how our template is designed to work, weeding out the real problem systems with multiple threshold violations from those that merely have a spike or high value in a single category. And since System Monitor and Report Card use the same underlying index calculation logic, it never occurred to me to doubt your screenshot.

Let me check something with System Monitor...

...yep, it's reading green on our end, just like the Report Card view. Can you double check it on your end? System Monitor does allow you to modify the threshold values (but not the weighting) of the various contributing factors. Perhaps you adjusted one inadvertently before displaying the Results pane? This would then lead you to believe we had somehow flagged your system (an assertion I accepted at face value from your screenshot - big mistake!) when in fact, according to the methodology we employed in our original post, you should have escaped scrutiny altogether.

FYI, we're using the "Normal" profile for all of our bulk calculations (i.e. the scripts we run to query the repository en masse), which should be the default setting for System Monitor. Take a look and get back to me, OK?

Thanks!

Anonymous said...

But seriously, now that I know it's you, I'd like to ask your permission to post a screenshot of your system as it appears in the Report Card view of our commercial analysis portal.

Ahaha, why ask for permission now when you've already posted information about his computer without his permission? It boggles the mind.

Pavel Lebedinsky said...

Randall,

The KB article you quoted (http://support.microsoft.com/kb/555223) is not official Microsoft documentation. It's community generated content (check who the author is and the big disclaimer at the bottom).

As DrPizza said, committed bytes being larger than total RAM does *not* necessarily mean the system is low on memory or is paging. In many cases this situation is perfectly normal.

Unknown said...

In fact, you shouldn't have been flagged at all - I just assumed that you were since you posted a screenshot of System Monitor with the red bar. But when I run your system through our much more sophisticated Report Card template, you score a meager 69 on the Peak Memory Pressure Index scale.


I've since uninstalled your software, but I just used the default configuration.

All I did was install it, use my system normally whilst checking periodically to see if I had a graph available, then took a screenshot as soon as a graph popped up. Red and 100%, it said, in spite of plenty of available bytes and negligible page outs.

Unknown said...

Well let me just say thanks for recognizing my post and sorry for not responding sooner; I had forgotten all about my comment! Again, I'm not trying to pick sides, and my days doing IT are long past me, so I have no position on what's exactly the best way to evaluate free memory on a Windows system or the quality of your software. But at least I knocked down the myth that your measurement was grossly erroneous.

As for people who are coming up with edge cases where Committed Bytes isn't the best measure, I suspect such events rarely occur for the vast majority of Windows users. I'm sure Committed Bytes is a very useful rule of thumb, which is probably why it's reported in the Windows monitoring tools to start with.

Randall C. Kennedy said...

Peter,

You can still access the widget and review your data, which will remain in the archive for seven days before it's automatically purged. Here's the link to the widgets page:

Widgets Page

Since you already posted the screenshot on your own site, I figure posting a follow-up here shouldn't pose a problem for you, so here's a link to what I'm seeing on my end:

Peter's PC in System Monitor

Looks a little different than what you posted on your end. Methinks there was some sort of misconfiguration when you ran the widget and took your screenshot. Regardless, now I know why I had such a hard time figuring out why you had been flagged like that - it's because you hadn't! :-)

Anyway, thanks for your help - if it wasn't for you participating here at the exo.blog, I wouldn't have known to look deeper and the whole mystery of your PC's bogus reading would have remained unresolved.

Brandon said...
This comment has been removed by the author.
Unknown said...

Hmmmm!

Click me!

The other day is mysteriously green. But today remains red. In spite of, well, the absurdity of such a claim. Gobs of memory available.

Misconfiguration indeed!

Unknown said...

As for people who are coming up with edge cases where Committed Bytes isn't the best measure, I suspect such events rarely occur for the vast majority of Windows users. I'm sure Committed Bytes is a very useful rule of thumb, which is probably why it's reported in the Windows monitoring tools to start with.

I don't agree. The situations in which committed bytes greatly disagrees with available memory are commonplace and standard.

Committed bytes is a great measure of how much memory you have allocated that's backed by physical memory and pagefile.

It's not a great measure of anything else. Because it's not meant to be!

Unknown said...

Seems to me like you guys intentionally chose the most beneficial interpretation of a complex idea. Then again I don't understand why people would pay for what Windows does better for free.

It's almost as if you are just making a pretty UI for the Windows Performance Monitor.

BlakeyRat said...

Does someone want to link to this miraculous post by SirBruce? It seems pretty daft for us all to be debating a report we haven't seen. (Unless I just missed the link?)

Erik said...

Epic troll, you got everyone to defend Windows 7 memory usage for an entire week. Bravo. Oh, sorry you got fired for being a liar. http://www.infoworld.com/d/adventures-in-it/unfortunate-ending-357

Yaos said...

Just thought you guys should know this guy is a liar! http://infoworld.com/d/adventures-in-it/unfortunate-ending-357

quux said...

There are suddenly much larger concerns about this information and its source:

Randall C. Kennedy fired and deleted from InfoWorld: http://www.infoworld.com/d/adventures-in-it/unfortunate-ending-357

ZDNet calling the credibility of Kennedy and "Barth" (who apparently never really existed) into question: http://blogs.zdnet.com/BTL/?p=31024

Unknown said...

While I think the methodological debate should continue, I want to take a different look at this issue. While XPNet has found 80% of Win7 PCs experiencing bottlenecking due to heavy use of RAM, many users have a smooth experience with Win7 (OK, I don’t have a concrete number). Why is there a big discrepancy?
1. It may be the case that XPNet’s finding is relevant only to mission-critical computing and thus has no practical impact for the vast majority of users.
2. It may be the case that Win7 is indeed "underperforming" but most end-users’ experience is "good enough". In other words, Win7 may potentially be faster with the given resources, but most users are satisfied with the current performance.
3. Most users are just duped by Microsoft.
In any case, I think XPNet (and Computerworld) could have been clearer about its interpretation of the results.

Brandon said...

I described how to properly measure physical memory usage in Windows 7 here:

http://brandonlive.com/2010/02/21/measuring-memory-usage-in-windows-7/

brad77 said...

And since System Monitor and Report Card use the same underlying index calculation logic, it never occurred to me to doubt your screenshot.

Let me check something with System Monitor...

...yep, it's reading green on our end, just like the Report Card view. [...] This would then lead you to believe we had somehow flagged your system (an assertion I accepted at face value from your screenshot - big mistake!) when in fact, according to the methodology we employed in our original post, you should have escaped scrutiny altogether.


Wait, what? You judged based on his screen shot, not from the data you had collected? From the text of your "rebutting Ars" article, which you have removed:

And after reviewing this data, it became clear why our System Monitor widget flagged his system as being low on memory.

Did you or did you not review his data at the time? I'm confused.

Andy Babiec said...

I'm still waiting for a rebuttal to the screenshot DrPizza posted...

http://episteme.arstechnica.com/evefiles/photo_albums/2/9/9/299008743041/499008743041_A9CC4E129D0AF986B1A70A20513AC715.png

Unknown said...

I'm also waiting for the response to Dr. Pizza's screenshot.

Also, I still don't get why SirBruce is so praised in this post. His comment shows that his Windows 7 PC was using about 50% of system memory (excluding cache). This is nowhere close to "max out memory" or "running alarmingly low".

Unknown said...

Don,

The point of my post was to show that pages from SuperFetch are classified as "Standby" while actual "Available" memory in Windows 7 is quite low. However, Standby memory is *not* counted by the Committed Bytes counter.

The initial attack against the XPN software was that it was simply checking the "Available" number and that's why they thought Windows 7 machines were low on memory, because they didn't account for SuperFetch. But they say they are using the Committed Bytes counter, and I demonstrated that SuperFetch memory doesn't get counted as Committed Bytes, so whatever XPN's sins are, that was not one of them. And the reporter who assumed that was their sin didn't do enough research before speculating.

Unknown said...

OK, I have two systems here. One is a desktop (built in 2003) with 1.0 GB of RAM (DDR-2700), and the other is a laptop (bought in 2007) with 1.5 GB of RAM.

The desktop shows "735MB/2047" for Commit (MB) (there's nothing running except Windows 7 and a couple of tray apps).

The laptop shows "2275/2812" for Commit (MB) (both of these are per Task Manager only). By your standards I should be out of memory on the laptop (and yes, Task Manager does say 90% used).

Now, for what I have open on the laptop. Active on my desktop:
Google Chrome with 7 tabs open, Microsoft Outlook 2007, Windows Live Messenger, Task Manager, Performance Monitor (which shows 80% for committed memory).

Other applications that are open (but not visible on the desktop): TweetDeck (which has been open for 12 hours), Toshiba's ConfigFree App, Windows Home Server Connector, SpeedFan, WeathAlert, Microsoft Security Essentials, StopZilla, HotSpot Shield, RoboForm, Kleopatra, Spambayes, and OneNote 2007.

So, you're saying that Windows 7 is the memory hog, when on my other system it is only using 43% of memory? (The other system has only the Lexmark Print controller and Windows Home Server Connector running.)

Seems to me that the large number of applications that I have open are more of a memory hog than Windows.

Of course, I'm on older 32-bit hardware, not that fancy new 64-bit with lots of memory available. So maybe Windows 7 is giving me a break. I'm not sure. But wait... There's more...

I just opened Sun VirtualBox running Windows XP with 128 MB assigned on the laptop. And I'm still typing this out. I haven't closed anything else.

So, given all of this information, at what point did you say that I am running out of memory? Because it sure doesn't seem like it on my end.

Have a great day:)
Patrick.

Unknown said...

And the reporter who assumed that was their sin didn't do enough research before speculating.

I agree, and for that I apologize. Nonetheless, the thrust of my argument is correct: the counter he is using is not sufficient to make the claims he is making. It is perverse to call a machine with more than a gigabyte of memory on the standby and free lists "low on memory".

Unknown said...

Thank you, SirBruce, for the clarification. So XPNet knows about SuperFetch, and that was supposedly not counted as used memory when they claimed most Win7 PCs max out memory (even though they recommend we read the classic on NT).

Well, the "mystery" continues and let's see if they respond to Dr. Pizza's screenshot.

Unknown said...

Still no explanation for my screenshot?

Randall C. Kennedy said...

Peter,

I'll look into it when I have time. Been a bit busy, what with the WSJ interview and all...plus, I've got to sleep *sometime*.

Don't worry, I've pulled your latest raw data and I'll be combing through it tomorrow at the latest...

RCK

Unknown said...

I'll look into it when I have time. Been a bit busy, what with the WSJ interview and all...plus, I've got to sleep *sometime*.

I'm sure that interview will go well when they check with your references over at InfoWorld. The real question is, whose name is on the application? Randall or Craig?

On a serious note, I will be eager to see the rebuttal because, jabs aside, I enjoy reading about technical arguments.

My side question about the application you make is this: why does your site claim that your application sends my information out to you via HTTPS, when in fact it does not? Don't you think that in doing so, you're setting yourself up for someone to intercept that unsecured data flowing into your system? What would stop someone from figuring out a way to infect all of the people who have your client installed by creating some malware to respond back to your application?

Just some thoughts. Last thought: your word verification had me type 'apimp'. I had a giggle at that.

Anonymous said...

We're losing sight of MainuddinAhmad's point at the top of this comment page:

Regardless of what other measurements are being looked at, there is still no good reason to believe that Committed Bytes is useful for detecting cases of memory pressure. And many reasons for believing otherwise.

(The reasons offered so far for using "Committed bytes" amount to "everyone knows that" and "we have all this experience with our clients' machines running our software that looks at this counter.")

Let's say it again:

If I VirtualAlloc a block of v.m. with MEM_COMMIT, that will show up as "committed." But if I then proceed to actually touch only a small part of that region, it will use only as much RAM as I actually access... at most.

This is trivial to test. Call VirtualAlloc while watching the Committed Bytes and Available Bytes counters. The former will show the effect of the VirtualAlloc call... but there will be no effect on Available Bytes.

And, yes, devs very often will VirtualAlloc as much as they think they'll ever need, and then only use a part of it.

And on the other side of things, the v.a.s. used by file mappings is not included in Committed Bytes... but a portion of it will typically be paged in, i.e. using RAM.

If you must consider "RAM being used" then you really should be looking at "Available" instead.

About the only thing "Committed bytes" is good for is figuring out if your pagefile is large enough.

Brandon said...

@Jamie -

You are largely correct; however, file mappings CAN contribute to Commit in specific cases.

The whole point is, Commit doesn't measure usage. It measures what Windows has promised it can provide. Commit is only useful for Windows to track against its Commit Limit, which is the sum of physical memory and the current page file size. Windows cannot promise (or "commit" to) more than that at any given time, so it won't. But just promising that a range of virtual memory *can* be used does not mean it will be used, which is why Windows does no work at all to initialize or provide memory at those addresses until the first time they are accessed.

I explained all of this here:

http://brandonlive.com/2010/02/21/measuring-memory-usage-in-windows-7/