Thursday, March 11, 2010

(Editorial): Incriminating Email Sinks InfoWorld

This is a post I hoped I wouldn’t have to write. After watching for over two weeks as InfoWorld misrepresented the particulars of my resignation as Enterprise Desktop blogger, I feel I’ve given the publication ample opportunity to come clean and admit they were complicit in the Craig Barth ruse.

Since it’s now clear that no such admission is forthcoming, I feel I have no choice but to release the following email exchange which demonstrates, without question, that InfoWorld Executive Editor Galen Gruman knew of my ownership of the exo.blog; that he knew I was using it as source material for my InfoWorld blog; and that Gregg Keizer was regularly quoting from the exo.blog and its sole public “representative,” Craig Barth. Moreover, it shows that Galen was willing to assist me in promoting the very same exo.blog content, even going so far as to offer to use InfoWorld staff writers to create news copy based on my numbers.

And lest he somehow try to escape scrutiny by passing all blame down to his subordinate, I’ve also included an email from InfoWorld Editor in Chief Eric Knorr (i.e. the guy who “fired” me with much sanctimonious aplomb) which shows that he, too, was clearly aware of my ownership of all sites, content and materials associated with the exo.performance.network.

Note: Conspiracy theorists, please scroll to the bottom of this post for the requisite SMTP header disclosure.

But first, I need to set up the timeline:

The following exchange took place roughly one week before the infamous ComputerWorld Windows 7 memory article. I was pushing Galen to let me run another story in InfoWorld that would reference the exo.blog data on browser market share, and he replied by pointing out that I had done something similar just a few weeks earlier, with Gregg Keizer quoting from my exo.blog. He also spoke of assigning staff writer Paul Krill to begin quoting from my exo data as part of a new regular news component for InfoWorld.

I responded that I had no problem with him having Paul Krill write up my usage data, but I requested that he approach it the same way that Gregg [Keizer] had in the article Galen had referenced earlier and thus not use my name but instead refer to xpnet.com as a separate entity. I closed the paragraph by emphasizing how I was determined to curry favor with every beat writer I could find, including Gregg Keizer, and how there would be no real overlap between the data I published in my blog (i.e. the exo.blog) and what I did for InfoWorld.

A couple of items to note:

  1. At no time prior to the “scandal” breaking did I, Randall C. Kennedy, publish any research articles through the exo.blog. All articles were published through the generic “Research Staff” account and subsequently fronted to the media by the fictitious Craig Barth.
  2. Galen Gruman clearly acknowledges my ownership of both the exo.performance.network and the exo.blog, and notes how he saw my data being referenced in ComputerWorld. Yet at no time did I, Randall C. Kennedy, ever represent the exo.blog, xpnet.com or Devil Mountain Software, Inc., to the media outside of InfoWorld, nor was I ever directly quoted as such in any of Gregg Keizer’s articles or articles from other authors or publications.

    So, in the Gregg Keizer-authored article that he acknowledges reading, Galen would have seen copious references to a “Craig Barth, CTO.” And yet he makes no mention of this contradiction. Nor does he take issue with my repeated claims to be both the owner of the exo.blog and the person seeking to get Gregg Keizer and others to quote me (in the guise of Craig Barth) and the exo.blog’s content.

Needless to say, this is damning to InfoWorld. Not only does it show that members of their senior executive staff were aware of the ruse, but also that they were actively working with me to take advantage of the situation, to the benefit of both parties: me, by allowing me to quote from the Craig Barth-fronted exo.blog’s content in my Enterprise Desktop blog; and InfoWorld, by assigning a staff writer to create original InfoWorld content based on the exo.performance.network’s data and conclusions.

Important: I am not seeking absolution here. I accept that what I did crossed the line, no matter how benign my intentions. Rather, what I am doing here is exposing the lie behind InfoWorld’s high-handed dismissal of their leading blogger because, as they claim, he “lied to them.” On the contrary, InfoWorld was in on the ruse from the earliest stages, and the fact that they looked the other way – and then tried to cover their exposure by hanging me out to dry – speaks volumes about the “ethics” and “integrity” of InfoWorld as a publication and of IDG as a whole.

So, without further ado, here is the email exchange that InfoWorld wishes never took place. The following excerpts were taken directly from the last reply in a series of four emails that passed between Galen Gruman and myself. I’ve highlighted the most relevant bits in red so that they stand out from the bulk of the email text. I’ve also re-ordered the original messages and replies so that they proceed, from the top down, in chronological order vs. the normal bottom-up threading applied by Outlook.

Finally, I’ve added breaks/numbered headers identifying each stage of the conversation. You can see the original, unformatted contents of the email thread by clicking this link.

Note: The SMTP header is included at the bottom of this post showing the date, time and mail delivery path taken by the 2nd Reply From Galen message from which the various thread sections below were excerpted.


1. The Initial email from me:

From: Randall C. Kennedy [rck@xpnet.com]
Sent: Wednesday, February 10, 2010 3:17 AM
To: Galen_Gruman@infoworld.com
Subject: Story Idea - Browser Trends

Galen, 
  
Here’s a story idea: A look at browser usage trends within enterprises, using the repository data to back-up our analysis/conclusions. Here are a couple of examples that we could build off of: 
  
 http://exo-blog.blogspot.com/2010/02/ies-enterprise-resiliency-one.html 
  
 http://exo-blog.blogspot.com/2009/09/ie-market-share-holding-steady-in.html 
  
Could be a good, in depth, research-y type of article, one that pokes holes in assumptions about IE’s decline, the nature of intranet web use, etc. 

Anyway, let me know if you think this is worth exploring…thanks! 

RCK

2. Galen’s First Reply:

From: Galen_Gruman@infoworld.com
Sent: Wednesday, February 10, 2010 10:51 AM
To: rck@xpnet.com
Subject: Re: Story Idea - Browser Trends
 
Hey Randy. 
 
You covered the IE aspect last fall in your IW blog, and I saw that CW did a story along these lines based on your exo blog. So I'm not sure there's a further story with that theme. 
 
I'd be more interested in what it takes for IT to move off of IE6. MS is encouraging them to do so, and Google and others are now saying they're dropping IE6 support. Of course, reworking IE6-dependent internal apps means spending time and money, so IT has an incentive to not do anything for as long as it can avid doing so. Unless there's a forced requirement to shift (I can't imagine companies saying users MUST use IE6 for legacy stuff and Firefox for Google and other providers' services, and I don't believe you can run or even install multiple versions of IE at the same time in Windows). Or maybe the shift away from IE6 is not as hard as IT may fear. What does it take? What could force IT to pull the trigger? 
 
FYI I've asked Paul Krill to come up with a regular news story based on the exo data, much like how Gregg Keizer does the regular stories based on NetApplications data. It makes sense for IW to do the market monitoring stories from a business-oriented data pool, and that would also create news stories that are syndicated that highlight the exo system. I'm not sure if Paul should or can do something as predictable as Gregg always (he always does a Web browser share story and an OS share story), so thoughts welcome on that. Of course, your data is available all the time, so monthly is arbitrary; NetApp releases the data monthly, giving Gregg a predictable but inflexible schedule.


--
Galen Gruman
Executive Editor, InfoWorld
galen_gruman@infoworld.com
(415) 978-3204
501 Second St., San Francisco, CA 94107

3. My Reply to Galen:

From: Randall C. Kennedy [rck@xpnet.com]
Sent: Wednesday, February 10, 2010 2:44 PM
To: 'Galen_Gruman@infoworld.com'
Subject: RE: Story Idea - Browser Trends

Galen,
 
  I have no problem with Paul Krill or anyone else writing-up my usage data (which took some very clever programming, not to mention some extensive data shaping/massaging, to generate). However, I’d appreciate it if he could follow Gregg’s lead and reference/link to the exo.blog and xpnet.com as the source of the data (i.e “Researcher’s from our affiliate, the exo.performance.network, have discovered…etc.”). This will help me to further establish xpnet.com as a distinct research brand and to begin to create a parallel distribution mechanism for my original content – one I can monetize through sponsorships, etc. It’s important that I raise the public profile of xpnet.com, and that means currying the attention of every beat writer, pundit or analyst I can find, both inside and outside of IDG. So if Gregg, or Paul – or even Ed Bott or Ina Fried – want to quote my stuff, I’m thrilled.
 
  Fortunately, with InfoWorld mired in a “race to the bottom” (as Doug calls it) of the tabloid journalism barrel, there really should be little overlap between the kind of hard, data-driven research I’ll be publishing through my own blog and the “push their buttons again” format that seems to best fit InfoWorld’s mad grab for page views (quality be damned). Call it synergy. :)
 
RCK

4. Galen’s Second Reply:

From: Galen_Gruman@infoworld.com
Sent: Wednesday, February 10, 2010 11:13 PM
To: rck@xpnet.com
Subject: Re: Story Idea - Browser Trends

That was the idea.
 
And I take exception to the "race to the bottom" comment. Doug of all people should know that's not true. Otherwise, there would be no Test Center, never mind the massive investment it takes relative to its page views. That's a quality play. And even our populist features are good quality, not throw-away commentary.
 
You can push buttons based on real issues and concerns -- your Windows 7 efforts in 2008-09 showed that -- not on Glenn Beck-style crap.
 
If we wanted to go the "race to the bottom" approach, believe me, there'd be no Test Center, no features, and no blogs like yours, mine, McAllister's, Prigge's, Venezia's, Bruzzese's, Grimes', Heller's, Samson's, Rodrigues's, Lewis's, Marshall's, Linthicum's, Tynan-Wood's, Knorr's, Babb's, or Snyder's -- oh, wait, that's 90% of what we publish. Most of the other 10% -- Cringely and Anonymous -- are populist but hardly low-brow. That leaves the weekend slideshows from our sister pubs in the "race to the bottom" category.
-- 
 
Galen Gruman
Executive Editor, InfoWorld.com
galen_gruman@infoworld.com
(415) 978-3204
501 Second St., San Francisco, CA 94107

 


So far, I’ve demonstrated that Galen Gruman was intimately aware of the nature and structure of Devil Mountain Software, Inc., the exo.blog and my relationship to both. However, he wasn’t alone. Editor in Chief Eric Knorr was equally familiar with DMS and all aspects of my research publishing activities outside the IDG fold. Which is why, when he heard of the controversy over the infamous Windows 7 “memory hog” article at CW – an article that quoted Craig Barth extensively and made absolutely zero mention of Randall C. Kennedy – his first reaction was to send me the following email:

From: eric_knorr@infoworld.com
Sent: Friday, February 19, 2010 11:43 AM
To: randall_kennedy@infoworld.com; Galen_Gruman@infoworld.com; Doug_Dineley@infoworld.com
Subject: seen this?

 

http://blogs.zdnet.com/hardware/?p=7389%20&tag=content;wrapper

 

Is it war?

All of which raises the question: If Eric was as ignorant of the Craig Barth ruse as he claims to be, why would his first reaction to reading a story about some controversy involving Craig Barth and the exo.performance.network be to contact me?

The answer is simple: Because he knew. Just as Galen Gruman knew. They both were complicit in perpetuating the myth of the Craig Barth persona. And since they represent two-thirds of the senior editorial brain trust at InfoWorld (Doug Dineley was never clued in), this means that, for all intents and purposes, the publication known as InfoWorld was directly supporting my efforts to obscure my relationship to the exo.blog - the very same exo.blog they both kept commenting on in the above email exchanges - by allowing me to put forth a fictitious character as the CTO of DMS.


And here, to address the inevitable conspiracy theorist challenges to the validity of the above email record, is the header info for this particular message. The message was sent as a response to me, Randall C. Kennedy (rck@xpnet.com) from Galen Gruman (Galen_Gruman@infoworld.com) on February 10th, 2010.

Return-Path: <Galen_Gruman@infoworld.com>
Delivered-To: rck@1878842.2041975
Received: (qmail 21591 invoked by uid 78); 10 Feb 2010 18:12:51 -0000
Received: from unknown (HELO cloudmark1) (10.49.16.95)
  by 0 with SMTP; 10 Feb 2010 18:12:51 -0000
Return-Path: <Galen_Gruman@infoworld.com>
Received: from [66.186.113.124] ([66.186.113.124:49966] helo=usmaedg01.idgone.int)
by cm-mr20 (envelope-from <Galen_Gruman@infoworld.com>)
(ecelerity 2.2.2.41 r(31179/31189)) with ESMTP
id A4/73-22117-327F27B4; Wed, 10 Feb 2010 13:12:51 -0500
Received: from usmahub02.idgone.int (172.25.1.24) by usmaedg01.idgone.int
(172.16.10.124) with Microsoft SMTP Server (TLS) id 8.1.393.1; Wed, 10 Feb
2010 13:11:52 -0500
Received: from USMACCR01.idgone.int ([172.25.1.25]) by usmahub02.idgone.int
([172.25.1.24]) with mapi; Wed, 10 Feb 2010 13:11:57 -0500
From: <Galen_Gruman@infoworld.com>
To: <rck@xpnet.com>
Date: Wed, 10 Feb 2010 13:12:45 -0500
Subject: Re: Story Idea - Browser Trends
Thread-Topic: Story Idea - Browser Trends
Thread-Index: AQH7Zdt+Sb1cTPao5c3kdPVIphPlggLM2MUMAeHPlZWROLhPpw==
Message-ID: <C798371E.20D5A%galen_gruman@infoworld.com>
In-Reply-To: <003e01caaa35$a0b97de0$e22c79a0$@xpnet.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Microsoft-Entourage/13.3.0.091002
acceptlanguage: en-US
Content-Type: multipart/alternative;
boundary="_000_C798371E20D5Agalengrumaninfoworldcom_"
MIME-Version: 1.0

Enjoy the carnage!

RCK


Figure 1 – Get This and Similar Charts at www.xpnet.com


Wednesday, March 10, 2010

(Trends): Windows 7 Doubles Market Share in Q1

Windows 7 is on a tear. The latest statistics from the exo.repository show Microsoft’s newest client OS installed across 11% of participating systems. That’s over double the percentage that reported running Windows 7 at the beginning of the year, and the fastest penetration of any Microsoft desktop OS on record.

Figure 1 - Community OS Share Today

The big loser in this sea change is Windows XP, which saw its usage share drop by over 3 percentage points. Windows 7’s immediate predecessor, Windows Vista, took a similar hit, dropping nearly 3 percentage points in Q1, 2010. Over the past six months, Vista has dropped nearly five percentage points, again mostly due to heavy adoption of Windows 7.

Figure 2 - Community OS Share on 1/4/2010

That Microsoft’s new OS could make these kinds of inroads, in such a short period of time, is a testament to the product’s popularity. If this trend continues, Windows 7 will easily overtake Windows Vista as the second most widely installed desktop OS by the end of 2010, and could potentially displace Windows XP atop the OS usage share heap by sometime in early 2012.

Since this will also be the timeframe in which Microsoft begins ramping up the “Windows 8” hype machine, it will be interesting to see if the company ends up facing a similar situation to the one it faced with XP vs. Vista: A wildly popular OS that everyone is satisfied with and that nobody feels any urgent need to upgrade from. When that happens, the wheels may once again come off the Windows upgrade treadmill, forcing Microsoft to find even more creative ways to convince its customers that, while Windows 7 is great, they really do need the “next big thing.”

Note: The above statistics were generated from the over 240 million system metrics records collected from the over 24,000 registered, active xpnet.com users. If you’d like more information about the exo.performance.network, including how to reproduce the above chart object (for free) on your own site or weblog, please visit our web site: www.xpnet.com.


Monday, March 8, 2010

(Editorial): When Microsoft Breaks Windows

Life as a Windows developer has its ups and downs. On the positive side, you’re associated with the most popular computing platform in history, which translates into lots of potential clients. But this also means you’re subject to the design whims of a notoriously proprietary software company. And as often as not, these changes come back to bite you in the most unusual places.

Take my most recent case: Our largest commercial client, a financial services firm, asked us to modify our DMS Clarity Tracker agent to collect the GDI Object count for each running process. FYI, Tracker already collects a variety of process metrics, including critical CPU utilization and memory counters. However, since this client had some bad experiences with GDI Object handle leaks in the past, they were eager to see this metric added to our collection pool. And since this firm is our best commercial customer, with thousands of seats licensed across one of their largest divisions, we were eager to assist them.

And thus began my odyssey into the wonderful world of forgotten Win32 APIs. The trouble started when I began researching how to collect the GDI Object counter value. Since it’s not part of the regular process object in Performance Monitor, I was forced to step outside our normal methodology (i.e. use PDH.DLL to create a handle to the desired perfmon counter) and look at alternatives.

The first (and really, the only practical) suggestion I encountered was to use the Win32 API’s GetGuiResources function to read the value directly from the target process in memory. However, since our current agent architecture requires sampling the surrounding environment once every second (and then averaging the collected values every 15 seconds), I was understandably concerned about overhead. The idea of executing multiple (50-90 or more, depending on the task list) OpenProcess and GetGuiResources calls in quick succession, every second, gave me pause. After all, these calls aren’t necessarily optimized for low overhead the way the aforementioned PDH calls are, and I thought I might have to back off on the granularity and simply collect the values once every 15 seconds as an instant value.

Fortunately, the APIs proved to be quite lightweight, and I was able to quickly construct a routine that paralleled our normal PDH collection method, but calling the OpenProcess and GetGuiResources functions instead of the various PdhQuery functions we used for the other, PDH-based counters. The net result was an elegant solution that grabbed the data we needed and integrated it seamlessly with our existing collection model.
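A rough reconstruction of what that routine looks like (my sketch, not the actual Tracker source; EnumProcesses stands in for however the agent walks its task list, and error handling is trimmed):

```c
/* Windows-only sketch: read the GDI Object count for every process.
   Link against user32.lib and psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    DWORD pids[1024], bytes;

    if (!EnumProcesses(pids, sizeof(pids), &bytes))
        return 1;

    for (DWORD i = 0; i < bytes / sizeof(DWORD); i++) {
        /* GetGuiResources requires PROCESS_QUERY_INFORMATION access */
        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, pids[i]);
        if (!h)
            continue;  /* access denied, process exited, etc. */

        DWORD gdi = GetGuiResources(h, GR_GDIOBJECTS);
        printf("pid %lu: %lu GDI objects\n", pids[i], gdi);
        CloseHandle(h);
    }
    return 0;
}
```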

And more importantly, it worked – at least at first. Running from within the Visual Studio IDE, our new GDI Object collection logic functioned flawlessly. However, when we took the agent out of the IDE and compiled it to run in its native environment – as a service executing under the LocalSystem account – the GDI Object logic broke down. Instead of getting the desired count values, the GetGuiResources function returned zeros for nearly every process.

I say nearly every process because, for a handful of tasks – most notably, those running as services that also consumed GDI Objects (i.e. not very many – GDI is mostly for interactive apps) – the function returned what seemed to be valid data. Worse still, the collection code worked perfectly under Windows XP, both interactively and as a service. It only broke down when we deployed the agent under Vista or Windows 7, and then only if we ran the agent as a service under the LocalSystem account.

I didn’t know it at the time, but I was about to start down a slippery slope into Win32 API debugging hell. My first theory was that it was a permissions issue. My OpenProcess calls must have been failing due to Vista/7’s tighter IPC security. However, a check of the LastError value showed no faults. And when I subsequently tested to see if I could read other, non-GDI metrics from the process – for example, using GetProcessMemoryInfo to read its Working Set counter – my calls succeeded each time, using the same handle that was failing with GetGuiResources.

I could even terminate the target process – running as LocalSystem gave me free rein over the system. However, no matter what I tried, I could not get GetGuiResources to return valid data. Another check of LastError, this time for the GetGuiResources call itself, left me even more confused. It reported a result code of INVALID PARAMETER, which made no sense since the only two parameters the function accepts are the (now confirmed valid) process handle and the requested resource type (GDI or User object count). It was a real hair-pulling moment.

Eventually, I tried enough variations of the above methodology that a pattern began to emerge from the madness. For example, if I ran the code interactively on the desktop, it would dutifully record the GDI Object counts for all of the interactive tasks (e.g. explorer.exe and whatever else was running on the task bar or in the system tray). And when I ran the code as a service – either under the LocalSystem account or using an Administrator-level user account – it would record GDI Object count values only for tasks that were running as non-interactive services.

It was then that the light bulb finally came on. I remembered reading how Vista (and Windows 7) tighten security by moving all interactive user tasks into a second console session (Session 1) and away from the primary console session (Session 0), which was now dedicated solely to non-interactive services. The idea was to eliminate the kind of backdoor vector that led to the infamous “shatter attack” exploit under Windows XP. By isolating service processes to a separate console session, and prohibiting them from interacting with the user’s desktop (which was now running in a different console session), they could suppress such attacks and reduce Windows’ exposed surface area.

Of course, such a radical change introduced some notable compatibility issues. For starters, services that relied on the “Allow service to interact with desktop” option were immediately cut off from the users they were trying to interact with. And, apparently, the move to a dedicated services Session 0 also had the effect of breaking the GetGuiResources API call when executing across session boundaries. So while my agent service running in Session 0 could attach to the processes of, and read data from, tasks running in Session 1 (or any other user session), any attempt to read the GDI Object counter data off of these processes failed – ostensibly because the User and GDI resources these tasks rely on exist solely inside of the separate, isolated user session.
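If that theory holds, one defensive tweak (again a sketch, not something I’ve shipped) would be to check session membership up front using ProcessIdToSessionId – the documented call for mapping a PID to its console session – and flag cross-session GDI reads as unreliable rather than trusting the zeros:

```c
/* Windows-only sketch: detect when a target process lives in a
   different console session than the caller (e.g. a Session 0
   service sampling a Session 1 interactive app). */
#include <windows.h>
#include <stdio.h>

static BOOL in_our_session(DWORD pid)
{
    DWORD ours = 0, theirs = 0;

    ProcessIdToSessionId(GetCurrentProcessId(), &ours);
    if (!ProcessIdToSessionId(pid, &theirs))
        return FALSE;
    return ours == theirs;
}

int main(void)
{
    DWORD pid = GetCurrentProcessId();  /* trivially in our own session */
    printf("pid %lu in caller's session: %s\n",
           pid, in_our_session(pid) ? "yes" : "no");
    return 0;
}
```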

At least that’s my theory so far. The truth is that I’m not sure what the real problem is. While the above analysis seems to fit the facts, there is a dearth of information on the subject. Google searches for “GetGuiResources error” return lots of references to permissions issues and other false leads, but nothing about the call failing across session boundaries.

Fortunately for me, my financial services client is still running Windows XP. They have no plans to move to Windows 7 for at least another year, in large part due to the myriad undocumented incompatibilities they will have to mitigate – like the one I’ve outlined above.

Perhaps I’ll find a workaround some day and finally get Tracker’s GDI Object count logic working under Vista and Windows 7 (I’m open to suggestions – leave your ideas in the Comments section of this post). But regardless, this whole affair was a learning process for me. I gained some valuable new skills, and mastered a few unfamiliar techniques – all part of my quest to quash one very mysterious bug.

So count this as one instance in which a developer took one of those classic Microsoft lemons (i.e. the company breaking Windows in a way that’s both unobvious and difficult for 3rd parties to trace) and turned it into lemonade.

Cheers!

RCK


Figure 1 – Get This and Similar Charts at www.xpnet.com


(WCPI): Windows 7 = Moore’s Law In Action

Want to know what separates Windows 7 users from their XP-loving contemporaries? Try 342 additional execution threads chewing up an extra 711MB of RAM while spending over 2/3 more time running in Windows’ privileged “kernel mode.”

At least that’s what the latest Windows telemetry data uploaded to the exo.repository seems to indicate. By analyzing the approximately 1 to 1.7 million process metrics records collected per system each week from our community of over 24,000 registered users, we were able to create a profile of each user type and the composition of their typical application workloads.

For example, the average total process private bytes for systems in our pool of over 13,000 Windows XP users comes in at just over 564MB. Meanwhile, this same metric measured across systems from our nearly 5,000 Windows 7 users shows an average of just under 1.28GB (Vista comes in at just under 1.30GB).

A similar trend emerges when you look at the number of concurrent execution threads running on each system type. Windows XP users typically have anywhere from 40-60 concurrent, running processes spawning an average of 653 execution threads in total. By contrast, Windows 7 users typically have from 50-80 concurrent processes, and these, in turn, spawn an average of 995 execution threads. Vista systems report a similar process count, but with a higher total thread count (1013) across these processes.

                    Win XP    Vista    Win 7
Private Bytes        564MB   1295MB   1275MB
% Privileged Time      8.4     15.2     14.3
Thread Count           653     1013      995

Another interesting metric, % Privileged Time, shows that process threads under Windows 7 spend, on average, 70% more time running in kernel mode than threads running under Windows XP (for Vista, this delta is 81%). This shift towards privileged execution could have several causes:

  1. Newer applications, like Office 2007/2010 or Internet Explorer 8.0, spending more time executing in kernel mode.
  2. New Windows services not present in XP interacting with the kernel and other low-level code, like device drivers.
  3. A more complex kernel, which itself spins an additional 40-50 execution threads in a default installation, taking more time to process kernel-mode requests.

Regardless of the cause, the fact that threads under Windows 7 are spending more time executing in kernel mode can have a direct impact on system performance. On the positive side, code that is executing in kernel mode tends to run faster since it doesn’t need to repeatedly transition from user mode into kernel mode to accomplish portions of its work (it’s already there).

However, code running in kernel mode is also inherently more difficult to multitask – it is essentially in control of the system and, in the case of a device driver’s Interrupt Service Routine (ISR), cannot be interrupted. Thus, more time spent in kernel mode often translates into reduced overall system performance, a scenario which typically can only be mitigated through the introduction of a more powerful CPU.

Bottom Line: Windows 7 users place significantly higher demands on PC hardware than users of its popular predecessor, Windows XP. Their workloads are typically larger (Private Bytes), more complex (Thread Count) and spend a greater amount of time running in kernel mode (% Privileged Time).

Fortunately, this additional computational overhead is mitigated by the fact that most Windows 7 users are running the OS on the latest generation of PCs, and this additional computing “horsepower” allows the OS to deliver new functionality and services while remaining responsive to user requests.

In other words, Windows 7 = Moore’s Law in action.

Note: The above statistics were generated from the over 14 billion process metrics records collected from the over 24,000 registered, active xpnet.com users. If you’d like more information about the exo.performance.network, including how to reproduce the above chart object (for free) on your own site or weblog, please visit our web site: www.xpnet.com.


Figure 1 – Get This and Similar Charts at www.xpnet.com
