Friday, July 16, 2010

(Announcement): Pulling the Plug

It’s been nearly 4 years since we first began offering our free, hosted performance analysis services to the general public. During that time we’ve registered over 24,000 users and assembled the world’s most comprehensive repository of real-world Windows metrics data ever created.

By almost every measure, the exo.performance.network has been a success. Which is why it pains us to announce that, as of July 31, 2010, we will be shuttering the service for good. All access to the repository site will be blocked at that time, and both the exo.widgets and exo.charts objects will cease to function. Likewise, OfficeBench 7 will become non-functional since it relies on the repository for results calculation and storage.

Our reason for discontinuing the service is simple: It doesn’t make us any money. We originally launched the exo.performance.network for the purpose of compiling system and application metrics and then translating our findings into new and compelling research content. However, a viable market for said content never materialized, and all attempts to secure a sponsor for the site have failed.

Recently, we toyed with the idea of upgrading the site’s functionality and then charging a modest monthly fee for access to advanced features (see previous entry regarding DMS Clarity 10 and Windows Pulse). However, we quickly concluded that the effort required to add the necessary payment processing and subscription management capabilities was impossible to justify – especially when we had no guarantee that anyone would be willing to pay for such a service should we proceed with the conversion.

So, in the end, we decided the best thing to do would be to simply pull the plug. As of July 31st, we’ll be taking down our co-located servers and converting xpnet.com to a simple, static web site. Then, we’ll refocus our energies on servicing our existing corporate clients and improving the commercial version of the DMS Clarity Framework.

It’s not the ending we anticipated when we started out on this journey over 4 long years ago, and we’ll be sad to see those server LEDs go dark. We’d like to thank those who contributed to the repository and helped make the exo.performance.network a truly unique project. It’s been an interesting ride.

Note: Anyone interested in our commercial offerings can find out more by emailing us at info@xpnet.com.


Monday, May 31, 2010

(Editorial) Announcing DMS Clarity Suite 10

After two months of heads-down development, punctuated by several design breakthroughs and a very successful commercial beta cycle, we’re pleased to announce the release of DMS Clarity Suite 10.

This next generation of the DMS Clarity Framework provides us with a robust new platform on which to base a variety of exciting on-site and hosted performance monitoring and management offerings.

Figure 1 – DMS Clarity Suite 10

Highlights include:

  • A complete AJAX makeover for a more interactive and responsive site UI (i.e. no more postbacks).
  • A highly componentized architecture, with self-contained charting and analysis widgets that can be detached from the base frameset and configured as stand-alone monitors for system, process and network metrics.
  • Charts are now fully interactive, with each data point serving as a drill-down link to further refine the report parameters. Full chart timeline scrolling/panning support is also included.
  • Additional system and process metrics (Handle Count, GDI Objects) as well as better integration of Custom Counters, including extensive charting support.

Per our existing licensing model, we’re offering DMS Clarity Suite 10 both as a hosted solution – for shops that don’t wish to maintain their own performance monitoring/management framework – and as a traditional, customer deployed solution for on-site scenarios.

We’ll also be offering a subset of the DMS Clarity 10 functionality through our exo.performance.network site. With over 24,000 users, the exo.performance.network is the world’s largest repository of real-world metrics data as collected from Windows PCs and servers from around the globe.

The new DMS Clarity 10 functionality will be offered through our forthcoming Windows Pulse service, which will serve as a direct replacement for – and major capabilities upgrade to – our existing Clarity 9-based widgets and tools.

We’re also providing a comprehensive “dashboard” solution as part of Windows Pulse. Dubbed the “Pulse Pad,” this free-form, AJAX-based UI will allow users to configure and “dock” individual widgets to a persistent presentation “canvas” that will preserve both the widget configuration parameters and on-screen layout between sessions. Here’s a sneak peek:

Figure 2 – The Windows “Pulse Pad”

Users will be able to re-arrange and position (drag & drop) widgets at will, as well as “undock” them for use as stand-alone monitoring objects. Support for multiple pads, each with its own set of up to 10 discrete widgets, is also in the pipeline. We hope to have a public beta version available by the end of June, and to formally launch the service later this summer.

Note: Questions and press inquiries about DMS Clarity Suite 10 and the Windows Pulse services should be directed to our general information email address: info@xpnet.com.




Tuesday, April 27, 2010

(Editorial) Gizmodo Got What They Deserved

A comeuppance. That’s how I describe the recent Gawker-Gizmodo-iPhone theft debacle. What the organization in question did – paying cold, hard cash for what was ostensibly stolen property – was plainly criminal, and those behind the act are now being held accountable.

One would hope that such a well-publicized incident would serve to temper the blogosphere’s appetite for sensationalism. The spectre of illegal or immoral actions leading to very real consequences (including the potential for jail time) should be enough to give Gizmodo’s contemporaries pause. However, I fear the lesson has already been lost on a community that fashions itself as the “anti-media,” but which runs for cover behind so-called “shield” laws designed to protect the real journalists they so often mock.

And make no mistake: Bloggers are not journalists. Real journalists have ethics. They check their facts and follow well established rules of conduct: Don’t fabricate; don’t obfuscate; don’t steal. Most high-profile bloggers, by contrast, follow a looser, “shoot first and ask questions later” philosophy. It’s all about beating the other guy to the punch by being the first to break that big scoop.

Note that I speak from experience. As InfoWorld’s most successful blogger throughout 2008-2009, I spent much of my time trying to tap into the industry zeitgeist. And while my marching orders frequently came from above – “trash this, promote that” – it was left up to me to figure out how to best implement that editorial vision.

I chose the persona of “Randall C. Kennedy – Industry Curmudgeon,” but at no time did I ever fashion myself a true journalist. Rather, I was just some guy with a poison pen regurgitating supplied opinions on the latest hot topics – a cog in a new media machine whose sole purpose was to feed an insatiable appetite for page views.

But despite my well-documented self-loathing, it wasn’t until I was on the receiving end of a blogger-led new media assault that I realized just how far removed I was from the shores of professional journalism. In fact, as I watched ZDNet’s Larry Dignan and crew fabricate, obfuscate and steal my reputation away from me, I felt like I was staring into a mirror.

The tables had turned. The shoe was on the other foot. I had gone from victimizer to victim, and my eyes were finally opened to just how violent the blogosphere had become. Never mind that Mr. Dignan’s smear campaign has since been discredited (his subsequent retraction and acknowledgement that our Wall Street clients do in fact exist and continue to use our software to this day was most touching). The damage was done, and Google will see to it that his fabrications long outlive his credibility.

As will mine – and every other high-profile blogger who has abused their position to promote an agenda. We’re all guilty of “playing journalist” while thumbing our noses at the rules of the game. But the Gizmodo case signals a new low in the blogosphere’s storied history of unethical behavior.

Sensationalism, smear campaigns and now outright criminal activity. I’m glad I’m no longer a part of that community, and I hope the authorities seize this opportunity to teach the industry a lesson by throwing the book at those involved.

RCK




Monday, April 26, 2010

(Stats) Office 2010 Delivers a Performance Boost

In a stunning reversal of nearly twenty years of progressive performance erosion, the latest incarnation of Microsoft’s ubiquitous productivity suite, Office 2010, is actually faster than its immediate predecessor, Office 2007.

Testing with the cross-version OfficeBench 7 test script shows Office 2010 to be roughly 9% faster overall when running on an identically configured Windows 7 desktop environment. This surprising result constitutes the first time in the decade-long history of OfficeBench that a newer version of Microsoft Office outperformed the one it was designed to replace.

Figure 1 – OfficeBench 7 Results for Office 2010

Historically, new versions of Office have been slower than their predecessors thanks to the inclusion of additional features and a generally more complex code path. For example, moving from Office 2000/XP on Windows 2000 to Office 2003 on Windows XP showed a 15-20% performance decrease under OfficeBench, while moving from Office 2003 under Windows XP to Office 2007 on Windows Vista showed a whopping 40% or greater decline in overall OfficeBench script throughput.

Figure 2 – OfficeBench 7 Results for Office 2007

However, with the release of Windows 7, Microsoft has demonstrated a newfound ability to keep the “code bloat” demons in check, with the net result that Windows 7 performs on par with, and in some cases better than, Windows Vista.

Now, this same disciplined development model – a byproduct of veteran Office business unit manager and now Windows show runner Steve Sinofsky’s “less is more” philosophy – is reaping rewards for the desktop applications side of the house, which can market Office 2010 as a performance upgrade in addition to promoting its myriad functional enhancements.

Of course, benchmark results like the ones quoted above are intrinsically relative. For example, though Office 2010 provides a performance edge over Office 2007 on Windows 7, the combination of the newer Windows and Office still delivers a test script completion time that is 15-20% slower than Office 2007 running on Windows XP (SP3).

Note: You can conduct your own cross-version comparison test by downloading the OfficeBench 7 test script. It’s easy to use, works with any combination of Windows/Office, and is completely free. Grab your own copy today!




Thursday, April 8, 2010

(Editorial) Used iPads to Begin Flooding eBay

Walking and chewing gum. It’s a simple idea – you do one thing while at the same time doing another, with your brain shuffling between the two (and various unrelated autonomic functions) to keep the whole parade in step.

Modern computers are similarly adept at juggling concurrent tasks – we PC users call it “multitasking.” Yet for users of iPhone OS-based devices, including the new iPad, multitasking is a completely foreign concept. You can’t walk and chew gum with the iOS. In fact, you can’t walk and do much of anything with an iPhone/Pod/Pad other than carry a tune (current iOS devices can play audio in the background, but that’s about it).

Why so many iPhone/Pod users are willing to put up with such a limited functional model has always been a mystery to me. Maybe it’s the fact that most people simply don’t expect too much from a “smart” device. After all, it’s not a real computer – it’s a phone (or media player). The fact that you can get online at all with such a device seems like a huge leap forward to many people, especially less sophisticated consumers with cash to burn.

Now we learn that the upcoming iOS 4 will likely improve on this situation, but only a little. Depending on the mood of Apple’s often arbitrary application approval process (Google Voice, anyone?), certain apps will be allowed to run in the background under iOS 4. So you may finally be able to walk and chew gum – and perhaps even carry that tune as well. But don’t expect to be able to walk and carry an umbrella (not approved), or to walk/sip a drink/read a paper or map (too many tasks at once).

Again, people don’t expect too much from a “smart” phone, so the continued lack of true multitasking will likely go unnoticed. But even the idiot iPhone-using masses know that PCs are supposed to multitask. Concepts like switching away from one running application to do something in another running application – all while the first application is still running – are now thoroughly ingrained in our collective consciousness.

It’s a level of functional convenience that we’ve grown to expect in any serious computing device. Which is why I predict a high degree of long-term customer dissatisfaction with Apple’s latest and greatest, culminating in a glut of used iPads hitting eBay (just in time for Christmas).

You see, customers are buying the iPad with the expectation that it will somehow replace all of their other computing devices. Such has been the media hysteria surrounding the product’s launch. However, in reality the iPad is nothing more than a glorified “companion” device – a limited function platform designed to complement a Mac or PC while roping Apple’s customers ever more tightly into the iTunes sphere of influence.

So, when these early adopters - especially those swayed by the media hype - begin to bump into these very real functional limitations, they’ll likely feel cheated. And as they slowly gravitate back to their familiar PC environments (including powerful new Windows 7-based tablet PCs that outclass the iPad in almost every way), they’ll begin dumping their now underutilized Apple devices into the online auction meat grinder.

Hence my prediction: By year’s end there will be a glut of used iPads flowing through eBay, Amazon, et al. So resist the urge to splurge now and count your savings later.

RCK




Friday, March 26, 2010

(Editorial) Why the Client Hypervisor is Doomed

Big surprise! Both VMware and Citrix have fallen behind schedule in delivering their “bare metal” hypervisors for client computing. Both had promised to deliver solutions by the end of 2009, but now VMware has reset that goal to the end of this year while Citrix has stopped talking about ship dates altogether.

So, what happened? In a word, hardware. Or more precisely, the ever changing cornucopia of PC hardware devices and configurations. A “bare metal” hypervisor has to sit at the very bottom of the software stack, where it directly manages, and controls access to, the underlying hardware devices. And doing those two things requires hardware-specific control software – i.e. device drivers.

Developing a comprehensive library of device drivers is no easy task (just ask Microsoft). Even assuming that you can create enough generic or “pass through” type modules to allow the majority of common devices to function, there will still be the inevitable subset of components or peripherals that refuse to cooperate.

It would only take a handful of (highly publicized) customer run-ins with such finicky devices to give the “bare metal” client hypervisor a long-term compatibility black eye. Which is why these leading vendors continue to test – and wait.

But waiting (and testing) won’t solve the long-term problem of PC hardware churn. Unlike in the server space, where hardware evolves more slowly and where there are fewer basic configurations to support, the client PC space is in a constant state of flux. The never ending performance arms race, coupled with a near constant stream of innovation at both the internal and external component level, has turned the PC platform into a moving target. Blink, and you’ve missed it.

What is needed is a layer of hardware-level device abstraction, with groups of discrete components functioning as a logical block and accessible through a relatively static interface model. Intel is doing its best to promote as much through its vPro and similar management initiatives. However, these sorts of solutions require significant buy-in from the very OEM partners who stand to lose by making client computing environments portable across hardware platforms. Why would HP want to make it easier for you to move your stuff over to Dell or Acer?

And then there’s the 800lb gorilla in the room next door. Microsoft, which stands to lose the most in a hardware-abstracted world, has been relatively silent on the issue. Ask them about “bare metal” hypervisors on the client and they’ll respond that they “already have one…it’s called Windows.”

In fact, much of what a “bare metal” hypervisor does is entirely redundant in a Windows client environment. It’s an abstraction (client hypervisor) of an abstraction (the Windows Hardware Abstraction Layer). Which makes me wonder why you would really want one in the first place.

After all, it’s not like the current generation Windows platform is really tied to the underlying hardware. Technologies like Plug & Play and improved hardware auto-detection/driver-reconfiguration have made the process of creating a portable, hardware-abstracted Windows client image relatively trivial. This was the whole point of developing WIM and other post-XP installation technologies: To make PC imaging easier.

So, if the primary goal of a “bare metal” client hypervisor is to further abstract the OS from the hardware (and I give zero credence to the “other” reason being bandied about: Running multiple OS VMs on a single PC), and if this task is already handled quite effectively by Windows and its well-established device driver ecosystem, then the only real reason to pursue such a strategy is if  you’re trying to do an end-run around Microsoft’s desktop hegemony.

Which is exactly what VMware and Citrix (not to mention Microsoft’s fair weather friends at Intel) are trying to accomplish. They want to remove the Windows kernel/HAL/driver model as the gatekeeper to the PC client world. As such, their actions represent a clear and present danger to the ongoing survival of Microsoft’s core desktop OS business.

And we all know what happens to companies that pose a threat to Microsoft’s bread-and-butter revenue stream. First they pan you. Then they copy you. And, finally, they bury you – typically with one of those infamous “free” solutions that seems to fit the bill but still somehow locks you into their world.

A “bare metal” hypervisor on the desktop? Without Microsoft’s direct support?

Good luck VMware and Citrix…you’re going to need it!

RCK



Tuesday, March 23, 2010

(Editorial) Web Developers: Time to Dump Firefox?

As a commercial web developer, I’m constantly on the lookout for new trends in browser adoption and usage. After all, there are only so many hours in a day, and investing time and energy supporting a faltering standard is both frustrating and inefficient. So it was with some hesitation that I approached our latest project: A complete overhaul of the user interface for our commercial metrics analysis portal site, DMS Clarity Suite 10.

I knew from the last go-around that getting our site to render consistently across the leading browser platforms (legacy IE 6/7 and Firefox) was a chore, one involving lots of dynamic tweaks and clever hacks. Now we were planning to expand this list to include several newcomers, including IE 8 (running in “standards compliant” mode) and Google’s Chrome. The thought of testing, tweaking and re-testing each and every page against four or more separate rendering models was enough to make me start breaking out in hives.

Worse still was the fact that, with our DMS Clarity 10 release, we weren’t just overhauling the UI. We were gutting the entire site to make way for a new, highly-visual, componentized interaction model. Gone were the static page layouts of the past. In their place, a collection of discrete rendering widgets that would be assembled on the fly to create a fully customizable presentation. These widgets could be re-arranged, broken-out into their own windows and re-attached to other parts of the site in order to better identify and expose the most critical data points. Here’s a screenshot of the net result:

Figure 1 – DMS Clarity 10 Portal Site (BETA)

Not surprisingly, the project ran behind schedule, with much of the delay attributable to us figuring out how to get identical results across our various target platforms. Take, for example, calculating the window resize values for our slide-out widget configuration panel. Each browser had its own idea of how “big” or “small” a window would become when we executed the window.resizeTo() method. Too short, and you’d cut off the panel. Too long, and you’d end up with lots of ugly white space.

Our workaround was to read the browser make/version via JavaScript and then dynamically resize the underlying ASP.NET panel control prior to rendering the page – not a complicated task, but one that required a lot of trial and error to get the desired result. It definitely qualified as a “hack” solution in my book, though by all accounts it’s a fairly common one.
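For illustration only, here is a minimal sketch of the underlying pattern: sniff the reported browser string, then apply per-browser size corrections before rendering. It’s written in Python rather than our actual ASP.NET/JavaScript code, and the browser markers and offset values are invented for the example.

```python
# Illustrative sketch of per-browser resize corrections (hypothetical values).
# The real implementation read the user agent client-side and resized an
# ASP.NET panel server-side; the lookup-and-adjust pattern is the same idea.

# Hypothetical pixel corrections needed so window.resizeTo() lands on the
# same effective panel size in each browser: (extra width, extra height).
RESIZE_OFFSETS = {
    "MSIE 6":  (24, 72),
    "MSIE 7":  (20, 60),
    "MSIE 8":  (16, 52),
    "Firefox": (10, 46),
    "Chrome":  (8, 40),
}
DEFAULT_OFFSET = (12, 48)

def panel_window_size(user_agent: str, panel_width: int, panel_height: int):
    """Return the window size to request so the slide-out panel fits exactly."""
    for marker, (dx, dy) in RESIZE_OFFSETS.items():
        if marker in user_agent:
            return panel_width + dx, panel_height + dy
    return panel_width + DEFAULT_OFFSET[0], panel_height + DEFAULT_OFFSET[1]

if __name__ == "__main__":
    ua = "Mozilla/5.0 (Windows NT 6.1) Gecko/20100101 Firefox/3.6"
    print(panel_window_size(ua, 300, 500))   # -> (310, 546)
```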

Figure 2 – Clarity 10 Widget Rendering Consistently

Needless to say, we got to know a lot about the various quirks and rendering oddities associated with today’s web browsers. And by far the biggest PITA to work with – next to legacy IE 6/7 - was Firefox. HTML and CSS that would render consistently on IE 8 and Chrome would always require some hand-tuning for Firefox, while JavaScript code that ran flawlessly under the other browsers would often need at least some minor tweaking for Firefox to be happy.

In fact, it got so bad that we eventually had to expand our base template design to include three major potential rendering models: IE legacy, Firefox and “everybody else” (including Chrome and IE 8). And when even those assumptions proved to be inadequate (offset values that worked for one page would sometimes also work elsewhere, but not consistently), we seriously considered dumping Firefox support altogether.

With most of our commercial customers still using IE for in-house application access, it was a shortcut we could probably have gotten away with. However, in the end we decided to bite the bullet and hand-code the necessary markup and scripting corrections. After all, Firefox is still a major web presence, and we do plan to offer Clarity 10 as a hosted commercial solution later this year.

However, the situation was very much “touch-and-go” there for a while. Had we been under tighter time constraints, or if we had run into any real “showstopper” issues that compromised our design in some fundamental way, we likely would have given Firefox the boot.

Compounding matters is the perception, now shared by many of my contemporaries, that Firefox is in decline. Our own exo.repository numbers still show strong (50%) use among our tech-savvy contributor base. However, those same users are also increasingly turning to Google’s Chrome. Some 25% of systems monitored by the exo.performance.network report running Google’s nascent web browser.

Figure 3 – Latest exo.repository Browser Share Statistics

If this number climbs much higher, and if Firefox use takes the kind of nose-dive so many are now predicting, we may have to revisit our decision to continue supporting Mozilla’s browser. With the web gravitating towards the rapidly maturing WebKit, and with the latest versions of IE and Chrome converging towards a consistent rendering result, the writing may finally be on the wall:

Save yourself a headache or two and dump Firefox.

RCK


Friday, March 19, 2010

(Editorial) Microsoft’s XP Mode Boondoggle

It will go down in history as one of the more anti-climactic “surprise” announcements. Microsoft’s Windows XP Mode, which was billed as an eleventh hour killer feature for Windows 7, arrived with a thud, thanks in large part to its curious need for Hardware Assisted Virtualization (HAV) support.

The company’s media apologists quickly scrambled to defend the decision, pointing out that most new business-class PCs were shipping with HAV-enabled CPUs anyway. However, this argument did nothing to stem the tide of complaints, both from consumers - who purchased seemingly state-of-the-art, multi-core PCs only to learn that they were incapable of running XP Mode – and from small business customers who wished to leverage the capability on existing, non-HAV-supporting PCs.

A year later, and Microsoft has finally caved to the pressure. Just this week the company released an update to XP Mode’s underlying virtual machine engine – Windows Virtual PC 7 – that allows it to run on non-HAV-supporting PCs. All of which begs the question: Why the bizarro requirement in the first place?

Microsoft’s official line has always been that HAV was necessary in order to ensure an “optimal” end-user experience. However, I suspect a more sinister motive. Specifically, I believe that the “Great XP Mode Boondoggle” was in fact a concession to Intel Corporation – a kind of apology for royally screwing up with the whole Windows Vista “too fat to fit” debacle.

By tying VPC 7 to HAV, Microsoft was helping Intel to discourage small business users from buying low-cost PCs sporting “consumer” versions of its powerful dual and quad-core CPUs – parts that were intentionally neutered in order to create differentiation among the company’s myriad SKUs.

Of course, the whole exercise backfired. Savvy users balked at the restrictions (some even built workarounds using VirtualBox or VMware Player), while small business owners and consumers were left dazed and confused by the various XP Mode “compatibility matrices” that cropped up on the Internet.

And then there was the matter of Microsoft’s Enterprise Desktop Virtualization (MED-V). A forerunner to VPC 7 and XP Mode, MED-V does not – and thanks to the aforementioned VPC 7 update, never will – require HAV, ostensibly because enterprise customers would never put up with these sorts of “marketecture” shenanigans in the first place.

In fact, XP Mode users can likely thank Early Adopter Program (EAP) customers – many of whom have no doubt been evaluating the VPC 7-based MED-V 2.0 in pre-release form – for this reprieve. Now the only question is whether or not Microsoft will provide updated integration components that allow XP Mode VMs to run with good performance and functionality under the non-HAV update.

As of right now, the plan is to not release an optimized integration solution for users of VPC 7/XP Mode on non-HAV PCs. But you can be sure that MED-V customers will get them, and this provides hope that somehow they’ll trickle down: Either unofficially, through some end-user hack; or officially, as part of yet another “mea culpa” update.

Regardless, it’s a good day for Microsoft’s small business customers and end-user consumers. The “Wintel” duopoly tried to foist another bogus restriction on its customer base, and the base fought back.

You balked. They blinked. Chalk one up for the little guy!


Figure 1 – Get This and Similar Charts at www.xpnet.com


Tuesday, March 16, 2010

(OfficeBench): A Decade of PC Benchmarking

As OfficeBench approaches its 10th anniversary it’s time to look back at the history of this iconic test script. With over a million downloads, and with countless copies of its various incarnations in circulation around the globe, OfficeBench is one of the most widely deployed business productivity test scripts in history. And we’re still improving it, with the latest incarnation – OfficeBench 7 – available for free download from the exo.performance.network web site.

When I first conceived of an Office Automation test script, the popular benchmarks of the day – Winstone and BAPCo – were static and inflexible. They only worked on stand-alone PCs, and then only with a “clean install” of the OS. Frustrated by these limitations, and needing to test Terminal Services scalability as part of my contract with Intel’s Desktop Architecture Labs, I sat down and started writing my own, “bulletproof” test script.

The net result, OfficeBench, was a first-of-its-kind linear test script, one that used the programmatic interfaces of Microsoft Office (OLE automation – a.k.a. COM) to drive the suite through a series of common business productivity tasks. The script was simple, reliable and, if necessary, could scale to tens of thousands of concurrent Terminal Services users.
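For those who have never scripted Office through COM, here is a minimal sketch of the kind of timed, automation-driven task OfficeBench strings together. It is not the OfficeBench code itself (it assumes Windows, Word and the pywin32 package), but it shows the basic mechanism:

```python
# Minimal illustration of driving Office via COM automation and timing the
# result -- the same basic mechanism OfficeBench relies on, though this is
# not the OfficeBench script. Requires Windows, Word, and pywin32.
import time
import win32com.client

def timed_word_task(paragraphs: int = 200) -> float:
    """Launch Word, build and format a short document, return elapsed seconds."""
    start = time.perf_counter()
    word = win32com.client.Dispatch("Word.Application")
    word.Visible = False
    doc = word.Documents.Add()
    try:
        for i in range(paragraphs):
            doc.Content.InsertAfter(f"Benchmark paragraph {i}: the quick brown fox.\n")
        doc.Content.Font.Name = "Arial"              # a simple formatting pass
        doc.Content.ParagraphFormat.SpaceAfter = 6
    finally:
        doc.Close(False)                             # close without saving
        word.Quit()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"Task completed in {timed_word_task():.2f} seconds")
```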

Figure 1 – OfficeBench 7 (Download)

More importantly, it worked with the existing installation of Office as configured in the runtime environment (no more clean installs). All of which helped to make OfficeBench the lightweight (less than 2MB to download), easy-to-use choice of performance-oriented IT pros everywhere.

All of which brings us to the topic at hand: OfficeBench’s 10 year anniversary. It was a decade ago this week that I released OfficeBench 1.0. Now, with version 7 “in the wild” for six months, we’ve got something even more exciting to reveal: We’re now publishing comparative OfficeBench test results from across the globe.

Using a new, interactive OfficeBench Performance Explorer widget (see below), you can now explore OfficeBench results from across the range of Microsoft environments. Filter by Windows or Office version, CPU type or RAM size. The widget will apply your filters and then scour our online database of OfficeBench results (almost 8,500 users) to return the averaged scoring results for systems that match your specified criteria.

The OfficeBench Performance Explorer (Interactive)

It’s a great way to assess your own PC’s performance and also to see how other combinations of CPU, Memory, Windows and Office Versions perform across a selection of real-world PCs. Use the widget as a base reference for your own testing. Or simply mine the various combinations to see how changing a single criterion – for example, upgrading from XP to Windows 7 – affects the outcome.

Don’t see your particular hardware/software combination represented above? Then download OfficeBench 7 today! You’ll then be able to compare your own configuration to those of similarly equipped users while adding a unique new reference point to our growing repository of real-world performance data.


Thursday, March 11, 2010

(Editorial): Incriminating Email Sinks InfoWorld

This is a post I hoped I wouldn’t have to write. After watching for over two weeks as InfoWorld misrepresented the particulars of my resignation as Enterprise Desktop blogger, I feel I’ve given the publication ample opportunity to come clean and admit they were complicit in the Craig Barth ruse.

Since it’s now clear that no such admission is forthcoming, I feel I have no choice but to release the following email exchange which demonstrates, without question, that InfoWorld Executive Editor Galen Gruman knew of my ownership of the exo.blog; that he knew I was using it as source material for my InfoWorld blog; and that Gregg Keizer was regularly quoting from the exo.blog and its sole public “representative,” Craig Barth. Moreover, it shows that Galen was willing to assist me in promoting the very same exo.blog content, even going so far as offering to use InfoWorld staff writers to create news copy based on my numbers.

And lest he somehow try to escape scrutiny by passing all blame down to his subordinate, I’ve also included an email from InfoWorld Editor in Chief Eric Knorr (i.e. the guy who “fired” me with much sanctimonious aplomb) which shows that he, too, was clearly aware of my ownership of all sites, content and materials associated with the exo.performance.network.

Note: Conspiracy theorists, please scroll to the bottom of this post for the prerequisite SMTP header disclosure.

But first, I need to set up the timeline:

The following exchange took place roughly one week before the infamous ComputerWorld Windows 7 memory article. I was pushing Galen to let me run another story in InfoWorld that would reference the exo.blog data on browser market share, and he replied by pointing out that I had done something similar just a few weeks earlier, with Gregg Keizer quoting from my exo.blog. He also spoke of assigning staff writer Paul Krill to begin quoting from my exo data as part of a new regular news component for InfoWorld.

I responded that I had no problem with him having Paul Krill write up my usage data, but I requested that he approach it the same way that Gregg [Keizer] had in the article Galen had referenced earlier and thus not use my name but instead refer to xpnet.com as a separate entity. I closed the paragraph by emphasizing how I was determined to curry favor with every beat writer I could find, including Gregg Keizer, and how there would be no real overlap between the data I publish in my blog (i.e. exo.blog) and what I did for InfoWorld.

A couple of items to note:

  1. At no time prior to the “scandal” breaking did I, Randall C. Kennedy, publish any research articles through the exo.blog. All articles were published through the generic “Research Staff” account and subsequently fronted to the media by the fictitious Craig Barth.
  2. Galen Gruman clearly acknowledges my ownership of both the exo.performance.network and the exo.blog, and notes how he saw my data being referenced in ComputerWorld. Yet at no time did I, Randall C. Kennedy, ever represent the exo.blog, xpnet.com or Devil Mountain Software, Inc., to the media outside of InfoWorld, nor was I ever directly quoted as such in any of Gregg Keizer’s articles or articles from other authors or publications.

    So, in the Gregg Keizer-authored article that he acknowledges reading, Galen would have seen copious references to a “Craig Barth, CTO.” And yet he makes no mention of this contradiction. Nor does he take issue with my repeated claims to be the owner of the exo blog and also the person seeking to get Gregg Keizer and others to quote me (in the guise of Craig Barth) and exo.blog’s content.

Needless to say, this is damning to InfoWorld. Not only does it show that members of their senior executive staff were aware of the ruse, they were also actively working with me to take advantage of the situation, to the benefit of both parties: Me, by allowing me to quote from the Craig Barth-fronted exo.blog’s content in my Enterprise Desktop blog; and InfoWorld, by assigning a staff writer to create original InfoWorld content based on the exo.performance.network’s data and conclusions.

Important: I am not seeking absolution here. I accept that what I did crossed the line, no matter how benign my intentions. Rather, what I am doing here is exposing the lie behind InfoWorld’s high-handed dismissal of their leading blogger because, as they claim, he “lied to them.” On the contrary, InfoWorld was in on the ruse from the earliest stages, and the fact that they looked the other way – and then tried to cover their exposure by hanging me out to dry – speaks volumes about the “ethics” and “integrity” of InfoWorld as a publication and of IDG as a whole.

So, without further ado, here is the email exchange that InfoWorld wishes never took place. The following excerpts were taken directly from the last reply in a series of four emails that passed between Galen Gruman and myself. I’ve highlighted the most relevant bits in red so that they stand out from the bulk of the email text. I’ve also re-ordered the original messages and replies so that they proceed, from the top down, in chronological order vs. the normal bottom-up threading applied by Outlook.

Finally, I’ve added breaks/numbered headers identifying each stage of the conversation. You can see the original, unformatted contents of the email thread by clicking this link.

Note: The SMTP header is included at the bottom of this post showing the date, time and mail delivery path taken by the 2nd Reply From Galen message from which the various thread sections below were excerpted.


1. The Initial email from me:

From: Randall C. Kennedy [rck@xpnet.com]
Sent: Wednesday, February 10, 2010 3:17 AM
To: Galen_Gruman@infoworld.com
Subject: Story Idea - Browser Trends

Galen, 
  
Here’s a story idea: A look at browser usage trends within enterprises, using the repository data to back-up our analysis/conclusions. Here are a couple of examples that we could build off of: 
  
 http://exo-blog.blogspot.com/2010/02/ies-enterprise-resiliency-one.html 
  
 http://exo-blog.blogspot.com/2009/09/ie-market-share-holding-steady-in.html 
  
Could be a good, in depth, research-y type of article, one that pokes holes in assumptions about IE’s decline, the nature of intranet web use, etc. 

Anyway, let me know if you think this is worth exploring…thanks! 

RCK

2. Galen’s First Reply:

From: Galen_Gruman@infoworld.com
Sent: Wednesday, February 10, 2010 10:51 AM
To: rck@xpnet.com
Subject: Re: Story Idea - Browser Trends
 
Hey Randy. 
 
You covered the IE aspect last fall in your IW blog, and I saw that CW did a story along these lines based on your exo blog. So I'm not sure there's a further story with that theme. 
 
I'd be more interested in what it takes for IT to move off of IE6. MS is encouraging them to do so, and Google and others are now saying they're dropping IE6 support. Of course, reworking
IE6-dependent internal apps means spending time and money, so IT has an incentive to not do anything for as long as it can avid doing so. Unless there's a forced requirement to shift (I can't imagine companies saying users MUST use IE6 for legacy stuff and Firefox for Google and other providers' services, and I don't believe you can run or even install multiple versions of IE at the same time in Windows). Or maybe the shift away from IE6 is not as hard as IT may fear. What does it take? What could force IT to pull the trigger? 
 
FYI I've asked Paul Krill to come up with a regular news story based on the exo data, much like how Gregg Keizer does the regular stories based on NetApplications data. It makes sense for IW to do the market monitoring stories from a business-oriented data pool, and that would also create news stories that are syndicated that highlight the exo system. I'm not sure if Paul should or can do something as predictable as Gregg always (he always does a Web browser share story and an OS share story), so thoughts welcome on that. Of course, your data is available all the time, so monthly is arbitrary; NetApp releases the data monthly, giving Gregg a predictable but inflexible schedule.


--
Galen Gruman
Executive Editor, InfoWorld
galen_gruman@infoworld.com
(415) 978-3204
501 Second St., San Francisco, CA 94107

3. My Reply to Galen:

From: Randall C. Kennedy [rck@xpnet.com]
Sent: Wednesday, February 10, 2010 2:44 PM
To: 'Galen_Gruman@infoworld.com'
Subject: RE: Story Idea - Browser Trends

Galen,
 
  I have no problem with Paul Krill or anyone else writing-up my usage data (which took some very clever programming, not to mention some extensive data shaping/massaging, to generate). However, I’d appreciate it if he could follow
Gregg’s lead
and reference/link to the exo.blog and xpnet.com as the source of the data (i.e “Researcher’s from our affiliate, the exo.performance.network, have discovered…etc.”). This will help me to further establish xpnet.com as a distinct research brand and to begin to create a parallel distribution mechanism for my original content – one I can monetize through sponsorships, etc. It’s important that I raise the public profile of xpnet.com, and that means currying the attention of every beat writer, pundit or analyst I can find, both inside and outside of IDG. So if Gregg, or Paul – or even Ed Bott or Ina Fried – want to quote my stuff, I’m thrilled.
 
  Fortunately, with InfoWorld mired in a “race to the bottom” (as Doug calls it) of the tabloid journalism barrel, there really should be little overlap between the kind of hard, data-driven research I’ll be publishing through my own blog and the “push their buttons again” format that seems to best fit InfoWorld’s mad grab for page views (quality be damned). Call it synergy. :)
 
RCK

4. Galen’s Second Reply:

From: Galen_Gruman@infoworld.com
Sent: Wednesday, February 10, 2010 11:13 PM
To: rck@xpnet.com
Subject: Re: Story Idea - Browser Trends

That was the idea.
 
And I take exception to the "race to the bottom" comment. Doug of all people should know that's not true. Otherwise, there would be no Test Center, never mind the massive investment it takes relative to its page views. That's a quality play. And even our populist features are good quality, not throw-away commentary.
 
You can push buttons based on real issues and concerns -- your Windows 7 efforts in 2008-09 showed that -- not on Glenn Beck-style crap.
 
If we wanted to go the "race to the bottom" approach, believe me, there'd be no Test Center, no features, and no blogs like yours, mine, McAllister's, Prigge's, Venezia's, Bruzzese's, Grimes', Heller's, Samson's, Rodrigues's, Lewis's, Marshall's, Linthicum's, Tynan-Wood's, Knorr's, Babb's, or Snyder's -- oh, wait, that's 90% of what we publish. Most of the other 10% --
Cringely and Anonymous -- are populist but hardly low-brow. That leaves the weekend slideshows from our sister pubs in the "race to the bottom" category. 
-- 
 
Galen Gruman
Executive Editor, InfoWorld.com
galen_gruman@infoworld.com
(415) 978-3204
501 Second St., San Francisco, CA 94107

 


So far, I’ve demonstrated that Galen Gruman was intimately aware of the nature and structure of Devil Mountain Software, Inc., the exo.blog and my relationship to both. However, he wasn’t alone. Editor in Chief Eric Knorr was equally familiar with DMS and all aspects of my research publishing activities outside the IDG fold. Which is why, when he heard of the controversy over the infamous Windows 7 “memory hog” article at CW – an article that quoted Craig Barth extensively and made absolutely zero mention of Randall C. Kennedy – his first reaction was to send me the following email:

From: eric_knorr@infoworld.com
Sent: Friday, February 19, 2010 11:43 AM
To: randall_kennedy@infoworld.com; Galen_Gruman@infoworld.com; Doug_Dineley@infoworld.com


Subject: seen this?

 

http://blogs.zdnet.com/hardware/?p=7389%20&tag=content;wrapper

 

Is it war?

All of which begs the question: If Eric was as ignorant of the Craig Barth ruse as he claims to be, why would his first reaction to reading a story about some controversy involving Craig Barth and the exo.performance.network be to contact me?

The answer is simple: Because he knew. Just as Galen Gruman knew. They both were complicit in perpetuating the myth of the Craig Barth persona. And since they represent two-thirds of the senior editorial brain trust at InfoWorld (Doug Dineley was never clued in), this means that, for all intents and purposes, the publication known as InfoWorld was directly supporting my efforts to obscure my relationship to the exo.blog – the very same exo.blog they both kept commenting on in the above email exchanges – by allowing me to put forth a fictitious character as the CTO of DMS.


And here, to address the inevitable conspiracy theorist challenges to the validity of the above email record, is the header info for this particular message. The message was sent as a response to me, Randall C. Kennedy (rck@xpnet.com) from Galen Gruman (Galen_Gruman@infoworld.com) on February 10th, 2010.

Return-Path: <Galen_Gruman@infoworld.com>
Delivered-To: rck@1878842.2041975
Received: (qmail 21591 invoked by uid 78); 10 Feb 2010 18:12:51 -0000
Received: from unknown (HELO cloudmark1) (10.49.16.95)
  by 0 with SMTP; 10 Feb 2010 18:12:51 -0000
Return-Path: <Galen_Gruman@infoworld.com>
Received: from [66.186.113.124] ([66.186.113.124:49966] helo=usmaedg01.idgone.int)
by cm-mr20 (envelope-from <Galen_Gruman@infoworld.com>)
(ecelerity 2.2.2.41 r(31179/31189)) with ESMTP
id A4/73-22117-327F27B4; Wed, 10 Feb 2010 13:12:51 -0500
Received: from usmahub02.idgone.int (172.25.1.24) by usmaedg01.idgone.int
(172.16.10.124) with Microsoft SMTP Server (TLS) id 8.1.393.1; Wed, 10 Feb
2010 13:11:52 -0500
Received: from USMACCR01.idgone.int ([172.25.1.25]) by usmahub02.idgone.int
([172.25.1.24]) with mapi; Wed, 10 Feb 2010 13:11:57 -0500
From: <Galen_Gruman@infoworld.com>
To: <rck@xpnet.com>
Date: Wed, 10 Feb 2010 13:12:45 -0500
Subject: Re: Story Idea - Browser Trends
Thread-Topic: Story Idea - Browser Trends
Thread-Index: AQH7Zdt+Sb1cTPao5c3kdPVIphPlggLM2MUMAeHPlZWROLhPpw==
Message-ID: <C798371E.20D5A%galen_gruman@infoworld.com>
In-Reply-To: <003e01caaa35$a0b97de0$e22c79a0$@xpnet.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Microsoft-Entourage/13.3.0.091002
acceptlanguage: en-US
Content-Type: multipart/alternative;
boundary="_000_C798371E20D5Agalengrumaninfoworldcom_"
MIME-Version: 1.0

Enjoy the carnage!

RCK


Figure 1 – Get This and Similar Charts at www.xpnet.com


Wednesday, March 10, 2010

(Trends): Windows 7 Doubles Market Share in Q1

Windows 7 is on a tear. The latest statistics from the exo.repository show Microsoft’s newest client OS installed across 11% of participating systems. That’s over double the percentage that reported running Windows 7 at the beginning of the year, and the fastest penetration of any Microsoft desktop OS on record.

Figure 1 - Community OS Share Today

The big loser in this sea change is Windows XP, which saw its usage share drop by over 3 percentage points. Windows 7’s immediate predecessor, Windows Vista, took a similar hit, dropping nearly 3 percentage points in Q1, 2010. Over the past six months, Vista has dropped nearly five percentage points, again mostly due to heavy adoption of Windows 7.

Figure 2 - Community OS Share on 1/4/2010

That Microsoft’s new OS could make these kinds of inroads, in such a short period of time, is a testament to the product’s popularity. If this trend continues, Windows 7 will easily overtake Windows Vista as the second most widely installed desktop OS by the end of 2010, and could potentially displace Windows XP atop the OS usage share heap by sometime in early 2012.

Since this will also be the timeframe in which Microsoft begins ramping up the “Windows 8” hype machine, it will be interesting to see if the company ends up facing a similar situation to the one it faced with XP vs. Vista: A wildly popular OS that everyone is satisfied with and that nobody feels any urgent need to upgrade from. When that happens, the wheels may once again come off the Windows upgrade treadmill, forcing Microsoft to find even more creative ways to convince its customers that, while Windows 7 is great, they really do need the “next big thing.”

Note: The above statistics were generated from the over 240 million system metrics records collected from the over 24,000 registered, active xpnet.com users. If you’d like more information about the exo.performance.network, including how to reproduce the above chart object (for free) on your own site or weblog, please visit our web site: www.xpnet.com.


Monday, March 8, 2010

(Editorial) When Microsoft Breaks Windows

Life as a Windows developer has its ups and downs. On the positive side, you’re associated with the most popular computing platform in history, which translates into lots of potential clients. But this also means you’re subject to the design whims of a notoriously proprietary software company. And as often as not, these changes come back to bite you in the most unusual places.

Take my most recent case: Our largest commercial client, a financial services firm, asked us to modify our DMS Clarity Tracker agent to collect the GDI Object count for each running process. FYI, Tracker already collects a variety of process metrics, including critical CPU utilization and memory counters. However, since this client had some bad experiences with GDI Object handle leaks in the past, they were eager to see this metric added to our collection pool. And since this firm is our best commercial customer, with thousands of seats licensed across one of their largest divisions, we were eager to assist them.

And thus began my odyssey into the wonderful world of forgotten Win32 APIs. It began when I started researching how to collect the GDI Object counter value. Since it’s not part of the regular process object in Performance Monitor, I was forced to step outside our normal methodology (i.e. use PDH.DLL to create a handle to the desired perfmon counter) and look at alternatives.

The first (and really, the only practical) suggestion I encountered was to use the Win32 API’s GetGuiResources function to read the value directly from the target process in memory. However, since our current agent architecture requires sampling the surrounding environment once every second (and then averaging the collected values every 15 seconds), I was understandably concerned about overhead. The idea of executing multiple (50-90 or more, depending on the task list) OpenProcess and GetGuiResources calls in quick succession, every second, gave me pause. After all, these calls aren’t necessarily optimized for low-overhead, like the aforementioned PDH calls are, and I thought I might have to back off on the granularity and simply collect the values once every 15 seconds as an instant value.

Fortunately, the APIs proved to be quite lightweight, and I was able to quickly construct a routine that paralleled our normal PDH collection method, but calling the OpenProcess and GetGuiResources functions instead of the various PDH query functions we use for the other, PDH-based counters. The net result was an elegant solution that grabbed the data we needed and integrated it seamlessly with our existing collection model.
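The agent code itself isn’t reproduced here, but the core pattern is easy to sketch. The following stand-alone example (Python with ctypes, rather than the Tracker agent’s actual implementation) opens each running process and asks GetGuiResources for its GDI object count:

```python
# Minimal sketch of per-process GDI object sampling via OpenProcess +
# GetGuiResources (illustration only, not the DMS Clarity Tracker code).
import ctypes
from ctypes import wintypes

PROCESS_QUERY_INFORMATION = 0x0400
GR_GDIOBJECTS = 0                      # GetGuiResources: GDI object count

kernel32 = ctypes.windll.kernel32
user32 = ctypes.windll.user32
psapi = ctypes.windll.psapi

kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = [wintypes.DWORD, wintypes.BOOL, wintypes.DWORD]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]
user32.GetGuiResources.argtypes = [wintypes.HANDLE, wintypes.DWORD]

def running_pids(max_procs=1024):
    """Enumerate process IDs via EnumProcesses."""
    pids = (wintypes.DWORD * max_procs)()
    needed = wintypes.DWORD()
    if not psapi.EnumProcesses(pids, ctypes.sizeof(pids), ctypes.byref(needed)):
        return []
    return list(pids[: needed.value // ctypes.sizeof(wintypes.DWORD)])

def gdi_object_count(pid):
    """Return a process's GDI object count, or None if it can't be read."""
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        return None                    # e.g. access denied
    try:
        count = user32.GetGuiResources(handle, GR_GDIOBJECTS)
        return count or None           # GetGuiResources returns 0 on failure
    finally:
        kernel32.CloseHandle(handle)

if __name__ == "__main__":
    for pid in running_pids():
        count = gdi_object_count(pid)
        if count:
            print(f"PID {pid}: {count} GDI objects")
```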

And more importantly, it worked – at least at first. Running from within the Visual Studio IDE, our new GDI Object collection logic functioned flawlessly. However, when we took the agent out of the IDE and compiled it to run in its native environment – as a service executing under the LocalSystem account – the GDI Object logic broke down. Instead of getting the desired count values, the GetGuiResources function returned zeros for nearly every process.

I say nearly every process because, for a handful of tasks – most notably, those running as services and which also consumed GDI Objects (i.e. not very many – GDI is mostly for interactive apps) – the function returned what seemed to be valid data. Worse still, the collection code worked perfectly under Windows XP, both interactively and as a service. It only broke down when we deployed the agent under Vista or Windows 7, and then only if we ran the agent as a service under the LocalSystem account.

I didn’t know it at the time, but I was about to start down a slippery slope into Win32 API debugging hell. My first theory was that it was a permissions issue. My OpenProcess calls must have been failing due to Vista/7’s tighter IPC security. However, a check of the LastError value showed no faults. And when I subsequently tested to see if I could read other, non-GDI metrics from the process – for example, using GetProcessMemoryInfo to read its Working Set counter – my calls succeeded each time, using the same handle that was failing with GetGuiResources.

I could even terminate the target process – running as LocalSystem gave me free rein over the system. However, no matter what I tried, I could not get GetGuiResources to return valid data. Another check of LastError, this time for the GetGuiResources call itself, left me even more confused. It reported a result code of INVALID PARAMETER, which made no sense since the only two parameters that the function accepts were the (now confirmed valid) process handle and the requested resource type (GDI or User object count). It was a real hair-pulling moment.

Eventually, I tried enough variations of the above methodology that a pattern began to emerge from the madness. For example, if I ran the code interactively on the desktop, it would dutifully record the GDI Object counts for all of the interactive tasks (e.g. explorer.exe and whatever else was running on the task bar or in the system tray). And when I ran the code as a service – either under the LocalSystem account or using an Administrator-level user account – it would record GDI Object count values only for tasks that were running as non-interactive services.

It was then that the light bulb finally came on. I remembered reading how Vista (and Windows 7) tighten security by moving all interactive user tasks into a second console session (Session 1) and away from the primary console session (Session 0), which was now dedicated solely to non-interactive services. The idea was to eliminate the kind of backdoor vector that led to the infamous “shatter attack” exploit under Windows XP. By isolating service processes to a separate console session, and prohibiting them from interacting with the user’s desktop (which was now running in a different console session), they could suppress such attacks and reduce Windows’ exposed surface area.

Of course, such a radical change introduced some notable compatibility issues. For starters, services that relied on the “Allow service to interact with desktop” option were immediately cut off from the users they were trying to interact with. And, apparently, the move to a dedicated services Session 0 also had the effect of breaking the GetGuiResources API call when executing across session boundaries. So while my agent service running in Session 0 could attach to the processes of, and read data from, tasks running in Session 1 (or any other user session), any attempt to read the GDI Object counter data off of these processes failed – ostensibly because the User and GDI resources these tasks rely on exist solely inside of the separate, isolated user session.
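One quick way to test the cross-session theory (a diagnostic sketch, not anything in the shipping agent) is to log each target process’s session ID alongside the GetGuiResources result and see whether the zeros line up with processes outside the service’s own session:

```python
# Diagnostic sketch: compare the collector's session ID with each target's
# before reading GDI counts, to see whether the zero results track session
# boundaries (Session 0 services vs. interactive user sessions).
import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
user32 = ctypes.windll.user32

PROCESS_QUERY_INFORMATION = 0x0400
GR_GDIOBJECTS = 0

def session_of(pid):
    """Return the session ID that owns a process, or None on failure."""
    session = wintypes.DWORD()
    if kernel32.ProcessIdToSessionId(pid, ctypes.byref(session)):
        return session.value
    return None

def probe(pid):
    """Print a target's session ID and reported GDI count side by side."""
    ours = session_of(kernel32.GetCurrentProcessId())
    theirs = session_of(pid)
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        return
    try:
        gdi = user32.GetGuiResources(handle, GR_GDIOBJECTS)
        print(f"PID {pid}: session {theirs} (collector in {ours}), GDI = {gdi}")
    finally:
        kernel32.CloseHandle(handle)
```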

At least that’s my theory so far. The truth is that I’m not sure what the real problem is. While the above analysis seems to fit the facts, there is a dearth of information on the subject. Google searches for “GetGuiResources error” return lots of references to permissions issues and other false leads, but nothing about the call failing across session boundaries.

Fortunately for me, my financial services client is still running Windows XP. They have no plans to move to Windows 7 for at least another year, in large part due to the myriad undocumented incompatibilities they will have to mitigate – like the one I’ve outlined above.

Perhaps I’ll find a workaround some day and finally get Tracker’s GDI Object count logic working under Vista and Windows 7 (I’m open to suggestions – leave your ideas in the Comments section of this post). But regardless, this whole affair was a learning process for me. I gained some valuable new skills, and mastered a few unfamiliar techniques – all part of my quest to quash one very mysterious bug.

So count this as one instance in which a developer took one of those classic Microsoft lemons (i.e. the company breaking Windows in a way that’s both unobvious and difficult for 3rd parties to trace) and turned it into lemonade.

Cheers!

RCK


Figure 1 – Get This and Similar Charts at www.xpnet.com


(WCPI): Windows 7 = Moore’s Law In Action

Want to know what separates Windows 7 users from their XP-loving contemporaries? Try 342 additional execution threads chewing-up an extra 711MB of RAM while spending over 2/3 more time running in Windows’ privileged “kernel mode.”

At least that’s what the latest Windows telemetry data uploaded to the exo.repository seems to indicate. By analyzing the approximately 1 to 1.7 million process metrics records collected per system each week from our community of over 24,000 registered users, we were able to create a profile of each user type and the composition of their typical application workloads.
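Conceptually, the aggregation boils down to grouping the per-system records by OS and averaging the metrics of interest. Here’s a simplified sketch with invented field names and sample values, not the actual repository schema:

```python
# Simplified illustration of how the per-OS profile is built: group the
# per-system records by OS and average each metric. Field names and values
# here are invented for illustration only.
from collections import defaultdict

records = [
    {"os": "Windows XP", "private_bytes_mb": 540,  "threads": 640,  "priv_time_pct": 8.1},
    {"os": "Windows XP", "private_bytes_mb": 590,  "threads": 665,  "priv_time_pct": 8.7},
    {"os": "Windows 7",  "private_bytes_mb": 1250, "threads": 980,  "priv_time_pct": 14.0},
    {"os": "Windows 7",  "private_bytes_mb": 1300, "threads": 1010, "priv_time_pct": 14.6},
]

def profile_by_os(rows, metrics=("private_bytes_mb", "threads", "priv_time_pct")):
    """Average each metric across all systems reporting a given OS."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["os"]].append(row)
    return {
        os_name: {m: sum(r[m] for r in group) / len(group) for m in metrics}
        for os_name, group in grouped.items()
    }

if __name__ == "__main__":
    for os_name, averages in profile_by_os(records).items():
        print(os_name, averages)
```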

For example, the average total process private bytes for systems in our pool of over 13,000 Windows XP users comes in at just over 564MB. Meanwhile, this same metric measure across systems from our nearly 5,000 Windows 7 users shows an average of just under 1.28GB (Vista comes in at just under 1.30GB).

A similar trend emerges when you look at the number of concurrent execution threads running on each system type. Windows XP users typically have anywhere from 40-60 concurrent running processes spawning an average of 653 execution threads in total. By contrast, Windows 7 users typically have from 50-80 concurrent processes, and these, in turn, spawn an average of 995 execution threads. Vista systems report a similar process count, but with a higher total thread count (1013) across these processes.

                      Win XP    Vista    Win 7
Private Bytes (MB)       564     1295     1275
% Privileged Time        8.4     15.2     14.3
Thread Count             653     1013      995
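
If you want to eyeball comparable raw numbers on your own machine – no DMS agent required – a quick-and-dirty walk of the process list via the ToolHelp and PSAPI APIs gets you in the ballpark. This is a rough sketch, not our actual collection code, and it approximates “Private Bytes” with each process’s private commit (PrivateUsage):

    #include <windows.h>
    #include <tlhelp32.h>
    #include <psapi.h>
    #include <stdio.h>

    #pragma comment(lib, "psapi.lib")

    // Walk the process list, summing thread counts and (where the process can
    // be opened) private commit, to approximate the per-system totals above.
    // Processes we can't open are skipped, so the memory total under-counts a bit.
    int main()
    {
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE)
            return 1;

        PROCESSENTRY32 pe = { sizeof(pe) };
        ULONGLONG totalPrivateBytes = 0;
        DWORD totalThreads = 0;
        int processCount = 0;

        for (BOOL more = Process32First(snap, &pe); more; more = Process32Next(snap, &pe))
        {
            processCount++;
            totalThreads += pe.cntThreads;

            HANDLE hProc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                       FALSE, pe.th32ProcessID);
            if (hProc)
            {
                PROCESS_MEMORY_COUNTERS_EX pmc = {};
                if (GetProcessMemoryInfo(hProc, (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc)))
                    totalPrivateBytes += pmc.PrivateUsage;   // private commit charge
                CloseHandle(hProc);
            }
        }
        CloseHandle(snap);

        printf("Processes: %d  Threads: %lu  Private bytes: %llu MB\n",
               processCount, totalThreads, totalPrivateBytes / (1024 * 1024));
        return 0;
    }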

Another interesting metric, % Privileged Time, shows that process threads under Windows 7 spend, on average, 70% more time running in kernel mode than threads running under Windows XP (for Vista, this delta is 81%). This shift towards privileged execution could have several causes:

  1. Newer applications, like Office 2007/2010 or Internet Explorer 8.0, spending more time executing in kernel mode.
  2. New Windows services not present in XP interacting with the kernel and other low-level code, like device drivers.
  3. A more complex kernel, which itself spins an additional 40-50 execution threads in a default installation, taking more time to process kernel-mode requests.

Regardless of the cause, the fact that threads under Windows 7 are spending more time executing in kernel mode can have a direct impact on system performance. On the positive side, code that is executing in kernel mode tends to run faster since it doesn’t need to repeatedly transition into kernel mode to accomplish portions of its work (it’s already there).

However, code running in kernel mode is also inherently more difficult to multitask – it is essentially in control of the system and, in the case of a device driver’s Interrupt Service Routine (ISR), cannot be interrupted. Thus, more time spent in kernel mode often translates into reduced overall system performance, a scenario which typically can only be mitigated through the introduction of a more powerful CPU.
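
If you’re curious how your own workload splits its time, the same counter family is exposed through Windows’ stock PDH interface, no agent required. Here’s a bare-bones sketch – system-wide rather than per-process, and not our production code:

    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    #pragma comment(lib, "pdh.lib")

    // Sample the system-wide kernel-mode share over a one-second interval.
    // Note: the counter path uses the English name; localized installs may
    // need PdhAddEnglishCounter (Vista and later) instead.
    int main()
    {
        PDH_HQUERY query = NULL;
        PDH_HCOUNTER counter = NULL;

        if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS)
            return 1;
        if (PdhAddCounterW(query, L"\\Processor(_Total)\\% Privileged Time", 0, &counter) != ERROR_SUCCESS)
            return 1;

        PdhCollectQueryData(query);     // rate counters need two samples
        Sleep(1000);
        PdhCollectQueryData(query);

        PDH_FMT_COUNTERVALUE value = {};
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value) == ERROR_SUCCESS)
            printf("%% Privileged Time (last second): %.1f%%\n", value.doubleValue);

        PdhCloseQuery(query);
        return 0;
    }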

Bottom Line: Windows 7 users place significantly higher demands on PC hardware than users of its popular predecessor, Windows XP. Their workloads are typically larger (Private Bytes), more complex (Thread Count) and spend a greater amount of time running in kernel mode (% Privileged Time).

Fortunately, this additional computational overhead is mitigated by the fact that most Windows 7 users are running the OS on the latest generation of PCs, and this additional computing “horsepower” allows the OS to deliver new functionality and services while remaining responsive to user requests.

In other words, Windows 7 = Moore’s Law in action.

Note: The above statistics were generated from the over 14 billion process metrics records collected from the over 24,000 registered, active xpnet.com users. If you’d like more information about the exo.performance.network, including how to reproduce the above chart object (for free) on your own site or weblog, please visit our web site: www.xpnet.com.


Figure 1 – Get This and Similar Charts at www.xpnet.com

Read more...

Thursday, March 4, 2010

(Trends): What’s Your Favorite Alternate Browser?

One of the things about the exo.repository that we most like to demonstrate is how it allows us to extract market trending data points that nobody else has. Whether it’s the rate at which internal IT organizations are shedding IE 6.0 (hint: it’s a stampede), or how a new version of Windows is driving base RAM configurations through the roof, we love it when we come out with something truly unique.

Such is the case with one of our newer charting products, “% Users Running IE + Other.” By combing the process records of our 24,000+ registered exo.performance.network users, we can determine not only which browser they use most often (still, sadly, Internet Explorer), but also which 3rd party web browsers they use in addition to Microsoft’s ubiquitous IE.

This is especially useful when trying to reconcile the often conflicting reports of decreasing IE market share on the public web with those of IT organizations “clinging” to Microsoft’s browser out of a need to support legacy in-house web applications. But when you factor in real-world telemetry data from the exo.repository, the picture becomes much clearer.

Figure 1 – What’s Your Favorite Alternate Web Browser?

Yes, it’s true that IT organizations continue to rely on IE extensively within their enterprises, a fact borne out by the over 80% of systems we monitor which show IE in use for several hours each day. However, this seemingly disproportionate number begins to make sense when you consider that these same IE users are also regularly running at least one 3rd party web browser – most typically, Firefox or Google Chrome. In fact, upwards of 31% of the systems sampled which run IE also run Firefox, while better than 18% of sampled systems run Google Chrome (in addition to Internet Explorer).
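
For what it’s worth, there’s nothing exotic about the underlying determination: at its simplest, spotting a browser “in use” comes down to matching process image names. A stripped-down, stand-alone illustration (not our actual collection logic) looks something like this:

    #include <windows.h>
    #include <tlhelp32.h>
    #include <stdio.h>
    #include <wchar.h>

    // Scan the running process list for well-known browser image names.
    int main()
    {
        bool ie = false, firefox = false, chrome = false;

        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE)
            return 1;

        PROCESSENTRY32W pe = { sizeof(pe) };
        for (BOOL more = Process32FirstW(snap, &pe); more; more = Process32NextW(snap, &pe))
        {
            if (_wcsicmp(pe.szExeFile, L"iexplore.exe") == 0) ie = true;
            if (_wcsicmp(pe.szExeFile, L"firefox.exe")  == 0) firefox = true;
            if (_wcsicmp(pe.szExeFile, L"chrome.exe")   == 0) chrome = true;
        }
        CloseHandle(snap);

        printf("IE: %s  Firefox: %s  Chrome: %s\n",
               ie ? "yes" : "no", firefox ? "yes" : "no", chrome ? "yes" : "no");
        return 0;
    }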

Clearly, there’s more going on with web browser market/usage share than has been reported by the mainstream IT press – and that’s because the media have traditionally relied on public-facing monitoring solutions, like NetApplications, to provide them with market share numbers. But the public web is only part of the picture. What we’re providing here, at the exo.performance.network, is a different perspective: A unique (aggregated/anonymized) look inside the thousands of Windows-based systems that contribute to our growing community of IT professionals and organizations.

And, of course, we do it all for free – our way of giving back to a Windows IT community that has provided us with the support and resources we needed to construct the exo.performance.network in the first place.

Note: The above statistics were generated from the over 14 billion process metrics records collected from the over 24,000 registered, active xpnet.com users. If you’d like more information about the exo.performance.network, including how to reproduce the above chart object (for free) on your own site or weblog, please visit our web site: www.xpnet.com.

Read more...

Tuesday, March 2, 2010

(Editorial): 24,000+ Users and Counting!

We’re thrilled to announce that we’ve just passed the 24,000 registered user level. Thanks to an influx of new contributors from Poland (Viva Polska!), the exo.performance.network went over that heady mark earlier today, bringing our total repository statistics to a new all-time high:

  • Over 240 million system metrics records – per week!

  • Over 14 billion process metrics records – per week!

  • A huge cross-section of OS versions, including Windows 2000, Windows XP, Windows Vista and Windows 7 – along with a mix of Server 2000, 2003, 2008 and 2008 R2 systems.

  • The industry’s only composite index of Windows system performance: WCPI.

  • A growing library of Windows market composition and trending charts – 31 so far, with more on the way!

We’d like to thank our users for their continued support during recent days. Our steady growth is irrefutable evidence of the tremendous value we deliver every day through our free tools and services. A special thanks to everyone who emailed us with words of encouragement – we greatly appreciate your support and promise to carry on delivering the best analysis and insight in the business.

RCK

Read more...

(Trends): 1 in 4 Users Running Google Chrome

Google Chrome is on fire. The latest snapshot from the exo.repository shows the nascent web browser running on nearly one out of every four (24.88%) PCs monitored by the exo.performance.network. This represents a 2 percentage point jump in a single week, and a nearly 7 percentage point jump since the beginning of the year.

Figure 1 – Web Browser Usage Share

Meanwhile, Internet Explorer use remains strong, with better than four out of five (80.56%) exo users running some version of IE during the course of the day. And despite headlines decrying the continued use of IE 6.0 in the enterprise, exo.performance.network contributors continue to walk on the cutting edge. Fully 85% of exo monitored Vista users are running Microsoft’s latest version, Internet Explorer 8.0, while nearly 72% of exo monitored Windows XP users are running the new browser.

Figure 2 – IE Versions – % of Users of Each Platform

Suddenly, organizations like Intel – with their slow-burn adoption of new Windows and IE versions – look even more out of touch with the mainstream. And with Google and others cutting off access to their sites for IE 6.0 users, our community of IT professionals and forward-looking organizations is in an excellent position to weather the storm.

Note: The above statistics were generated from the over 13 billion process records collected from the over 24,000 registered, active xpnet.com users. If you’d like more information about the exo.performance.network, including how to reproduce the above chart object on your own site or blog, please visit www.xpnet.com.

Read more...

Friday, February 26, 2010

(Editorial): Picking Apart Intel’s Latest Windows 7 Migration Delay

I read with some amusement the recent account by an Intel IT engineer of how the company has been forced to repeatedly delay its migration away from Windows XP due to concerns for, among other things, Internet Explorer 6.0 add-on compatibility and support for applications that still use 16-bit code in places where, quite frankly, they shouldn’t.

It’s the latest in a long line of “public displays of procrastination” that helped fire the imaginations of the very publications I once contributed to. In fact, this is exactly the sort of excuse-baiting exercise that led to the creation of the controversial Save XP Campaign at InfoWorld.

In case you’ve been living under a rock for the past few years, Save XP was a program that Executive Editor Galen Gruman dreamed up on his own and then forever associated with me by launching it, without my consent, from within my InfoWorld blog (it literally just appeared there one day – like magic).

In other words, it’s “shock jock” bait heaven, and the kind of story I might have seized upon for the Enterprise Desktop. But here, at the exo.blog, I’m free to give my true and honest opinion of this kind of corporate soul baring. And in Intel’s case, I call “BS” on many of their excuses for delaying migration – both to Vista, which they skipped altogether, and Windows 7, which they’re just now getting around to addressing in earnest.

For starters, there’s the IE 6.0 issue. Intel claims to have hung on to IE 6.0 for this long because of a need to support certain important internal applications as well as legacy add-ons, including, apparently, some older version of the Java runtime. But Intel has been ignoring an important potential solution to this quandary: App-V.

The App-V runtime was purpose-built to address just this sort of scenario. It isolates file system and Registry changes made by application installers, allowing you to run multiple versions of a program, like Internet Explorer, side-by-side on the same PC.

It would be trivial to create an App-V sequenced package that encapsulated IE 6.0 (plus all of the required add-ons) and then roll this out as a short-term fix. Intel would then be free to either upgrade its OS installed base or, barring that, at least update the version of Internet Explorer to somewhere north of the Paleozoic.

And what about their 16-bit applications? Intel claims that they need to maintain a wide variety of legacy operating systems, ostensibly for testing and verification purposes (this is never made clear in the original blog post). However, why this should affect their mainstream desktop computing stack, and its transition to 64-bits, is hard to fathom.

Note: Do they really want us to believe that some parts of Intel still rely heavily on 16-bit Windows or DOS code from the pre-XP era? For line-of-business functions that affect a significant portion of their user base? Really? Because, otherwise, this line of reasoning just doesn’t hold water – even when you factor in the occasional testing and/or legacy validation requirements.

The Intel engineer-author hinted that their solution to this problem will involve some sort of integrated VM solution, like Virtual Windows XP Mode (or more likely, MED-V). However, what struck me most after reading this posting is how the potential mitigation of these issues has virtually nothing to do with any specific Windows 7 capability or advantage. Vista had similar issues, and the same proposed “fixes” (App-V, MED-V, et al) could apply equally to either version.

In fact, this whole Intel blog entry smells like so much ass covering from a company that very publicly trashed Vista by skipping the upgrade cycle altogether. That controversial move, which was widely reported at the time, helped fuel a public perception backlash that cost Microsoft millions of dollars in potential revenue.

Now Intel is trying to make amends by claiming that everything’s just peachy under Windows 7, when in reality the very same compatibility hurdles – from IE to 16-bit code and even UAC – remain. Frankly, the author could have saved himself a lot of time and effort by skipping the play-by-play recap and saying something more to the point, like:

“Hey Microsoft customer base: We screwed-up by dissing Vista, and it cost our very best buddies wads of cash. Please disregard everything we said before about compatibility hurdles and migration issues and go buy lots and lots of Windows 7 licenses. Because we really do like this one. Honest! It’s better than Vista. Trust us!”

Of course, Windows 7 is better than Vista – just not in the ways that Intel is alluding to in this semi-confessional blog post. But at least they’re finally making the long overdue move away from Windows XP. And for that, I applaud them.

Because, at the end of the day, they’re still just a bunch of hardware guys. And as any hardcore software person will attest, when it comes to figuring out what to do with all those CPU cores and gigahertz, the hardcore hardware guys really don’t have a clue.

RCK


Get more charts like these for free at www.xpnet.com.

Read more...

Thursday, February 25, 2010

(Editorial): App-V Takes Virtualization Mainstream

Those who have followed me on InfoWorld and elsewhere know that I’m a big fan of application virtualization. The idea of bottling up all of the messy deposits from a typical Windows application installation into an easy-to-deploy, self-contained package has always seemed like a good idea to me. And during my extensive testing of various “appvirt” solutions, I’ve developed some strong opinions about which approaches work best for various deployment scenarios.

For example, in a tightly-managed, generally homogenous Windows environment – with Active Directory at the core of every network – Microsoft’s own App-V solution has often seemed like the best option. However, in less locked-down environments, where portability and flexibility are the primary concerns, stand-alone (i.e. no client agent required) solutions, like VMware ThinApp or XenoCode, have always been at the top of my recommendation list.

I summarized my findings in a white paper that I published through the exo.blog in early 2009. This report, the development of which was funded by VMware, shows how tricky it can be to determine which virtualization platforms provide the best performance across a range of use cases. You can grab a copy of the white paper here.

Now, with the release of App-V 4.6, Microsoft has raised the bar a bit for its competitors. For starters, the new version allows you to sequence (i.e. capture the output from and virtualize the installation of) 64-bit Windows applications. This is significant in that Microsoft’s upcoming Office 2010 will be available in a 64-bit format, and shops using App-V will no doubt want to be able to virtualize it as they do the 32-bit version of Office 2007 now. However, the more important new feature is the capability to deploy virtualized applications to clients running the 64-bit versions of Vista and Windows 7.

Previous versions of App-V were incompatible with 64-bit Windows due to their lack of an x64-compatible kernel mode agent. This is one of the reasons why I’ve traditionally recommended VMware ThinApp for customers with a significant installed base of 64-bit clients. However, while ThinApp-encoded applications will indeed run on 64-bit Windows, the virtualization engine itself is 32-bit only. You can’t encode a 64-bit application with ThinApp, and 32-bit encoded applications are treated like any other Win32 application running atop the WOW (Win32 on Win64) compatibility layer.
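
That distinction is easy to verify on your own machine: a 32-bit image launched on 64-bit Windows – ThinApp’s runtime included – reports itself as running under the WOW64 layer. A minimal check, run here against the current process purely for illustration:

    #include <windows.h>
    #include <stdio.h>

    // Report whether a process (here, the current one) is a 32-bit image
    // running under the WOW64 emulation layer on 64-bit Windows.
    // IsWow64Process shipped with XP SP2, so resolve it dynamically in case
    // the code ever runs on an older build that lacks the export.
    int main()
    {
        typedef BOOL (WINAPI *IsWow64Process_t)(HANDLE, PBOOL);
        IsWow64Process_t pIsWow64Process = (IsWow64Process_t)
            GetProcAddress(GetModuleHandleW(L"kernel32"), "IsWow64Process");

        BOOL isWow64 = FALSE;
        if (pIsWow64Process && pIsWow64Process(GetCurrentProcess(), &isWow64))
            printf("Running under WOW64: %s\n", isWow64 ? "yes" : "no");
        else
            printf("IsWow64Process not available or the call failed.\n");

        return 0;
    }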

With both native 64-bit application support and the capability to be deployed on 64-bit Windows editions, App-V has pulled ahead of the competition and established Microsoft as the technology leader for this category. I’ll be revisiting my original test results in the coming days as I see what, if any, improvements Microsoft has made in the performance and overall runtime footprint of their solution. Stay tuned.

RCK


Figure 1 – The Latest WCPI Index Values

Read more...

(Trends): Windows 7 Drives RAM Size Surge

The latest data from the exo.repository shows Windows 7 driving a measurable surge in average RAM configurations across the nearly 24,000 registered xpnet.com contributors. According to repository snapshots taken in the weeks following the Windows 7 launch, the average RAM configuration for PCs running Microsoft’s newest OS has increased from 3.15GB on November 30th, 2009, to 3.76GB on February 25th, 2010 – an increase of just over 19%.


Figure 1 – Average RAM Sizes – 11/30/2009

By contrast, average RAM sizes for PCs running Microsoft’s Windows Vista and XP have remained flat at 2.7GB and 1.7GB, respectively.

Figure 2 – Average RAM Sizes – 2/25/2010

The lack of movement on these legacy OS platforms reflects the rapid influx of Windows 7 PCs into the exo.repository. An analysis of the most recent 1000 exo.performance.network registrants shows a phenomenal uptake in Windows 7 adoption, with 62% of newly registered PCs running Microsoft’s latest version vs. 28% running Windows XP and a meager 8% still running the much-maligned Windows Vista.

Figure 3 – OS Adoption Rates – Last 1000 Registrants

Bottom Line: Windows 7’s influence is increasingly being felt across the exo.repository, with nearly two out of every three newly registered systems running Microsoft’s latest and greatest. And along with this uptick in Windows 7 adoption comes an increase in the average RAM configuration for PCs participating in the exo.performance.network, and by extension, a significant cross-section of the general Windows system population. This is good news for software developers who have been waiting for average RAM configurations to increase before adding new, potentially memory-intensive features and capabilities to their application designs.

Note: The above statistics were generated from the over 230 million process records collected from the nearly 24,000 registered, active xpnet.com users. If you’d like more information about the exo.performance.network, including how to reproduce the above chart object(s) on your own site or blog, please visit www.xpnet.com.

Read more...