EARLIER THIS MONTH we wrote about Microsoft coming under threat of lawsuits over these very same practices. At the time the victim of the benchmark fraud was IBM; this time it is Oracle.
They neglect to say that the MSSQL server has 8 cores, whereas the Oracle server has only 4. Oiaohm adds that "HP-UX has the lowest benchmarks with Oracle. Solaris and Linux outscore it. Basically, Microsoft cheats on benchmarks at every chance. [...] Also thinking Oracle also runs on Windows. Benchmark was very incomplete. [...] Also lower clock speed processors."
"It's a really stacked config," adds the person who sent us this information. "Even with it not being HPUX, you are looking at 4 dual-core Opterons versus 4 single-core Itanium2 processors. Quite a big speed difference too."
To conclude, he adds: "The point was to show MSSQL was faster than Oracle. They want you to buy their database, not just the OS. It's just one more effort on Microsoft's part to spin bad data into a convincing glossy blurb to appeal to the C-levels. I don't mind if they do a fair comparison and win, but this kind of stuff just hurts their credibility."
Such benchmark fraud should be reported to the ASA for deceptive marketing. This has happened before and the same should be done about "<vendor> recommends Vista" [1, 2] and other marketing schemes, maybe even "it's better with Windows" [1, 2].
Microsoft keeps wondering why it is not liked in IT circles. It is not because "it's a big company."
"Microsoft did sponsor the benchmark testing and the NT server was better tuned than the Linux one. Having said that, I must say that I still trust the Windows NT server would have outperformed the Linux one."
--Windows platform manager, Microsoft South Africa
Reference: Outrage at Microsoft’s independent, yet sponsored NT 4.0/Linux research
Comments
Jose_X
2009-07-01 04:47:09
Keep in mind this is a hypothetical exercise.
We have two pieces of hardware: A (ours) and B (theirs).
We have the corresponding platform software: for A (our platform sw) and for B (their platform sw).
We have the product being tested on each (in this case, it's their server software).
The first step is to find an A that outperforms their B hardware. This is easy to do unless B is the fastest supercomputer on record. Obviously it isn't, so we can definitely find an A that beats whatever B is. [E.g., a 4GHz x86 beats a 1GHz x86 from the same vendor.]
Each platform software performs about the same as the other under ordinary circumstances (or maybe ours is a bit worse). That means we optimize extra for the occasion, which is easy to do by removing security and other checks. We can keep special task/process-related memory objects preinitialized in anticipation. We can simplify and speed up our scheduling. We can give the special process high priority for the CPU and the filesystem (bypassing security checks, etc.) while putting everything else, including the GUI, into slow, low-priority mode. We can turn the kernel's dynamic lists into static ones. And so on. It really is possible to optimize well for the occasion when we know the system will only ever be used for one specific purpose (winning a benchmark). Also, where possible, the platform software we choose for their side is their generic platform software (e.g., their regular platform software, not optimized for this benchmark).
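The benchmark-only tuning described above can be sketched, purely as a hypothetical, in a few lines of Python. Everything here is invented for illustration (`favor_benchmark_process`, the buffer counts, the nice level); it simply shows the two tricks named in the comment: grabbing scheduling priority and pre-paying allocation costs outside the timed window.

```python
import os

def favor_benchmark_process():
    """Hypothetical sketch: give the current (benchmark) process an edge.

    This is an illustration of the tactic, not any real vendor's code.
    """
    try:
        # Raise CPU scheduling priority. On Unix, lowering the nice value
        # below zero requires privileges, so an unprivileged run falls back.
        os.nice(-10)
    except PermissionError:
        pass

    # Pre-initialize the memory objects the benchmark will need, so the
    # allocation cost never appears inside the measured interval.
    preallocated = [bytearray(4096) for _ in range(1024)]
    return preallocated

if __name__ == "__main__":
    buffers = favor_benchmark_process()
    print(f"reserved {len(buffers)} buffers at nice level {os.nice(0)}")
```

A stock configuration, of course, would do none of this, which is exactly the asymmetry the comment is describing.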
So that is how we easily got the improved performance.
However, we need to control further context in order to pull off the coup. What about the price, right? After all, a supercomputer outperforms a pocket calculator, but people don't buy supercomputers to compute the tax at a restaurant. The context in this case is that the supercomputer is a LOT more expensive. We need to get the price of our "supercomputer" down to a competitive level.
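The price objection above is just arithmetic: what matters to a buyer is performance per dollar, not raw performance. A toy calculation (all numbers invented for illustration) shows why the rigged model's price has to be driven toward cost for the comparison to flip:

```python
def price_performance(tps: float, price: float) -> float:
    """Transactions per second bought per dollar (toy metric)."""
    return tps / price

# Invented figures: our faster-but-pricier system vs. their stock system.
ours_real   = price_performance(tps=10_000, price=200_000)  # true street price
ours_rigged = price_performance(tps=10_000, price=40_000)   # near-cost "special" model
theirs      = price_performance(tps=8_000,  price=60_000)

assert ours_real < theirs     # at the real price, we lose on value
assert ours_rigged > theirs   # at the artificial price, we "win"
```

The performance number never changes between the two rows; only the sticker price does, which is the whole point of the hypothetical.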
Here is how we carry out this step. We work with the hardware partner. They develop an exclusive model that they will price near cost. We also give away our platform software at near cost (it's a "special configuration" remember). Voila! We got our costs down because we and our partner have no intention to actually sell many of these models to actual customers.
So we kick their buttocks, and customers flock to our product.
Then...
The hardware model runs out quickly and a very slightly differently named/numbered hardware model is put in its place at a higher price.
Also, our platform software is changed back to normal, except that now it actually doesn't run their server software all that well compared to our own competing server software (which was not tested in the benchmark). It's extremely easy to change platform software bits around so that an app that was favored is no longer favored and is actually handicapped. It's also very difficult to catch this when third parties don't have the source code. And for subtlety, the change can be rolled out later through one or more automatic online updates/patches.
Of course, the price of the platform software also goes up eventually, if not initially. Maybe its price goes up at the one-year renewal, or when the customer exceeds an artificially low user count. Or perhaps the price is raised quietly through the bundled software/service package "deal" the customer actually ended up buying. There are many ways to guide them into these higher-priced options.
Profit.
Recap: We found better hardware, tweaked only our platform software to game the benchmark, and artificially lowered the price of this model in order to win the benchmark's price-comparison test. Then we swapped this system for a regular one, threw in some more items, and modified the platform software (over time) to disfavor the very application we had favored for the benchmark. Through this bait and switch we won the contract, and later, by controlling the platform software, we disgraced their product in order to upsell our own in its place. Our software was perhaps slightly worse, yet we won and pulled in far more money than what they were advertising as their price tag. A full sleight of hand.
This is dirty, absolutely. It's deceptive. It's anti-consumer and anti-competitive. It likely leverages monopolies later on in the upsell. It is perfectly within Microsoft's capabilities to pull off. It would be consistent with Microsoft's past behavior.
Keep in mind, however, that this was only a hypothetical exercise.
Jose_X
2009-07-01 04:50:16
What doesn't change is the story about deception, which is also a story about trust.