Showing posts with label Anders Hejlsberg. Show all posts

Thursday, June 10, 2010

So Microsoft's experiments with Software Transactional Memory have ended, aye?



Wow... shocking...

A month or two ago, I was listening to an episode of the .NET Rocks Podcast where Anders Hejlsberg slagged Software Transactional Memory. "It's the gift that keeps on giving... in terms of complexity. The overhead is terrible also. We're looking at two-fold and four-fold increases in processing time even in the best case scenario."

For those who have no idea what I am talking about, STM was considered the silver-bullet for the problem of parallel programming. Guys like me have serious problems constructing highly-parallel applications. I can do some threads. I've tried P-Linq. I am even fooling around with the Task Parallel Library. However, none of this gets me (or anyone else) to the point where you can construct extremely parallel systems. This is a very dark and arcane art for super-geniuses.

STM was supposed to be the closest thing to a silver-bullet for this problem. In theory, it would have enabled a bunch of dudes like me to collaborate and build a seriously parallel system in pretty much the same way we have always coded. Supposedly the performance was exceptionally good also. I remember hearing Microsoft researchers at Cambridge, England stating this very thing.
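The idea is that you write straight-line code and the runtime wraps it in a transaction: read what you need, compute privately, commit if nothing changed underneath you, retry if it did. A real STM tracks read/write sets across many variables; as a toy sketch of that optimistic read-copy-commit loop over a single cell, here it is in Java (class and method names are my own invention):

```java
import java.util.concurrent.atomic.AtomicReference;

public class TxCounter {
    // Toy illustration of the optimistic read-copy-commit loop at the heart
    // of most STM designs. A real STM covers many variables at once; this
    // sketch covers a single cell only.
    private final AtomicReference<Integer> cell = new AtomicReference<>(0);

    public void increment() {
        while (true) {
            Integer snapshot = cell.get();      // read
            Integer proposed = snapshot + 1;    // compute privately
            if (cell.compareAndSet(snapshot, proposed)) {
                return;                         // commit won the race
            }
            // Someone else committed first: retry the whole "transaction".
        }
    }

    public int value() { return cell.get(); }

    public static void main(String[] args) throws InterruptedException {
        TxCounter c = new TxCounter();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) c.increment();
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(c.value()); // 4000: no increments lost
    }
}
```

The retry loop is also where the overhead Anders complains about lives: under contention, transactions do their work, fail to commit, and do it all over again.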

Well... no....

Anders says it performs like shit. He said that execution times increase 200-400% in the best case scenario. That is not good performance.

Evidently, Anders had the ammo to back it up. According to recent reports, Microsoft's experimentation with STM has ended, and there were no engineering results. That means Microsoft will not be introducing any new products based on STM technology, or incorporating STM into any existing product lines.

Damn... I am so disappointed...

I think there are two things we can take away from this monumental moment:
  1. This is another case where the academic computer world has made huge promises--like artificial intelligence--and come up with shit in their hands.
  2. Witness the power of Anders Hejlsberg, tech geek in a basement, and his ability to shut down Ph.D.-minted Cambridge scientists.
Years ago, I decided not to major in computer science because the fuck-heads running the UCLA department of computer science did not know what the hell they were doing. I could recount their reaction to Visual Basic 1.0, but I have already blogged on that.

Of course, AI has been the perpetual waste of billions in research money. We have gotten little or nothing out of this research. The best we have done is a couple of medical "Expert Systems" written in Prolog that help doctors diagnose really rare & difficult problems. According to many doctors, those tools aren't that good either.

Now we see the apparent demise of Software Transactional Memory (STM). I am so disappointed to learn that our boys in Cambridge were ivory-tower academics with their feet firmly planted in mid-air. I thought these guys were practical.

Do you remember what Dr. Stantz said to Dr. Venkman in Ghostbusters? "You don't know the private sector! I've been there. They actually expect results!"

Yeah, we expect results. Researchers work under ideal conditions, and they don't need to produce results. Engineers work under real conditions, and they had better produce results. It looks like Cambridge's claims of outstanding performance were based on ideal conditions, not real conditions.

Disgusting! I've been taken in by academic clowns! Bastards!

Second of all, this event speaks volumes about Anders's power as a good ol' tech geek. Anders and Bill Gates have similar origins. Anders was the better looking guy, and the smarter guy, but Bill was more ambitious and a better businessman. Both of them were tech geeks in a basement who never really finished their academic degrees. Now these guys arguably have more power over the real software world than just about anybody else.

Wow man... The Cambridge guys had me so convinced... If I had heard it from any other source than Anders I would have discounted it as a crackpot remark. Anders has a ton of street-cred with me.

Saturday, May 29, 2010

Outlook: The object of ridicule among Microsoft programmers

I recently had the pleasure of catching up with some of my favorite Podcasts. These include such stalwarts as .NET Rocks, Hanselminutes, the Polymorphic Podcast, and the Java Posse... among others.

I was impressed by the number of Outlook jokes these programmers were cracking. What is an Outlook joke? Any joke which sticks a thumb in the eye of the designers and programmers responsible for the Outlook application. The suggestion is always the same: They did an outrageously bad job on this application, they obviously don't know what the hell they are doing, and they should be ashamed of themselves.

Example: Anders Hejlsberg, probably the greatest genius at Microsoft, is being interviewed about the future direction of the .NET programming platform. He is the man most responsible for setting the course in programming systems at Microsoft. Anders is explaining the sorts of strides that must be made with programming languages, techniques and tools to support the multicore present and future. Anders believes that the .NET development team must revisit the basic design of threads in .NET. When a user writes something like var x = new Thread(), the CLR initializes such a thread with a 2 MB logical address space. This would mean only 2048 threads in total would be possible in a 32-bit address space, and frankly you'll never get close to that number before memory becomes so congested that problems strike.
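The 2 MB figure is the CLR's default reserved stack per thread, and the arithmetic is just 4 GB of 32-bit address space divided by 2 MB per thread. The JVM has the same economics with a different default, and Java even exposes a constructor that lets you request a smaller stack. A sketch (the stackSize argument is only a hint, which some JVMs ignore):

```java
public class SmallStackDemo {
    public static void main(String[] args) throws InterruptedException {
        // The four-argument Thread constructor takes a stack-size request.
        // It is a hint, not a guarantee: the JVM may round it or ignore it.
        long stackSize = 128 * 1024; // ask for 128 KB instead of the default
        Thread t = new Thread(null,
                () -> System.out.println("ran with a small stack"),
                "small-stack", stackSize);
        t.start();
        t.join();

        // The post's back-of-the-envelope math: 4 GB / 2 MB per thread.
        long maxThreads = (4L * 1024 * 1024 * 1024) / (2L * 1024 * 1024);
        System.out.println(maxThreads + " threads max at 2 MB per stack");
    }
}
```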

Richard Campbell, cohost of .NET Rocks, seizes the opportunity to strike: "But that would mean we can only run Microsoft Outlook on our machines with 4 GB, and then we are all done?!?" This brought howls of laughter. Anders did not really want to comment. It would be tough to venture a defense. Believe me, if there had been something to defend, he would have mounted a defense. Anders is extremely knowledgeable and fair minded. He loves to argue. The fact that he dropped the subject speaks volumes.

Anders's motto is "I think we can do better." He is always seeking to "do better". Believe me, if Anders were tasked to run the Outlook group, he would kick a lot of asses right out the door. He might recommend the dreaded EOL logo also.

I heard several other jokes on the other Podcasts. Those would be more difficult to explain. Even the Java Posse, a group unrelated to Microsoft, were wondering aloud why Microsoft programmers continue to suffer Outlook when we hate it so. The answer is simple: Because bad men with MBAs continue to ram it down our throats. Believe me, we would burn it, if we could.

Folks, I want you to know that Outlook is a bad joke among Microsoft programmers. It is a byword meaning "everything done wrong" from the programmer's point of view. It is a detestable, reprehensible object of hissing and horror. None of us can understand why the bloody bastard has 32-46 active but idle threads open when you are doing nothing with the application. We still can't understand why Microsoft would ever have thought that modules of Visual Basic behind your eMail (Active Mail) were anything but a virus writer's wet dream.

Folks, it goes deeper than that. All groupware functions can be performed in a web environment. No client-side tool is necessary. Microsoft has a fully-orbed web-based system themselves with Outlook Web Access (OWA). They should have placed Outlook in the EOL (end of lifecycle) category and deprecated it immediately when this web edition hit the streets. Microsoft Live! has an even better implementation for most of us. Google long since crushed Outlook with their Gmail-based suite of scheduling and groupware tools. Mail and groupware are simply the perfect case study for "software as a service."

It is conceptually wrong to build a thick client, or maintain an existing thick client, for the purpose of doing mail, schedule, and group actions. If you go to Visual Studio and select File->New->WinForm Application, and begin to write any sort of thick client for eMail and groupware, you are instantly wrong. You have made a mistake. The mistake isn't that the niche is filled. The mistake is that there is no niche in the first place. These apps should not exist.

There are many pieces of evidence for this. I know many scores of Mozilla Firefox users. I know of no one in my circle who uses Thunderbird. Oh, what's that? You've never heard of Thunderbird? Excellent! That's precisely my point. Thunderbird is the Mozilla mail client.

When I ask Mozilla fans why they don't download and use Thunderbird for free, the answer is always the same: I can see no use for the application. Now that we have webmail, who needs a mail client on the machine? These individuals are thinking correctly. Their conclusions are well merited and valid.

Outlook is evil. Outlook should not be used. I am not the only Software Engineer who thinks so. If you are dependent upon Outlook, you are wrong, period. Get right. If you have built a business around Outlook, you have built your business poorly upon shifting sands. You are also wrong and should correct the situation as soon as possible.

Tuesday, May 12, 2009

The problem with C# 4.0


I hear many things about my favorite programming language these days.  Not all are favorable.  Some statements conflict.  There are a couple of facts about the latest edition (additions) to C# that cannot be denied.

1.  Anders has added in support for dynamic language interop.  This is mostly a matter of implementing a new static type called dynamic.  I guess that is the C# way to do it.  Basically, a declaration of 'dynamic X;' means we will obtain a boxed object with some metadata attached to it to let us know what type it really is.  This is almost like generics in Java.  The performance penalty is not too terrible at run time when all factors are considered.  Still, there will be a penalty.  This won't be high performance.  Worse, there is no question that this is full-bore late binding.  In any late-binding situation, you lose type safety.  No compiler can ever predict that any piece of late-bound code will work, or work in all scenarios.  The compiler will go along with it, presuming that you know what you are doing.  When the build is successful, do not presume that the compiler has endorsed your code.
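The compiler will happily emit a call it cannot check; the failure surfaces only when the call site executes. Java has no dynamic type, but reflection gives the same flavor of late binding, which makes the trade-off easy to demonstrate (the quack method is deliberately fictitious):

```java
import java.lang.reflect.Method;

public class LateBinding {
    public static void main(String[] args) throws Exception {
        Object s = "hello";

        // Resolving a method by name at run time: the compiler can no
        // longer verify the call, just as with C#'s 'dynamic'.
        Method ok = s.getClass().getMethod("toUpperCase");
        System.out.println(ok.invoke(s)); // HELLO

        try {
            // This line compiles fine, but String has no quack() method.
            s.getClass().getMethod("quack");
        } catch (NoSuchMethodException e) {
            System.out.println("failed only at run time: quack");
        }
    }
}
```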

2.  Anders has added support for optional parameters, so we can now have pain-free use of C# in connection with the dreadful, much-detested, out-dated, outmoded Microsoft Office object model.  Optional params are big in OLE2 or COM or ActiveX or whatever you want to call that disgusting old shit we used to do.  Lamentably, the dastardly Microsoft Office object model is still a filthy COM beast.  You should obtain less painful interop with all COM servers because of Anders's most recent additions to C# 4.  Still... this is a dirty thing to accommodate.  It is the programming equivalent of saying "have sex with HIV infected individuals, just use this condom".

3. There is also some stuff about co & contra-variance with greater safety.  I have never stubbed my toe on Co & Contra Variance, so I see no issues solved here.  I suppose this helps in a few marginal edge cases.  I know of none in my code.
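For anyone who has not stubbed a toe on it: variance is about when a generic of a subtype may stand in for a generic of a supertype. Java spells the two directions with wildcards, which makes for a compact illustration (the sum and fill helpers are invented for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class Variance {
    // Covariance: read from a producer of "some subtype of Number".
    // You may read Numbers out, but the compiler forbids adding to it.
    static double sum(List<? extends Number> src) {
        double total = 0;
        for (Number n : src) total += n.doubleValue();
        return total;
    }

    // Contravariance: write into a consumer of "some supertype of Integer".
    // You may put Integers in, but reads come back as plain Object.
    static void fill(List<? super Integer> dst) {
        dst.add(1);
        dst.add(2);
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        fill(ints);                    // List<Integer> is a valid consumer
        System.out.println(sum(ints)); // ...and a valid producer of Numbers
    }
}
```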

4. Finally, Anders & co are rewriting the C# 4.0 compiler in C# itself, thus making the compiler available as just another managed-code software service at run time.  The presumption is that C# 4.0 programmers will now use this handy-dandy, ever-ready compiler to engage in meta-programming techniques and tactics.  Once upon a time, this was the exclusive privilege of people coding with interpreters.  Java programmers can only get access to this sort of dynamism through a jar library called Janino.

These last two items are of no consequence to guys like me.  Some may be jumping for joy, but I don't need a runtime compiler just now.

Surprisingly, the announcement of these language features has tripped off a debate surrounding the question of whether C# is getting too big for its britches.  Some say that C# is becoming an over-bloated Swiss Army knife.  I really think this debate misses the point entirely.  The issue is not one of whether the language is getting too big, but rather whether it is growing in the wrong direction.

Certainly, I do not welcome the advent of the dynamic type.  I don't need it, and I don't plan to use it.  I suspect that the mere presence of this type will slow the compiler considerably.  I hope it doesn't result in generally fatter CIL for all, which would in turn lead to fatter X86 code.

I am not a fan of optional parameters.  Optional parameters are an element & aspect of a generally sloppy approach to programming.  That is why they are in VB.  They are not unlike the unstructured goto statement.  They are just bad for your code.  Either a param is necessary or it is not.  If it is necessary, it should always be required, and it should always be set.  Programmers should not be allowed to 'safely' ignore a parameter.  This is one of the many ways in which unexpected and somewhat unpredictable behavior emerges in a system.  Programmers should be aware of the change in behavior flag settings will produce.  Of course, I realize that optional parameters were big in the days of COM.  This is one of several reasons why COM was a dirty, filthy, nasty, ugly, disgusting, detestable, wretched, leaky, buggy, unstable, evil piece of shit.  [Did I mention that COM had no security model and was single-handedly responsible for the Great Spyware Pestilence of 2003?]  There are explicit reasons why we flushed COM down the toilet in .NET.  One of the reasons optional params were rejected in C# is that we wanted to rid ourselves of this corruption.  Now here we go again.  Not good.

I understand why Anders reversed himself.  Many, many, many C# programmers have been bitching for years about how difficult it is to drive the Microsoft Office model because C# does not support optional params.  Anders caved in under pressure from customers and the high command.  In a very real sense, I know this will make my life easier when I must do an Office automation project.  Still, I do not look forward to fully managed C# libraries, with no element of COM, which show optional params all over the place.  This eventuality will signal the arrival of sloppy coding techniques in the rather purist world of C#.
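The discipline argued for above can be kept without optional parameters at all. The standard move in languages that lack them is a thin overload that funnels into one canonical method, so every call site either states each value or knowingly inherits an explicit default. A sketch in Java (the Mailer class and its parameters are made up for illustration):

```java
public class Mailer {
    // Overload-as-default: the two-argument form exists only to forward
    // an explicit value to the canonical three-argument method, so the
    // default lives in exactly one visible place.
    static String send(String to, String subject) {
        return send(to, subject, /* highPriority = */ false);
    }

    static String send(String to, String subject, boolean highPriority) {
        return (highPriority ? "[URGENT] " : "") + subject + " -> " + to;
    }

    public static void main(String[] args) {
        System.out.println(send("a@example.com", "hi"));
        System.out.println(send("a@example.com", "hi", true));
    }
}
```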

My real bitch is that the two language features we really need the most did not make it into C# 4.0.  What are those features?
1.  Traits, just like in Scala
2.  XML literals just like in VB.NET

In all honesty, these are the only two linguistic innovations I have seen in the past 6 or 7 years that did not originate in the C# project.

Traits are basically implemented Interfaces which you can tack on to almost any object.  They give you the advantages of interfaces with the advantages of mixins in Ruby.  The final yield is a language feature which gives you the reliability of single inheritance with some of the advantages of multiple-inheritance.
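Neither C# nor the Java of this era can stack stateful traits the way Scala does, but Java's default methods on interfaces (a later addition to that language) give a rough feel for "implemented interfaces you can tack on". A sketch with invented names; note that real Scala traits can also carry state, which this cannot:

```java
public class TraitsDemo {
    // A trait carries implementation, not just signatures. Default methods
    // approximate that: behavior arrives just by implementing the interface.
    interface Greets {
        String name();
        default String greet() { return "hello, " + name(); }
    }

    interface Shouts {
        String name();
        default String shout() { return name().toUpperCase() + "!"; }
    }

    // Mix both behaviors into one class with no inheritance chain at all:
    // single inheritance stays intact, yet two implementations are reused.
    static class Person implements Greets, Shouts {
        public String name() { return "anders"; }
    }

    public static void main(String[] args) {
        Person p = new Person();
        System.out.println(p.greet());
        System.out.println(p.shout());
    }
}
```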

XML literals threw me for a complete loop.  XML literals took VB, a language which was dying IMHO, and suddenly made it the best language for the MVC framework. You don't know how much it hurts me to say this, but I would never want to code a helper class in MVC without the aid of VB.NET XML literals. This is not to mention all of the complex XML transforms which this system makes incredibly easy.  

XML literals are probably the greatest, most practical, most useful idea I have encountered in the past 5 years.  The biggest thing since Generics.  Scala also has a form of XML literals, but they don't seem backed by the sort of library power VB.NET offers.  In any case, C# should have had this feature.  Omitting XML literals from C# 4.0 is the same thing as capitulating the MVC framework to VB.NET.  Anders needs to take notice of this fact and make some corrections.

So the final verdict is clear.  C# is not guilty of getting too big.  C# is guilty of expanding in the wrong direction.  C# is guilty of not expanding in the right direction.  All-in-all I am quite disappointed.

Wednesday, May 6, 2009

Java programmers are just hog wild about AOP

Recently, a flaming post went up on the DZone.com website.  A Java fellow posted a blog in which he bluntly stated that he would much rather employ Java programmers than C# programmers.  The central point of the piece is that AOP is extremely popular in Javaland, and all Java programmers (young and old) are well versed in the paradigm.  Such cannot be said for the C# programmer.  C# programmers seem to think that AOP is for testing only.  We don't seem to understand the other mega-benefits of AOP.  For this reason, you should hire Java guys, not C# guys.

I've given this political statement consideration over the past several days, and I am completely sure that this is a further example of the ethnocentric and narrowly specialized learning of all programmers in our field.  You see, the Java guy clearly doesn't understand the deep strategy of C#.  Neither does he understand the deep strategy of Scala.  Neither does the C# programmer understand why AOP is totally essential to life in the Java ecosystem.  The C# programmer does not understand why a lack of events and delegates in Java would push the entire ecosystem towards AOP.

Let me tell you about it.

Since the dawn of time, the dream of programmers everywhere has been code reuse and modularity.  We want to stop writing the same code over and over again every time we start a project.  We want a nice library we can compose with.  Composition is a big fucking word in our profession.  It is the dream of the blue turtles.  We want to write just those little bits of app-specific logic our customers demand.  Composition and reuse are the absolute ideas driving most changes over the past 30 years of computer languages.

Simple OOPs is nothing more than an attempt to achieve composition and reuse.  In the days of Smalltalk, simple OOPs worked out fairly well.  Because of the weak typing system, simple objects did the trick.  It was easy to interchange BaseA object with ModifiedA object, or BaseA with BaseB, or ModifiedA with ModifiedB.  The weak and dynamic typing system just went along with it, and if it could work, the interpreter would make it work.  For this reason, composition was pretty dang easy in Smalltalk.  There was pretty good code reuse in Smalltalk.

The problem is that a lot of us hate interpreters and weak typing systems.  A lot of us outright fear dynamic systems for their unpredictable behavior and their slowness.  Others hated the ugly and goofy looking interface of the Smalltalk virtual machine.  Ergo, Smalltalk was not widely adopted and it ultimately died.  It was always a niche thing which the Ubergeeks used.  It is now having an interesting undead afterlife in the form of Squeak, but Smalltalk is dead nonetheless.

In response to Smalltalk, a number of strongly typed and compiled OOPs languages were invented.  C++ and Object Pascal were two perfect examples of this movement.  The problem was that they were only half-assed OOPs languages.  They were also structured.  A lot of Delphi programmers programmed in flat structured Pascal and claimed to be object oriented.  Likewise, a lot of C++ programmers were really writing C code and claiming to be object oriented.  As you might imagine, this approach did little for Composition and Reuse.  Worse still, once the language is strongly typed, interchange of BaseA object with ModifiedA object, or BaseA with BaseB, or ModifiedA with ModifiedB is no longer possible.  The signature of a function or a method demands a specific type.

Next, Java came around with a simple language and simple OOPs and the notion that you should inject dependencies in an AOP fashion.  In this approach, you pass everything by interface type or abstract class.  This loosens the type system enough so that interchange of BaseA object with ModifiedA object, or BaseA with BaseB, or ModifiedA with ModifiedB is possible... within some limits.  IoC and DI were formalized in things like Spring, AspectJ, and Guice to make this even easier.  You get pretty good code reuse in this AOP approach to doing business.  Implicit within this success story is a clear-cut admission that pure OOPs doesn't yield composition and reuse when you are compiled and strongly typed.  You need to take an AOP approach, or you don't get composition and reuse at a good high level.
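The mechanics are simple enough to show in a few lines. The consumer declares a dependency by interface, and the caller (or a container like Spring or Guice) decides which concrete class flows in. A hand-rolled Java sketch with invented names:

```java
public class InjectionDemo {
    // The consumer depends on this interface, never on a concrete class,
    // so BaseA/ModifiedA style swaps never touch the consumer's code.
    interface Storage {
        void save(String record);
    }

    static class MemoryStorage implements Storage {
        final StringBuilder log = new StringBuilder();
        public void save(String record) { log.append(record).append(";"); }
    }

    static class OrderService {
        private final Storage storage;              // injected dependency
        OrderService(Storage storage) { this.storage = storage; }
        void place(String order) { storage.save(order); }
    }

    public static void main(String[] args) {
        MemoryStorage mem = new MemoryStorage();
        OrderService svc = new OrderService(mem);   // wiring done by hand;
        svc.place("order-1");                       // a container automates this
        System.out.println(mem.log);
    }
}
```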

At roughly the same time (a bit earlier) Delphi came around with the notion that you should have something greater than objects called components.  These should be wired together with delegates and events.  Delphi died.  Java won by default.  Delphi was reborn as .NET and C#.  Suddenly the battle was renewed.

Of all the things that have triggered the greatest number of flame-wars between Java and C# programmers over the past 8 years, a pure lack of understanding of the .NET Component is top of the list.  C# programmers know they have components.  Some are dimly aware of how they work.  Many of them have no idea in the world that Java lacks components.  They don't understand that strategy and command patterns are built into the CLR as first class citizens.  Specifically, strategy=events and command=delegates.  [Some would rap my knuckles for making that hard fast association.  Bring it on.  You'll get knocked the fuck out.]  These first-class citizens of the .NET framework are the foundation of components and of the loosely coupled, modular, composable, reusable code framework that we enjoy in the .NET system.

Java programmers do not understand this.  Events, delegates, properties, full lexical closures, all these things work together to make AOP far less necessary in the .NET programmer's life than in the Java programmer's life.  Conversely, because you lack events, delegates, properties, and full lexical closures in Java, you need AOP much more than we do.  We get good composition and reuse without AOP.  We get good simple code without AOP.  If you think your AOP code is clean, you should see our component code.  It is wired together and loosely coupled with Events and Delegates.
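To make the comparison concrete: what C# spells natively as event and delegate, a Java programmer has to hand-roll as a list of callbacks. A minimal Java sketch of a multicast "event" with two loosely coupled subscribers (the ClickButton class is invented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ClickButton {
    // A hand-rolled analogue of a C# multicast event: a list of callbacks
    // that the component fires without knowing who is listening.
    private final List<Consumer<String>> clickHandlers = new ArrayList<>();

    public void onClick(Consumer<String> handler) {
        clickHandlers.add(handler);       // C#: button.Click += handler;
    }

    public void click(String who) {
        for (Consumer<String> h : clickHandlers) h.accept(who);
    }

    public static void main(String[] args) {
        ClickButton button = new ClickButton();
        // Two subscribers, coupled to the button only through the callback.
        button.onClick(who -> System.out.println("logged click by " + who));
        button.onClick(who -> System.out.println("updated UI for " + who));
        button.click("alice");
    }
}
```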

This is not to say that AOP is completely irrelevant to the C# programmer.  Many of us, especially me, are asking serious questions these days like:

1. How can we take advantage of some of the goodness of AOP?  
2. When should I select AOP rather than a component approach?  
3. What are the specific scenarios where AOP is preferable to writing a component?  
4. Can AOP improve certain approaches where components don't work so well?
5. If so how?

So far, there is a lot of debate about this topic among serious minded C# programmers.  We don't have a lot of clear-cut, indisputable design-wins and use-cases for AOP.  Logging application activities and errors is one winner.  Automatic unit testing of production code is another.  This is a strong sign of the high value of components.  They only break down {with certainty} in a couple of scenarios.  Of course, we do not yet understand AOP as well as we will in 10 years time.  In that time we may have even more compelling use-cases for AOP.  Right now, we don't.
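Logging, the one design-win conceded above, happens to need no AOP framework at all. The JDK's java.lang.reflect.Proxy can intercept every call made through an interface, which is the whole trick behind a logging aspect. A sketch:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class LoggingProxyDemo {
    public static void main(String[] args) {
        List<String> real = new ArrayList<>();

        // The handler sees every call made through the proxy: log the
        // method name, then forward the call to the real object.
        InvocationHandler logging = (proxy, method, methodArgs) -> {
            System.out.println("calling " + method.getName());
            return method.invoke(real, methodArgs);
        };

        @SuppressWarnings("unchecked")
        List<String> logged = (List<String>) Proxy.newProxyInstance(
                List.class.getClassLoader(),
                new Class<?>[] { List.class },
                logging);

        logged.add("x");                           // logged, then applied
        System.out.println("size = " + logged.size());
    }
}
```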

It should be noted in passing that several veteran polyglot programmers are rallying against AOP in the .NET world.  In particular, the gentleman scholar Ted Faison has written a super book called "Event-Based Programming: Taking Events to the Limit".  I remember Ted well from my Delphi days.  He was a great Delphi programmer.  Like most of us, he moved with Anders to the .NET platform and C#.  I like him and respect him, so I read his book.

Although Mr. Faison doesn't say it bluntly, a careful consideration of what he says in this book boils down to the following: "All of you C# programmers running toward AOP are headed in the wrong direction.  Don't do AOP just because the Java guys do AOP.  Use the events and delegates in our systems to wire together components, and you will obtain better and looser coupling.  This will give you the best possible composition and reuse you can obtain today."  I reached the conclusion that mega-AOP, as it is practiced in Javaland, is a specific solution pattern for a specific language which lacks key attributes we have.  Ted Faison might not have actually said that bluntly in his book, but I achieved that realization from reading his book.

This conclusion becomes even more interesting when you consider the design goals of the Scala language.  For those who don't know about it, Scala is Java's replacement and successor. This is like Marcus Ulpius Nerva Traianus taking over for Marcus Cocceius Nerva.  It is the succession from a good emperor to a better one.  Martin Odersky is explicit in his design manifesto for Scala.  Two questions drive him:
1.  Can we provide better language support for component systems?  He wrote a nice paper and did a video about this subject.
2.  Can we find a perfect fusion of Object Oriented Programming and Functional Programming at the same time?

Dr. Odersky very delicately offers a tough critique of Java in Scala.  You don't actually have a delegate in Scala, but you don't need one.  You pass functions without any delegate machinery in any functional language.  You don't have interfaces in Scala, but you don't need them.  You are better served by Traits, which give you all of the advantages of Interfaces, Mixins, and Multiple inheritance.  You don't have to pass things by Trait or by abstract class in Scala.  Type Inference and fully-orbed Generics will get the job done for you.  This generic-based polymorphism obviates the need for OOPs polymorphism.  All of this adds up to a system which can do serious components, composition and reuse.

But this Scala approach to the problem is not well understood by Java programmers.  This is where they are going to have a big problem adjusting to life in Scala.  As a proof consider this:  A Java programmer learning Scala posted up at the Scala-Lang.org site.  He asked the following question:  How do I use Guice in Scala?  Since he was a Java programmer who did not want to reinvent the wheel, the question is perfectly understandable.  One of the Scala team members on the site answered his question.  Paraphrasing, the team member answered that AOP is basically out of place in Scala.  You can use it.  It is not recommended.  It isn't the Scala way of doing things.  Scala uses components.  Scala achieves composition and reuse by loosely coupling components together.  You don't have to inject methods, objects or any dependency anymore.  Use a trait, or pass the function or class to the object.  The old problems are not a big deal anymore.  To get the juice out of Scala, you need to get the Guice out of Scala.

The Java guy seemed extremely disappointed that the Scala team did not appreciate the great value of AOP.  The disappointment was mutual, I am sure.