
Wednesday, June 1, 2011

The problem with Sirus 1.1 and Janus 4.3

Before embarking on a fool's errand, it is usually wise to sit down and think seriously about whether and why it is really necessary. If you are going to spill a lot of blood and treasure in a vainglorious adventure leading to a Pyrrhic victory, it is better not to go in the first place.

Why should I attempt to write an entirely new Synastry engine from scratch? That is the question. Let me give you a set of answers, but let me give you the really big one up front.

The problem with Sirus and Janus is that they are conceptually wrong for this era. Both applications were designed explicitly to serve the needs of a professional astrologer providing information to one client. As such, you need the knowledge level of a professional astrologer to really exploit the system.

More importantly, the design scope of the synastry engine is incredibly limited. It was designed to counsel one client about one or two prospective romantic entanglements. Beyond a couple of comparisons, the work becomes incredibly difficult and convoluted.

Further, there is no such thing as a query engine in these software packages. It is not possible to go to a database and query the set of all Taurus sun-dates with a romance score of 150 or higher versus a given chart. It is not possible to write a query that says "give me all Capricorn birth dates where Mars conjuncts Venus in the Davison chart vs. this particular natal chart". Neither can you write a query that says "give me the solution set of all Pisces birthdates where Mars conjuncts Venus and both the Ascendant and Sun fall into your 7th house".
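
To make the idea concrete, here is the sort of query I have in mind, written as a C# LINQ sketch. Every name in it (the chartDb context, the Davison helper, the enums) is invented for illustration; no such API exists in Sirus or Janus, which is precisely the point.

    // Hypothetical query engine (all names invented): find every Capricorn
    // birth date whose Davison chart against my natal chart has Mars
    // conjunct Venus.
    var candidates =
        from birth in chartDb.BirthDates
        where birth.SunSign == Sign.Capricorn
        let composite = Davison(myNatalChart, birth.Chart)
        where composite.HasAspect(Planet.Mars, Planet.Venus, Aspect.Conjunction)
        select birth;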

Yet these are precisely the ways in which you identify prime candidates for love and marriage. So why didn't the software vendors build such an engine?

Because this just isn't the way professional astrologers work. The workflow of a professional astrologer is pretty simple. A woman walks in the door and says she is conflicted. Man X is good and Man Y is more exciting. She does not know which one is the right man for her. What should she do? In this situation, the professional astrologer types in three birth dates, prints a pair of reports, and then explains it to her.

Both Janus and Sirus are perfectly adapted to this workflow. There is nothing about the ordinary workaday astrologer's job that cannot be accomplished with these two apps. If I wanted to go into business right now, I could. I own legit licenses.

Unfortunately, this is not what I am interested in. The question that obsesses my mind is the calculation of the perfect birth date of the perfect mate. I already have a pretty good idea of what that is. For the record, the date is 3/12/1986.

However, I am not absolutely confident of their methodology. Further, I am fairly sure that a better methodology can be constructed. Understand that I hacked and bent and warped these applications to make them do what I wanted them to do. Programmer tricks, my friends.

I would rather design an engine that is entirely built around the idea of query investigation and detailed comparison of outstanding prospects. For this task, an entirely new database application would have to be devised. It would have to be based on a full SQL engine with some pivoting analytical powers. It would need to solve some really complex celestial mechanics problems.

Most important, it would have to be able to differentiate between positive and negative elements of sexual attraction. It would need to differentiate between an individual who would be a great short-term sexual affair, and someone who would be a great long-term mate in life. Believe it or not, these distinctions are entirely possible. As I mentioned before, squares and oppositions are not good for long-term stability and a copacetic life together. Sextiles and trines are.
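
A minimal sketch of that scoring distinction, with invented names and placeholder weights (the real engine would be far more nuanced):

    // Hard aspects score high on raw attraction but drag down long-term
    // stability; soft aspects do the opposite. Weights are placeholders.
    static int StabilityScore(Aspect aspect)
    {
        switch (aspect)
        {
            case Aspect.Trine:      return +5;  // harmonious, durable
            case Aspect.Sextile:    return +3;  // harmonious
            case Aspect.Square:     return -4;  // exciting but volatile
            case Aspect.Opposition: return -5;  // exciting but volatile
            default:                return 0;
        }
    }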

I am fairly certain I can build this engine. In fact, I already started. I started several weeks ago. I already have a very rudimentary scoring engine working. It is not very sophisticated. It does not yet do a Davison chart. It does not identify every possible aspect that produces attraction. It does not yet distinguish between positive and negative attraction types.

All of this will be addressed in good time.

The great advantage of this project is that I am writing a fully object-oriented engine in modern managed code. This means it doesn't leak memory, and it is safe for deployment to the web environment. It means that I can scale it out on a service backbone across hundreds of virtual servers in the cloud. This means that I can construct a Software as a Service (SaaS) website that can make money. This also means that I can drive a nice smart phone app that will communicate with my SaaS web services for processing information.

There are a lot of advantages to dumping C/C++ and Delphi in this day and age. Believe me, there was no greater lover of Borland's Delphi than I was. Still, the time has come to move on. Even Anders Hejlsberg moved on to .NET more than a decade ago.

Monday, August 17, 2009

Death to the Background Compiler

Of all the misbegotten ideas that Microsoft has hatched over the years, the worst of them all is the background compiler. It is worse than the system registry. It is worse than the notion of supporting ActiveX inside Internet Explorer. It is worse than Microsoft Bob. It might even be worse than the IBM PCjr Chiclet keyboard. It is a totally confounded, wretched, filthy, nasty, counter-productive, anti-quality idea.

The notion is that the C# or VB compiler should be running continuously in the background while you write code. It should be (according to this misbegotten theory of the world) giving the programmer continuous feedback about what he is doing and whether each stroke of the keyboard is correct or not. The idea is absolute rubbish because it does not allow the programmer to finish a single thought before declaring that errors exist in his code.

It is also obtrusive and obstreperous as fuck when declaring compiler errors. We're not talking about mere blue and red underlines below your code. Nope, it will pop up the full compiler results window (chewing a considerable amount of screen real estate) just so it can show you red dots declaring that the code will not compile due to this or that error.

My response is simple: "Of course the code will not compile. I am still writing it. Now will you please fuck off and die?" Of course, if you are a shitty developer, you may need the crutch of constant compiler feedback. You might not know the difference between right and wrong code. You may need the compiler to tell you the difference, because you do not know the language you are coding in. If you are a good developer, who knows his chosen language, and you like to refactor your code for performance, organization and clarity, the background compiler is the worst enemy you have ever encountered.

Cut just one method or variable or property to promote it or demote it up or down the chain of inheritance, and the background compiler will scream its fucking head off about compilation errors. My response is simple: "Of course the code will not compile right now. I am in the middle of refactoring my code. Now will you please fuck off and die?"

Of course, if you are a shitty VB programmer, who never refactors code for any reason (Microsoft Mort as they call you in Redmond), you won't be bothered by this problem at all. You will probably wonder how you could ever get it right without the background compiler. You may never need to promote or demote members or methods due to the fact that you don't use inheritance in the first place. If you are thinking this thought, you just might be a shitty developer and not even realize it.

I would like to get my hands on the fucktard who came up with the notion of the background compiler. I would make Jack the Ripper look like the Church Lady. He would not survive the encounter. I would beat him to death, and not quickly either. I would make him feel that he is dying.

I have already argued with a few Microsoft devs about this online. Their standard defense of the background compiler goes like this: if we didn't have the background compiler running all the time, we:

  1. Couldn't know the type of some variables if you are using type inference or automatic data coercion.
  2. Couldn't give you immediate visual feedback in a XAML design environment.
  3. Couldn't reflect changes to other assemblies in the project immediately in client assemblies.
  4. Couldn't save you from hitting CTRL-SHIFT-B or F5 to figure out if your changes were good.

My answer to that is "I am perfectly willing to hit the F5 key. I am perfectly willing to wait for compiler feedback until I am finished with a group of changes. Let me hit the F5 key for compiler feedback when I want it. That is the way all good programmers work. I don't need continuous feedback training wheels. Type inference is cool, but automatic data coercion is not. If you are using ADC, you are a shitty developer and don't yet realize it."

Microsoft needs to give us the ability to opt out. We need a simple switch under the options tab that will allow us to shut off the background compiler. That's all we need. Just let us turn the stupid fucker off.

Monday, June 22, 2009

Visual Basic is destined to die

Visual Basic is going to die because there are five specific things that kill parallelism:

1. Shared, global, mutable state.
2. Side-effects all over the place.
3. Mutable data in just about any scope.
4. Synchronous communication between threads.
5. A highly imperative approach to coding.

All of these things are the bread and butter, the day and night, the warp and woof of life for a Visual Basic programmer. I challenge you to show me any sizable Visual Basic app currently in use in any department in any corporation that does not manifest all five of these elements. Element 4 is the only one that might possibly be missing. If so, it is only because the application is absolutely single-threaded, with no split between interface and worker threads at all. The sketch below shows what element 1 alone does to you the moment you go parallel.
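
Here is element 1 rendered in a minimal C# sketch using the Task Parallel Library's Parallel.For. The += on a shared field is not atomic, so the count comes out wrong on nearly every run.

    using System;
    using System.Threading.Tasks;

    class Tally
    {
        static int total;   // shared, global, mutable state: element 1

        static void Main()
        {
            // A data race: read-modify-write on a shared field from many threads.
            Parallel.For(0, 1000000, i => { total += 1; });
            Console.WriteLine(total);   // almost never prints 1000000
        }
    }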

For these reasons, I think Visual Basic is going to go the way of dBase and FoxPro. I am not alone. There are many within Microsoft who do not believe that Visual Basic can be saved. Better minds, and less biased minds, know that the culture of Visual Basic will have to be violently altered before parallel processing can be widely deployed through this language community.

Bill himself does not like the notion of a dead VB in the EOL (End of Life) category. Bill may personally see to it that the system lives a little longer than it should.

How would Bill save Basic and do a really good job of it at the same time? The only way to achieve this is for Microsoft to (once again) make violent changes to the language and the overall software patterns of Basic. This happened once before. When VB.Net hit the market, VB programmers almost became violent over the loss of COM objects (such as DAO and ADO) and the kind of performance degradation their favorite sloppy language constructs produced. They screamed their lungs out.

So Bill could order the construction of PB--Parallel Basic--that would not allow Global.bas files, would enforce class encapsulation, and would push immutable data and statelessness, but... Can you imagine how much more upset VB programmers would be if and when they were to discover that Microsoft had removed the Module, the Global.BAS, the ability to float global variables, and the ability to do functions without classes in PB? Can you imagine what would occur if every DIM statement produced a value identifier that is immutable by default?

I believe the typical VB programmer would be livid to the point of heart-attack and stroke.

It might just be easier for Microsoft to place Visual Basic in the EOL category, and offer no further upgrades to this language. They did it to FoxPro. They did it to their Fortran product (which was excellent). They did it to VBScript & ASP.

This brings us to the subject of Oslo and M. Already there is a theory that Oslo, also known as the M language, is being groomed by Microsoft as the declarative and thread-safe replacement for Visual Basic. According to the poop sheet, M is going to be an extremely parallel language system. Lots and lots of parallelism is going to happen under the hood whether you know it or not (as is the case in SQL). More will be available if you simply learn & implement a few elementary patterns of development. It remains to be seen whether a recalcitrant and stubbornly lazy VB community will even be willing to learn this new programming system.

At this point the Visual Basic programmer is probably screaming his head off: "AS IF ANY OF THIS IS REALLY NECESSARY?? WHAT THE FUCK IS THE BIG DEAL ABOUT THREADING?" I have stood nose to nose with 54-year-old VB programmers paid $95K+ as they insisted that that sort of programming is categorically unnecessary in an LOB departmental programmer's tool kit. "Yeah, but we'll never have to do that kind of thing around here! Why would it ever be necessary? That just isn't necessary."

Make no mistake about it: we are all going to have to program in parallel. We are going to have to use threads and PLINQ and everything else that PCP throws at us. This is the only way our apps will be able to handle the terabytes of data we will be required to process in the next decade. The world changed in June of 2005, and most programmers have still not accepted this fact. Processors are not going to be getting that much faster in single-thread execution mode. Processors are going to become massively more parallel. We are only going to get faster by exploiting multiple cores at a time. We can only process increasing volumes of data by abandoning the single-threaded and dual-threaded application architecture. This means you will not be able to continue making a living programming in single-threaded or dual-threaded mode. Parallelism is the new God and maximum imperative of programming.
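
And for the record, this is roughly what that PLINQ future looks like. A minimal sketch: one AsParallel() call fans the query out across every core, and it is only safe because the lambdas touch no shared mutable state.

    using System;
    using System.Linq;

    class PlinqSketch
    {
        static void Main()
        {
            // Each element is processed independently: no shared state, no
            // locks, and the runtime partitions the work across all cores.
            long sum = Enumerable.Range(1, 10000000)
                                 .AsParallel()
                                 .Where(n => n % 3 == 0)
                                 .Sum(n => (long)n);
            Console.WriteLine(sum);
        }
    }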

I find that most programmers in most languages are in a vehement state of denial about the gravity of our current CPU architecture. Nowhere is it more stridently expressed than in the VB world. VB programmers are strident because they have great reason to fear. Many of the better VB programmers have tried their hand at threading. Most of them discovered threading caused all manner of problems in their applications. Basically, threading broke their existing application architectures. This is because of shared, global, mutable state, uncontrolled side-effects all over the place, synchronous communication between threads, and a highly imperative approach to coding.

So what about C# and Java? Java and C# programmers have a tendency to thread more in their apps, but these languages may not survive either. C# is a bit better off than Java. C# has absorbed many features of functional programming, although it is much more difficult than it should be to do immutable data. Also, F# is not a very good challenger for C# on the .NET side. Many question whether it was ever intended to be. Java, on the other hand, has not really absorbed anything of the gospel of functional programming. Java still preaches the old lock-based, synchronous-communication thread model. Java is also faced with a serious threat of replacement by Scala, which is surging in popularity all over the world. There is no real doubt that Martin Odersky intends Scala to be the general-purpose replacement for Java on the JEE platform. He has explicitly testified to this in interviews.

Neither Java nor C# is guaranteed survival in the coming marketplace, but I firmly believe Visual Basic is dead.

Monday, June 15, 2009

So just what is a Dynamic Language anyway?

It's remarkable to me how many friends I have in the business who aren't really sure just what the heck a dynamic language is. The things they say indicate that they don't know. For the record, let's clear it up.

Let's start with a quartet of distinctions.
  1. Object Oriented Language
  2. Interpreted language
  3. Weakly typed language
  4. Dynamic language
  • An OOPS language is not necessarily interpreted, weakly typed, or dynamic.
  • An interpreted language is not necessarily OOPish, weakly typed, or dynamic.
  • A weakly typed language is not necessarily OOPish or interpreted, and not necessarily dynamic. This is one of the most frequent bits of confusion. Many believe weak = dynamic.
  • A Dynamic language isn't necessarily interpreted, but it is always OOPSy and weakly typed.
  • With that said, most dynamic languages are interpreted and weakly typed.
Here is a list of languages that are not dynamic, although some think they are:
  1. JavaScript
  2. VBScript
  3. Perl
  4. PHP
  5. Lua
Here is a list of languages that are truly dynamic:
  1. Smalltalk-80
  2. Python
  3. Ruby
  4. Groovy
So, you will notice that most of the popular and well-known interpreted scripting languages are not dynamic. More obscure choices, selected by the ubergeeks, are the dynamic languages. Being weakly typed is necessary but not sufficient to be classified as a dynamic language. I would have said that being interpreted was necessary also, but then along came IronPython.NET and blew my world apart.

So what is the key? What distinguishes the dynamic language? The answer is in the meta-programming model. In a dynamic language, no class is ever final, not even at run time. It is not only possible, but common, for a dynamic interpreter to mock up or attach methods to a class or object at runtime. I can also tack on additional properties at runtime. I can also write code that edits its own source code, and dynamically reloads it on the fly. I can create self-mutating code. This is the key feature which permits research into genetic algorithms.

Can I attach new methods and properties to a class at run-time in Java and C#? No, not per se. There are ways to try to achieve a similar effect, but these belong in the category of dirty hacks. This approach is absolutely not supported by either of these languages. How about VB.NET? Nope. How about VBScript? Please! This isn't even an object oriented language! How can you attach new properties and methods to something that is not a class in the first place? Same thing goes for JavaScript.
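
The closest the .NET world comes is the DLR's ExpandoObject, slated for .NET 4. Note the limitation: you bolt members onto one bag-like instance, not onto a class; a true dynamic language rewires the class itself. A minimal sketch:

    using System;
    using System.Dynamic;

    class ExpandoSketch
    {
        static void Main()
        {
            dynamic person = new ExpandoObject();
            person.Name = "Ada";                        // property attached at runtime
            person.Greet = (Func<string>)(() => "Hello, " + person.Name);
            Console.WriteLine(person.Greet());          // prints "Hello, Ada"
        }
    }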

So why the hell would I want to dynamically dispatch new methods I never intended to be a part of my class when I designed and wrote it? Why would I want to dynamically attach additional properties to the object at runtime? Isn't this unsafe and unsound? Aren't I allowing the interpreter to alter my architecture in ways I did not intend?

Maybe, if you don't know the language and don't know how to use it.

What if I do know the language and do know how to use it, doesn't the mere existence of such a feature open the door to entirely new categories of unintended consequences and unpredictable behavior?

That is the rub now, isn't it? We have put our finger on the exact point of critique which serious computer scientists have been arguing about for several years now. Decades, really. Ever since Smalltalk-80, this debate has been running among serious men of great learning and skill.

The majority have chosen not to use dynamic languages. The majority have chosen statically typed OOPs. A rather small minority have chosen dynamic languages.

So, why have those who have chosen dynamic languages chosen dynamic languages?
  • Very loose coupling. You can pass anything to anything. If it works, it works. Very large systems can be built this way.
  • Interpreter safety: if it doesn't work, it throws a soft error and you make a correction. The OS does not crash.
  • Auto-mocking. These days, we have come to recognize that we need automatic testability for all code. We call this unit testing. We aim for 100% code coverage, meaning we want every line of code automatically tested in a unit test project. To facilitate this process, you need something called mock objects. These are objects that are doppelgangers for the dependencies your test unit needs to function, but they do not produce the real-world side-effects. Real files do not appear and disappear. Real records are not created, read, updated, and deleted.
  • Mocking in a static language like C# or Java is a real pain (see the sketch just below). Python and Ruby just construct mocks for you, dynamically attaching empty properties and methods. You never need to write a mock object when doing TDD in Python or Ruby.
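
Here is the static-language pain in miniature, with invented names. In C# you write the doppelganger yourself; a dynamic language conjures it on the fly.

    // A hand-written mock: the test can assert on SaveCalls, but no real
    // file ever appears on disk.
    interface IFileStore
    {
        void Save(string path, string contents);
    }

    class MockFileStore : IFileStore
    {
        public int SaveCalls;   // record the interaction for the test to assert on

        public void Save(string path, string contents)
        {
            SaveCalls++;        // swallow the real-world side-effect
        }
    }
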
Isn't there a substantial performance penalty for this approach?

Yep. Ruby and Groovy are the two slowest languages in common use today. Although Smalltalk and Python are faster, they are not that fast. We can pretty well destroy them with C#.

With that said, a fully compiled variant of Python, called IronPython.Net, is now making some waves in the world. IronPython's compiler is written in C#, and it cranks out CIL like all the languages on the .NET platform. This CIL assembly is locked and loaded by the CLR and handed over to the JIT for transformation to x86 or AMD64 code. That code is greatly fattened and slowed by the dynamic approach to things, but it is real machine code. IronPython is now something like 8x faster than standard CPython. It is still not as fast as something like Java or C#, but it is still far better than its brothers and cousins in terms of performance.

I personally am very interested in IronPython.Net. However, I am more interested in Scala.

My religion is generally opposed to dynamic languages. I believe in static typing. I believe in a strong compiler. I believe in things like checked exceptions. I love design by contract. I love annotating requirements on code. C# does not give me as much static verification as I would like. I hope Scala is better. Python means busting my version of the 10 commandments. This is risky, and I dislike the notion. However, I am trying to keep an open mind, and determine whether IronPython can provide some wonderful new service to my .NET applications.

Wednesday, June 3, 2009

I'll give you a buck for every example of VB.NET code in this month's MSDN magazine

So, I just got my copy of the June 2009 MSDN Magazine. This is Vol. 24 No. 6 of MSDN Magazine. Not one fucking scrap of VB.NET code anywhere to be seen in any code example in any article. They did some Cobra code... what the hell is Cobra? They did some IronPython code. This is reasonable, especially in connection with test projects. Overwhelmingly, the articles are illustrated with code examples in C#. Not one fucking shard of VB code anywhere.

I dare you to find one example of VB code. I have promised my co-workers $1 for each example of VB they find. If they don't find any soon, I will broaden the offer to the entire world. They are rummaging through the mag right now with disconcerted looks on their faces.

I could be wrong, but I am pretty sure I am right about this. I flipped through this mag 3 or 4 times because I could hardly believe my eyes. Microsoft used to have an official bilingual policy... just like Canada. Everybody had to speak both languages. Every MSDN piece had to print code examples in both languages. Well, this may not be true anymore.

Based on this issue, it would sure seem that everything is ALL-C# ALL THE TIME in Redmond, Washington.

Whilst I like XML literals in VB, I have to say that I am pretty well pleased by the representation of C# in MSDN Magazine. I pretty much love it. All is well. It is as it should be.

Tuesday, May 12, 2009

The problem with C# 4.0


I hear many things about my favorite programming language these days.  Not all are favorable.  Some statements conflict.  There are a couple of facts about the latest edition (additions) to C# that cannot be denied.

1. Anders has added in support for dynamic language interop. This is mostly a matter of implementing a new static type whose static type is dynamic. I guess that is the C# way to do it. Basically, a declaration of 'dynamic X;' means we will obtain a boxed object with some metadata attached to it to let us know what type it really is. This is almost like generics in Java. The performance penalty is not too terrible at run time when all factors are considered. Still, there will be a penalty. This won't be high performance. Worse, there is no question that this is full-bore late binding. In any late-binding situation, you lose type safety. No compiler can ever predict that any piece of late-bound code will ever work, or work in all scenarios. The compiler will go along with it, presuming that you know what you are doing. When the build is successful, do not presume that the compiler has endorsed your code.
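
Here is that loss of type safety in miniature. This compiles without complaint and detonates at run time, because with dynamic the binder checks the call only when it executes:

    using System;

    class LateBinding
    {
        static void Main()
        {
            dynamic x = "hello";
            Console.WriteLine(x.Length);    // fine: string really has Length
            Console.WriteLine(x.Quack());   // compiles, then throws RuntimeBinderException
        }
    }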

2. Anders has added support for optional parameters, so we can now have pain-free use of C# in connection with the dreadful, much-detested, outdated, outmoded Microsoft Office object model. Optional params are big in OLE2 or COM or ActiveX or whatever you want to call that disgusting old shit we used to do. Lamentably, the dastardly Microsoft Office object model is still a filthy COM beast. You should obtain less painful interop with all COM servers because of Anders' most recent additions to C# 4. Still... this is a dirty thing to accommodate. It is the programming equivalent of saying "have sex with HIV infected individuals, just use this condom".
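
To see what this buys, here is a before-and-after sketch. The SaveDocument method is hypothetical, mimicking the Office object model's habits; in real pre-4.0 COM interop every unused slot had to be stuffed with Type.Missing by hand.

    using System;

    class OfficeStyle
    {
        // Hypothetical COM-flavored signature: most callers only care about the path.
        static void SaveDocument(string path,
                                 bool readOnly = false,
                                 string password = "",
                                 int format = 0)
        {
            Console.WriteLine("saved " + path);
        }

        static void Main()
        {
            // Before C# 4: SaveDocument(path, false, "", 0) -- every slot filled.
            // With optional parameters, the noise collapses to:
            SaveDocument(@"C:\docs\report.doc");
        }
    }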

3. There is also some stuff about co- and contravariance with greater safety. I have never stubbed my toe on co- and contravariance, so I see no issues solved here. I suppose this helps in a few marginal edge cases. I know of none in my code.
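
For those who have never hit the problem, the canonical example of what the new annotations permit: IEnumerable<T> becomes IEnumerable<out T> in .NET 4, so this assignment, a compile error today, becomes legal.

    using System.Collections.Generic;

    class VarianceSketch
    {
        static void Main()
        {
            IEnumerable<string> names = new List<string> { "Anders", "Mads" };
            IEnumerable<object> things = names;   // covariance: legal in C# 4
        }
    }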

4. Finally, Anders & co are re-writing the C# 4.0 compiler in C# itself, thus making the compiler available as just another managed code software service at run time. The presumption is that C# 4.0 programmers will now use this handy-dandy, ever-ready compiler to engage in meta-programming techniques and tactics. Once upon a time, this was the exclusive privilege of people who coded with interpreters. Java programmers can only get access to this sort of dynamism through a jar library called Janino.

These last two items are of no consequence to guys like me. Some may be jumping for joy, but I don't need a runtime compiler just now.

Surprisingly, the announcement of these language features has tripped off a debate surrounding the question of whether C# is getting too big for its britches. Some say that C# is becoming an over-bloated Swiss Army knife. I really think this debate misses the point entirely. The issue is not one of whether the language is getting too big, but rather whether it is growing in the wrong direction.

Certainly, I do not welcome the advent of the dynamic type.  I don't need it, and I don't plan to use it.  I suspect that the mere presence of this type will slow the compiler considerably.  I hope it doesn't result in generally fatter CIL for all, which would in turn lead to fatter X86 code.

I am not a fan of optional parameters. Optional parameters are an element and aspect of a generally sloppy approach to programming. That is why they are in VB. They are not unlike the unstructured goto statement. They are just bad for your code. Either a param is necessary or it is not. If it is necessary, it should always be required, and it should always be set. Programmers should not be allowed to 'safely' ignore a parameter. This is one of the many ways in which unexpected and somewhat unpredictable behavior emerges in a system. Programmers should be aware of the change in behavior flag settings will produce. Of course, I realize that optional parameters were big in the days of COM. This is one of several reasons why COM was a dirty, filthy, nasty, ugly, disgusting, detestable, wretched, leaky, buggy, unstable, evil piece of shit. [Did I mention that COM had no security model and was single-handedly responsible for the Great Spyware Pestilence of 2003?] There are explicit reasons why we flushed COM down the toilet in .NET. One of the reasons optional params were rejected in C# is that we wanted to rid ourselves of this corruption. Now here we go again. Not good.

I understand why Anders reversed himself. Many, many, many C# programmers have been bitching for years about how difficult it is to drive the Microsoft Office model because C# does not support optional params. Anders caved in under pressure from customers and the high command. In a very real sense, I know this will make my life easier when I must do an Office automation project. Still, I do not look forward to fully managed C# libraries, with no element of COM, which show optional params all over the place. This eventuality will signal the arrival of sloppy coding techniques in the rather purist world of C#.

My real bitch is that the two language features we really need the most did not make it into C# 4.0.  What are those features?
1.  Traits, just like in Scala
2.  XML literals just like in VB.NET

In all honesty, these are the only two linguistic innovations I have seen in the past 6 or 7 years that did not originate in the C# project.

Traits are basically implemented interfaces which you can tack onto almost any object. They give you the advantages of interfaces combined with the advantages of mixins in Ruby. The final yield is a language feature which gives you the reliability of single inheritance with some of the advantages of multiple inheritance.

XML literals threw me for a complete loop.  XML literals took VB, a language which was dying IMHO, and suddenly made it the best language for the MVC framework. You don't know how much it hurts me to say this, but I would never want to code a helper class in MVC without the aid of VB.NET XML literals. This is not to mention all of the complex XML transforms which this system makes incredibly easy.  

XML literals are probably the greatest, most practical, most useful idea I have encountered in the past 5 years. The biggest thing since generics. Scala also has a form of XML literals, but they don't seem backed by the sort of library power VB.NET offers. In any case, C# should have had this feature. Omitting XML literals from C# 4.0 is the same thing as ceding the MVC framework to VB.NET. Anders needs to take notice of this fact and make some corrections.

So the final verdict is clear. C# is not guilty of getting too big. C# is guilty of expanding in the wrong direction. C# is guilty of not expanding in the right direction. All in all, I am quite disappointed.

Wednesday, May 6, 2009

Java programmers are just hog wild about AOP

Recently, a flaming post went up on the DZone.com website. A Java fellow posted a blog in which he bluntly stated that he would much rather employ Java programmers than C# programmers. The absolute point of the piece is that AOP is extremely popular in Javaland, and all Java programmers (young and old) are well versed in the paradigm. Such cannot be said for the C# programmer. C# programmers seem to think that AOP is for testing only. We don't seem to understand the other mega-benefits of AOP. For this reason, he says, you should hire Java guys, not C# guys.

I've given this political statement consideration over the past several days, and I am completely sure that this is a further example of the ethnocentric and narrowly learned specialization of all programmers in our field. You see, the Java guy clearly doesn't understand the deep strategy of C#. Neither does he understand the deep strategy of Scala. Neither does the C# programmer understand why AOP is totally essential to life in the Java ecosystem. The C# programmer does not understand why a lack of events and delegates in Java would push the entire ecosystem towards AOP.

Let me tell you about it.

Since the dawn of time, the dream of programmers everywhere has been code reuse and modularity.  We want to stop writing the same code over and over again every time we start a project.  We want a nice library we can compose with.  Composition is a big fucking word in our profession.  It is the dream of the blue turtles.  We want to write just those little bits of app-specific logic our customers demand.  Composition and reuse are the absolute ideas driving most changes over the past 30 years of computer languages.

Simple OOPs is nothing more than an attempt to achieve composition and reuse. In the days of Smalltalk, simple OOPs worked out fairly well. Because of the weak typing system, simple objects did the trick fairly well. It was easy to interchange a BaseA object with a ModifiedA object, or BaseA with BaseB, or ModifiedA with ModifiedB. The weak and dynamic typing system just went along with it, and if it could work, the interpreter would make it work. For this reason, composition was pretty dang easy in Smalltalk. There was pretty good code reuse in Smalltalk.

The problem is that a lot of us hate interpreters and weak typing systems. A lot of us outright fear dynamic systems for their unpredictable behavior and their slowness. Others hated the ugly and goofy-looking interface of the Smalltalk virtual machine. Ergo, Smalltalk was not widely adopted and it ultimately died. It was always a niche thing which the ubergeeks used. It is now having an interesting undead afterlife in the form of Squeak, but Smalltalk is dead nonetheless.

In response to Smalltalk, a number of strongly typed and compiled OOPs languages were invented. C++ and Object Pascal were two perfect examples of this movement. The problem was that they were only half-assed OOPs languages. They were also structured. A lot of Delphi programmers programmed in flat structured Pascal and claimed to be object oriented. Likewise, a lot of C++ programmers were really writing C code and claiming to be object oriented. As you might imagine, this approach did little for composition and reuse. Worse still, once the language is strongly typed, interchange of a BaseA object with a ModifiedA object, or BaseA with BaseB, or ModifiedA with ModifiedB is no longer possible. The signature of a function or a method demands a specific type.

Next, Java came around with a simple language and simple OOPs and the notion that you should inject dependencies in an AOP fashion. In this approach, you pass everything by interface type or abstract class. This loosens the type system enough so that interchange of a BaseA object with a ModifiedA object, or BaseA with BaseB, or ModifiedA with ModifiedB is possible... within some limits. IoC and DI were formalized in things like Spring, AspectJ, and Guice to make this even easier. You get pretty good code reuse in this AOP approach to doing business. Implicit within this success story is a clear-cut admission that pure OOPs doesn't yield composition and reuse when you are compiled and strongly typed. You need to take an AOP approach, or you don't get composition and reuse at a good high level.

At roughly the same time (a bit earlier) Delphi came around with the notion that you should have something greater than objects called components.  These should be wired together with delegates and events.  Delphi died.  Java won by default.  Delphi was reborn as .NET and C#.  Suddenly the battle was renewed.

Of all the things that have triggered the greatest number of flame-wars between Java and C# programmers over the past 8 years, a pure lack of understanding of the .NET component is top of the list. C# programmers know they have components. Some are dimly aware of how they work. Many of them have no idea in the world that Java lacks components. They don't understand that the strategy and command patterns are built into the CLR as first-class citizens. Specifically, strategy=events and command=delegates. [Some would rap my knuckles for making that hard-and-fast association. Bring it on. You'll get knocked the fuck out.] These first-class citizens of the .NET framework are the foundation of components and the loosely coupled, modular, composable, reusable code framework that we enjoy in the .NET system.

Java programmers do not understand this. Events, delegates, properties, full lexical closures: all these things work together to make AOP far less necessary in the .NET programmer's life than in the Java programmer's life. Conversely, because you lack events, delegates, properties, and full lexical closures in Java, you need AOP much more than we do. We get good composition and reuse without AOP. We get good simple code without AOP. If you think your AOP code is clean, you should see our component code. It is wired together and loosely coupled with events and delegates.
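
To make the contrast concrete, here is a minimal sketch of that component style (all names invented): the worker exposes an event, the consumer wires in a handler with a delegate. No container, no weaving; the entire coupling is one delegate slot.

    using System;

    class Downloader
    {
        // The strategy slot: any handler can be wired in from the outside.
        public event Action<string> Completed;

        public void Run(string url)
        {
            // ... do the real work, then notify whoever subscribed:
            var handler = Completed;
            if (handler != null) handler(url);
        }
    }

    class Program
    {
        static void Main()
        {
            var d = new Downloader();
            d.Completed += url => Console.WriteLine("done: " + url);
            d.Run("http://example.com");
        }
    }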

This is not to say that AOP is completely irrelevant to the C# programmer.  Many of us, especially me, are asking serious questions these days like:

1. How can we take advantage of some of the goodness of AOP?  
2. When should I select AOP rather than a component approach?  
3. What are the specific scenarios where AOP is preferable to writing a component?  
4. Can AOP improve certain approaches where components don't work so well?
5. If so how?

So far, there is a lot of debate about this topic among serious-minded C# programmers. We don't have a lot of clear-cut, indisputable design-wins and use-cases for AOP. Logging application activities and errors is one winner. Automatic unit testing of production code is another. This is a strong sign of the high value of components. They only break down (with certainty) in a couple of scenarios. Of course, we do not yet understand AOP as well as we will in 10 years' time. In that time we may have even more compelling use-cases for AOP. Right now, we don't.

It should be noted in passing that several veteran polyglot programmers are rallying against AOP in the .NET world. In particular, the gentleman scholar Ted Faison has written a super book called "Event-Based Programming: Taking Events to the Limit". I remember Ted well from my Delphi days. He was a great Delphi programmer. Like most of us, he moved with Anders to the .NET platform and C#. I like him and respect him, so I read his book.

Although Mr. Faison doesn't say it bluntly, a careful consideration of what he says in this book boils down to the following: "All of you C# programmers running toward AOP are headed in the wrong direction. Don't do AOP just because the Java guys do AOP. Use the events and delegates in our systems to wire together components, and you will obtain better and looser coupling. This will give you the best possible composition and reuse you can obtain today." I reached the conclusion that mega-AOP, as it is practiced in Javaland, is a specific solution pattern for a specific language which lacks key attributes we have. Ted Faison might not have actually said that bluntly in his book, but I achieved that realization from reading his book.

This conclusion becomes even more interesting when you consider the design goals of the Scala language.  For those who don't know about it, Scala is Java's replacement and successor. This is like Marcus Ulpius Nerva Traianus taking over for Marcus Cocceius Nerva.  It is the succession from a good emperor to a better one.  Martin Odersky is explicit in his design manifesto for Scala.  Two questions drive him:
1.  Can we provide better language support for component systems?  He wrote a nice paper and did a video about this subject.
2.  Can we find a perfect fusion of Object Oriented Programming and Functional Programming at the same time?

Dr. Odersky very delicately offers a tough critique of Java in Scala. You don't actually have a delegate in Scala, but you don't need one. You pass functions without any delegate machinery in any functional language. You don't have interfaces in Scala, but you don't need them. You are better served by traits, which give you all of the advantages of interfaces, mixins, and multiple inheritance. You don't have to pass things by trait or by abstract class in Scala. Type inference and fully-orbed generics will get the job done for you. This generics-based polymorphism obviates the need for OOPs polymorphism. All of this adds up to a system which can do serious components, composition, and reuse.

But this Scala approach to the problem is not well understood by Java programmers. This is where they are going to have a big problem adjusting to life in Scala. As proof, consider this: a Java programmer learning Scala posted up at the Scala-Lang.org site. He asked the following question: how do I use Guice in Scala? Since he was a Java programmer who did not want to reinvent the wheel, the question is perfectly understandable. One of the Scala team members on the site answered his question. Paraphrasing, the team member answered that AOP is basically out of place in Scala. You can use it. It is not recommended. It isn't the Scala way of doing things. Scala uses components. Scala achieves composition and reuse by loosely coupling components together. You don't have to inject methods, objects or any dependency anymore. Use a trait, or pass the function or class to the object. The old problems are not a big deal anymore. To get the juice out of Scala, you need to get the Guice out of Scala.

The Java guy seemed extremely disappointed that the Scala team did not appreciate the great value of AOP. I am sure the feeling was mutual.


Monday, May 4, 2009

Why I don't like talking to other programmers online

Those I work with know I don't spend much time online interacting with my fellow programmers. Sometimes I regret this. The websites we have today for interaction are vastly superior to the ones we used to have. The web software itself is better, but the content is also much better. You have multi-disciplinary coders, working with many languages and many architectural patterns, all sharing one common web forum these days. Unfortunately, these cats seem to be in a perpetual flame war.

The problem is that most of the heavy users of these sites are beginners. They ask a lot of questions, because they don't have much knowledge or experience. Intermediate pros answer these questions. They give very stereotypical platform-centric, ethnocentric advice. You get rote Java advice from Java programmers. You get rote VB advice from Visual Basic programmers. Same thing goes for all the languages. These guys do not know or understand each other's approaches. No VB programmer is particularly honest about, or even aware of, the flaws in his game plan. Same is true for Java programmers, C++ programmers, C# programmers, Python programmers, etc. Ethnocentrism reigns supreme all over these forums.

Very few of the guys on these forums have more than 10 years of experience. Most of us stop coding before we reach year 10 as professional coders. We get kicked up the chain of command. This is what makes a brilliant guy like Robert C. Martin so amazing. He has been coding for 39 years. He just never stopped coding. This is a deeply experienced polyglot programmer who really knows the answer to the question "Why?" because he has seen the full evolution of software development. He lived through every movement.

Recently, Uncle Bob had a smackdown with the dudes at StackOverflow.com. I was rather appalled. The StackOverflow gang wrote the site in C# and the Microsoft MVC framework. Ergo, they are members of my most recent tribe. I was shocked to see such good members of my tribe speak so ignorantly.

It was pretty clear to me that the very brilliant young entrepreneurs at StackOverflow.com did not use the SOLID techniques Uncle Bob advocates. It is also clear that they felt threatened by the advent of SOLID, because an acceptance of these techniques would marginalize the StackOverflow.com software itself. They took Uncle Bob to task for preaching SOLID. As you listened to the podcasts, forum posts, and blogs, it became clear that the noise coming from the StackOverflow gang was driven by their own personal insecurities about the status and longevity of StackOverflow.com rather than the validity of Uncle Bob's SOLID principles. They didn't want to talk about SOLID. They wanted to talk about owning a business.

Uncle Bob knows this. He did a podcast with Scott Hanselman in which they recapped this smackdown. Although he didn't explicitly say it in this way, Uncle Bob gave me every impression that he knew he was dealing with young and insecure entrepreneurs floating on their very first life-boat on the high seas of commercial SaaS.

Older, veteran, multi-lingual programmers with 14 years of experience and 4 or 5 major languages worth of professional experience have a hard time finding people to talk to. Programmers are rare. Veterans are scarce. Multi-lingual veterans are seldom seen. I wonder if it is possible to form a club where the scarce 2000+ of us might get together online. We would have to restrict membership quite sharply. No monolingual or mono-platform guys allowed. If you are Unix only, you are out. If you are Mac only, you are out. If you are Windows only, you are out. If you are C/C++ only, you are out. If you are VB.NET only, you are out. Etc.

If such a forum existed, it would be possible for serious and objective software engineers to discuss the patterns and practices of the various languages and platforms free from the sort of fundamentalism that drives so many in our field. That would be a wonderful thing. Can you imagine an objective discourse on the strengths and weaknesses of software systems? It could be very beneficial.

But you just can't discuss the pros and cons of Islam with Osama Bin Laden. Neither can you discuss the pros and cons of VB.NET with a mono-lingual VB programmer. Neither can you convince a Java programmer that Microsoft uses its own software to run MSN or Microsoft.com. They think it must be running on Unix because Windows doesn't scale.

The last time I had a "conversation" online, it was with a young Java programmer. I am sure he would have embarrassed James Gosling. I was doing a comparison of Java, C#, and Scala. He quickly got belligerent. He seemed to think I was unfairly putting Java down by not recognizing its innate superiority to C# and Scala. His response was to resort to mocking derisiveness.

The lad didn't seem to know that Java generics aren't real generics. He didn't seem to know that Java lacks full lexical closures. He was sure it couldn't be important if Java didn't have it. He didn't know that closures have been burning topic #1 in Javaland for some 2 years now. He didn't know what an event is. He didn't know what a delegate is. He didn't know what a Scala trait is. He was quite sure Scala could not replace Java because it could never be as cross-platform as Java! (!!!) He was very rude and insulting in the process. Young men like to play king of the hill, and they want to beat you down.

I can recall the bad days of 1994 when I was first doing a bit of professional work on the market.  I was very insecure.  I didn't know if I could make it in life doing this particular line of work.  I didn't know how long it would last.  I did know the other options were worse.  But still, this profession might not be for me.  One thing I knew:  Borland and Pascal were my only life-line.  An attack on these agents was an attack upon my livelihood.  I took any objective criticism as an attempt to sink my ship.

I am sure that the young Java lad was in the same boat.  He is probably working on his first pro assignment.  He knows some Java and nothing else.  Any objective critique of Java is immediately interpreted as an attempt to sink his financial lifeboat.  He must respond vigorously.  Most other members of the tribe feel this way.

Still, you can imagine that a guy like me dislikes dealing with a young guy like that. He can't discuss the pros and cons of his system. He just isn't experienced enough. At this point, he would not be willing to be honest about it. He really can't hear what you say because of fear of financial collapse.

This is the rant of a lonely old programmer who has nobody to talk to. Many of these points are being refreshed in my mind day by day as I learn Scala. Many programmers around me, particularly those I work with, have no idea why I would ever want to learn a language like Scala. They believe I am letting down the Microsoft faction.

What these young fellows don't know or understand is that you can't get married for life if you are a programmer. Computer languages are not women. You do not wed a language or vendor as you would a woman. Even if you did, languages and vendors die; ergo, in death shall you part. Neither are languages religions. You do not experience a religious conversion and become a programmer of language XYZ.

Thursday, April 23, 2009

C is going the way of Assembler

http://stackoverflow.com/questions/783238/why-windows-7-isnt-written-in-c-closed

So, a kid just popped up on StackOverflow.com and asked the question “Why isn’t Windows 7 written in C#?” All of the C/C++ programmers on the forum blanched at the thought of it. Many others posted about project Singularity, which is a research project inside Microsoft aimed at producing a full operating system in managed code. The comments from the C/C++ guys clearly indicate that they hate the entire idea of Singularity.

The question was closed by one of the forum moderators as “Not a real question”. If he had been honest, he would have said he was shutting down a potential flame war. The question is too politically hot right now.

This is further evidence that the C/C++ programmers are getting pretty edgy about the incursion of .NET and managed code into sacred ground reserved for them, and them only. They have already lost nearly all of their Windows application turf to .NET, and they are still smoldering about that. Like American Indians on shrinking reservations, they are under pressure, and they feel it. They are feeling some real job security stress right now. Every time these questions come up, the C/C++ programmers on the site get pretty dang edgy.

Let’s face it, folks: C is going the way of Assembler. It is the expressed wish of the supreme management at Microsoft and SUSE to write as much of their operating systems in managed code as possible. Interface widgets, control panel utils, little apps, system services, you name it: if there isn't a compelling reason to go C, it should be done in C#. C/C++ are increasingly seen as costly and yielding no real substantive benefits anymore.