Thursday, March 5, 2009

Y2K supplemental

I was looking for a good article that explained Y2K from all angles -- how the problem arose, and why it ultimately wasn't as serious as people thought.  I couldn't find one before posting, but I like this 1999 article by Tom Christiansen, a major contributor to the Perl programming language.  Not only is Tom an excellent writer, but he accurately dismissed Y2K concerns as overblown before the date ever arrived.  You gotta respect a guy with that kind of clear thinking, as well as the courage to voice an unpopular opinion.


1 comment:

  1. Another article I read a long time ago (before 2000) pointed out that the "Y2K" problem had actually shown up long before "Y2K", in the banking industry.
    Think, for a moment, of a 30-year mortgage issued in 1971... Even calculating the amortization tables wouldn't have worked too well unless you could account for the century (see the first sketch after this comment). So, as I pointed out elsewhere, the reality was that it was being dealt with very early, and by the early-to-mid '90s there were lots of COBOL programmers getting overtime to fix lots of these issues. New systems were being written using built-in date functions (not all -- the POS I was working on had 3 separate fields in one table: month, day, and year...sheesh!), which potentially eliminated those pieces of software from future problems.

    I do like the commentary about why the memory-savings argument was a lie, though. Consider: a full year can be coded in 11 bits (minimum, unsigned -- and then you get a _real_ Y2K-style problem in 2048...), whereas storing the year as ASCII (even as only 2 digits) requires 16 bits. Not only that, but since the digits are stored as ASCII, there has to be code to convert those values to a numeric type before doing any kind of calculation, which costs real memory (not just storage) and clock cycles for the conversion. (The second sketch below spells out the comparison.)
    So the better explanation is laziness: it's easier to format reports with a 2-character ASCII year than it would be to format a numeric value.
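
    [Editor's sketch, not part of the original comment.] To make the mortgage point concrete, here is a minimal C example -- purely illustrative, with made-up variable names -- showing how two-digit year arithmetic goes wrong for a 30-year loan issued in 1971, while century-aware arithmetic does not:

        #include <stdio.h>

        int main(void) {
            int issue_year_2digit = 71;   /* mortgage issued in 1971, stored as "71" */
            int term_years = 30;          /* 30-year term */

            /* Naive two-digit arithmetic: 71 + 30 = 101, which neither fits the
               field nor compares correctly against later two-digit dates.       */
            int maturity_2digit = (issue_year_2digit + term_years) % 100;

            /* Century-aware arithmetic gives the right answer: 2001. */
            int maturity_4digit = 1900 + issue_year_2digit + term_years;

            printf("two-digit maturity: %02d (wrapped around)\n", maturity_2digit);
            printf("four-digit maturity: %d\n", maturity_4digit);
            return 0;
        }

    The wrapped value "01" sorts before "71", so any comparison or amortization schedule keyed on the two-digit year silently breaks decades before the year 2000 itself.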
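
    [Editor's sketch, not part of the original comment.] And on the storage argument, a small hypothetical comparison: an 11-bit binary field holds any year up to 2047 directly, while a two-character ASCII year occupies 16 bits, loses the century, and still needs a conversion step before any arithmetic:

        #include <stdio.h>

        /* Binary storage: 11 bits cover years 0..2047, so 2048 is the rollover. */
        struct record_binary {
            unsigned int year : 11;
        };

        /* ASCII storage: two digit characters = 16 bits, and no century at all. */
        struct record_ascii {
            char year[2];             /* e.g. {'7', '1'} for 1971 */
        };

        int main(void) {
            struct record_binary b = { 1971 };
            struct record_ascii  a = { {'7', '1'} };

            /* The ASCII form needs explicit conversion before any math. */
            int converted = (a.year[0] - '0') * 10 + (a.year[1] - '0');

            printf("binary field holds %u directly\n", (unsigned)b.year);
            printf("ascii field converts to %d -- century unknown\n", converted);
            return 0;
        }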
