A blog about the fun and frustration of software development, about desirable practices and bad habits, and about the importance of computers and technology in everyday life.
Friday, March 20, 2009
Artificial intelligence in games, part 1
"Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory there must be a solution."
- John von Neumann
I'd like to talk about some basic concepts of Artificial Intelligence, by talking about games. In a few posts I'm going to tell you how to play a perfect game of chess. Not just a good strategy, but a perfect strategy. With this strategy, you will always beat a non-perfect opponent. If no win is possible, you will most likely be able to end the game in a draw.
Unfortunately, once I've explained this strategy, you will realize that you can't actually use it. So before I talk about that, I'm going to try something less ambitious. I'll tell you how to teach your computer to play a perfect game of tic-tac-toe, also known as "noughts and crosses" in England.
Here's some quick background for those who don't know how to play. Tic-tac-toe is played on a three-by-three grid. Two players mark their squares, one with the letter "X", the other with the letter "O". Traditionally the X player goes first, and may place a mark on any open square. The players continue to alternate marks until one of two conditions is met. A player can win by placing three of his or her marks in a row: horizontally, vertically, or on a diagonal. Much more commonly, if the players are experienced, the board will fill up, no player will get three in a row, and the game ends in a draw.
Complexity of tic-tac-toe
How many possible games of tic-tac-toe are there? One way to find out is to count all possible sequences of moves. The first player has nine possible moves to make. The second player can only go in an unoccupied square, so that's eight moves left. The first player can then go in any of seven remaining squares. And so on, so the number of possible games is at most 9*8*7*6*5*4*3*2*1.
This is a common mathematical operation called a factorial. "9 factorial" is abbreviated as "9!" Factorials are very big. Some algorithms have a running time of O(n!), which can be extremely dangerous, but I'll get back to that in another post. Luckily, in this case n is only 9, and 9! is a fairly manageable number: it's 362,880.
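If you want to check that arithmetic, a few lines of Python will do it:

```python
import math

# 9 * 8 * 7 * ... * 1: the number of possible move sequences
product = 1
for moves_left in range(9, 0, -1):
    product *= moves_left

print(product)                       # 362880
print(math.factorial(9) == product)  # the library agrees: True
```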
That seems like a big number to humans, who are good at thinking in the abstract but bad at storing large amounts of information. Computers are the opposite, and 362,880 is a very small, manageable number of objects to explore.
It is also an upper bound. We know that there are no MORE than 362,880 possible games of tictactoe, but most of the games resemble each other. Another trick we could apply is to calculate possible states of the board without thinking of moves. Each square can be in one of three states. Either it is empty, or it has an X, or it has an O. Thus, the total number of possible states of the board (legal or not) is 3^9, which is 19,683. So actually, there are no more than this many different ways to set up the board.
In fact, there are even fewer possible states, because many of them aren't legal. For instance, the board could contain all X's, but that's not the state of a real game, because the "O" player never got to move. So there are actually far fewer than 19,683 states, but all we need is a rough order-of-magnitude estimate to see how big a problem we have ahead of us.
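If you're curious how much smaller the real number is, you can have the computer count only the states reachable in actual play. A sketch (the function names are mine; the exact total doesn't matter for the argument above):

```python
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # board is a 9-character string, row by row; ' ' marks an empty square
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_reachable():
    seen = set()
    def play(board, player):
        if board in seen:
            return
        seen.add(board)
        if winner(board) or ' ' not in board:
            return  # game over: no further moves from here
        for i, cell in enumerate(board):
            if cell == ' ':
                play(board[:i] + player + board[i+1:],
                     'O' if player == 'X' else 'X')
    play(' ' * 9, 'X')
    return len(seen)

print(count_reachable())  # far fewer than 19,683
```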
Calculating all possible games of tictactoe
If you go first in a game of tictactoe, there are nine moves that you can make. The possibilities break down like this:
 X |   |      | X |      |   | X
---+---+---  ---+---+---  ---+---+---
   |   |       |   |       |   |
---+---+---  ---+---+---  ---+---+---
   |   |       |   |       |   |

   |   |       |   |       |   |
---+---+---  ---+---+---  ---+---+---
 X |   |      | X |      |   | X
---+---+---  ---+---+---  ---+---+---
   |   |       |   |       |   |

   |   |       |   |       |   |
---+---+---  ---+---+---  ---+---+---
   |   |       |   |       |   |
---+---+---  ---+---+---  ---+---+---
 X |   |      | X |      |   | X
Actually, these nine positions do not really represent unique games. Really, there are three unique moves that you can make: Go in the center, go in a corner, and go on an edge. The rest of the moves simply lead to rotated or mirrored versions of the same games. Some algorithms might take advantage of this similarity and reduce the search space, but for our purposes, it still doesn't take too long to treat all combinations as unique.
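If you want to see the computer agree that there are only three genuinely different openings, you can normalize each board against its rotations and mirror images. A sketch (the helper names are mine):

```python
def rotate(b):
    # rotate a row-major 3x3 board 90 degrees clockwise
    return [b[6], b[3], b[0], b[7], b[4], b[1], b[8], b[5], b[2]]

def mirror(b):
    # flip the board left-to-right
    return [b[2], b[1], b[0], b[5], b[4], b[3], b[8], b[7], b[6]]

def canonical(board):
    # the smallest of the eight symmetric variants represents them all
    forms, b = [], list(board)
    for _ in range(4):
        b = rotate(b)
        forms.append(tuple(b))
        forms.append(tuple(mirror(b)))
    return min(forms)

openings = set()
for i in range(9):
    board = [' '] * 9
    board[i] = 'X'
    openings.add(canonical(board))

print(len(openings))  # 3: center, corner, and edge
```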
Now, if you go in the upper left corner, there are eight moves that the "O" player could make. Here are a few.
 X | O |      X |   | O    X |   |
---+---+---  ---+---+---  ---+---+---
   |   |       |   |       O |   |
---+---+---  ---+---+---  ---+---+---
   |   |       |   |       |   |
Again, these do not represent unique games. For all practical purposes, the first move and the third move are the same game. But it's easier to program (though slower to run) if you don't bother trying to analyze the games.
In the next post, I'll explain how we can use this system for categorizing every possible game to write an unbeatable opponent.
Wednesday, March 18, 2009
Recursive sorting 2: QuickSort
I got good feedback from my last post about recursive sorting, some of it by email and in person. I also heard some serious concerns about my world domination scheme. You can all relax, because my Evil Generals are all away on extended vacation for the foreseeable future. This post probably won't be as amusing as the last one, as I've already explained the basic groundwork. But I'll try to give you the basics so you can do QuickSorting yourself.
Here are the fundamentals of how QuickSort works. Recall that in Merge Sort, we split a deck of cards into two halves, and then sorted each half. How exactly you "sort each half" is up to you. If the divided-up decks are still large, you can just divide them in half again and run two more Merge Sorts. But if the decks are small, you can sort them in another way. Even Selection Sort might be a good choice if you have no more than, let's say, 10 cards. Or you can just keep dividing and dividing until you get down to two or three cards, and then perform a couple of swaps to get the cards in order.
QuickSort is similar, but a little different. We still split up the cards, but in this case we split them around a card's value instead of into halves of a specific size. Here's an example. Start with these unsorted cards:
 8, 4, K, A, J, 5, 9
In QuickSort our plan is to start with the first card (8) and put it in the correct place. You remember that in Merge Sort we did the recursive step first (the splitting up into smaller problems), and the action of actually moving cards around was saved until we reached the lowest level (merging the cards). That's when we actually do something constructive... like telling an evil private "Go kill that guy." In QuickSort, we do the reverse. The action step comes first, and the recursive step second. After the 8 is in the right place, we'll sort the cards around it, using another call to QuickSort.
When I say we want the 8 in "the right place," what I mean is that we want all the cards that are less than 8 to be on the left side of the 8, and we want all the cards that are greater than 8 on the right side.
How do we get there? The 8 starts in the low position, so put your left hand on the 8, then put your right hand on the right side of the list. That's 9. 9 is bigger than 8. No switch is necessary yet, so move your right hand one space to the left, toward the 8. You're now on 5. 5 is smaller than 8, so it's in the wrong position. Do a swap and you get this:
 5, 4, K, A, J, 8, 9
So now the 8 is on the high side. It is correctly positioned with respect to some of the cards: it is between 5 and 9, which is where it should be. But you're not done yet. Now you have your left hand on 5, right hand on 8. Move your left hand up one space, to the 4.
4 is less than 8. Keep going.
King is more than 8. It's in the wrong place. Time to swap.
 5, 4, 8, A, J, K, 9
Now the 8 is correctly sorted with respect to the cards you already looked at. It is more than 5 and 4, but less than king and 9. Nearly done.
The 8 is now on the low side. With your right hand on the king, move backwards. The jack is in the right place. The ace is smaller than the 8. Swap.
 5, 4, A, 8, J, K, 9
...your hands now meet in the middle, and you're done. The 8 is in the precise position where it should be.
Now what? Here's the recursive step. You have the 8 in the right place, but you have three cards on the left that are totally scrambled, and three cards on the right that are also scrambled. How do you unscramble them? You QuickSort them, of course.
First you sort this array:
 5, 4, A
Then you sort this array:
 J, K, 9
Do it in the same way as before. For the first set of three cards, put your left hand on the 5, and move it to the proper place (the third position). For the second set, put your left hand on the jack, and move it to the proper place (the second position, between 9 and king). By the time you've finished, all the cards are sorted. Quick, wasn't it?
Seriously, get your own deck and try this on 20 cards. It's easy and fun once you get the hang of it. Lay them out in a line. As you work, you'll put a card at a time in its final location. When you do this, push the card up slightly so it will stand out from the others. Then work on the unsorted cards from left to right until you're finished.
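If you'd rather let the computer do the card pushing, here's the same two-handed procedure as a Python sketch (the function names are my own):

```python
def partition(a, lo, hi):
    # p is the hand on the pivot card; other is the free hand
    p, other = lo, hi
    while p != other:
        low, high = min(p, other), max(p, other)
        if a[low] > a[high]:
            # out of order: swap, and the pivot jumps to the other hand
            a[low], a[high] = a[high], a[low]
            p, other = other, p
        else:
            # in order: move the free hand one step toward the pivot
            other += 1 if other < p else -1
    return p  # the pivot's final, correct position

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return  # zero or one card is already sorted
    p = partition(a, lo, hi)
    quicksort(a, lo, p - 1)   # the scrambled cards left of the pivot
    quicksort(a, p + 1, hi)   # the scrambled cards right of the pivot

cards = [8, 4, 13, 1, 11, 5, 9]   # 8, 4, K, A, J, 5, 9
quicksort(cards)
print(cards)  # [1, 4, 5, 8, 9, 11, 13]
```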
Complexity of QuickSort
Starting with the first card, you made one complete pass through the array in order to put it in the right place. So this algorithm is at least O(n). Then you performed this pass through the array multiple times. How many?
In the best case, QuickSort does the same thing as Merge Sort: it splits the array almost exactly in half. (In the above case, our first number, 8, landed smack in the middle of the array, with three numbers on each side. That's a perfect split.) So basically, you're dividing by two each time, and after (log_{2} n) divisions, you're down to sorting one card, just like what we did in a binary search. So we simplify this and say we examined "n" cards approximately "log n" times, so as I indicated earlier, this algorithm is O(n*log n).
If you're lucky!
But what happens if you start with an array that's already sorted?
 A, 4, 5, 8, 9, J, K
Then you'll "split" around the ace... but there are no cards at all on the left, and (n-1) cards on the right. So you'll do almost as many comparisons as before against the 4. Then against the 5. And so on, until you've compared every card to every other card. This takes O(n^{2}) time. Needless to say, that's bad.
So what we see here is that there's a difference between the complexity of a typical execution of an algorithm and the worst case scenario. Knowing that, we can do things to plan for the worst case.
Unfortunately, sooner or later it's highly likely that you will accidentally try to sort an array that is already sorted. So although it is counterintuitive, we can guard against this by scrambling the array first and then sorting it. Scrambling is easy... all you do is go through the entire array (n operations), pick a random number (picking random numbers takes constant time, so it's O(1)), and swap the current number with the one at the random location.
This takes 1*n moves, so it's O(n) time. Then "scramble + sort" would take O(n log n + n) time. But n is of lower complexity than n log n, and at large n it will be trivially small in comparison. So the "scramble + sort" routine is simply expressed as O(n log n).
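In Python, the scramble pass might look like this. (I'm using the standard Fisher-Yates version of the pass described above, which swaps each position with a randomly chosen earlier one; the cost is still O(n).)

```python
import random

def scramble(a):
    # one pass through the array: n swaps, each with an O(1) random pick
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # a random slot at or before position i
        a[i], a[j] = a[j], a[i]

deck = list(range(1, 14))  # A, 2, ..., K already sorted: QuickSort's worst case
scramble(deck)
print(deck)  # the same cards, almost certainly no longer in sorted order
```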
Of course, you could be super unlucky and wind up with a scrambled array that happens to be perfectly sorted. But as you get a very large number for n, the chances of that happening become extremely small.
Another way to randomize this thing is by picking a random pivot. Instead of always trying to position the first card, pick a RANDOM card, move it to the leftmost location, and put that one in position. This way you fix the problem of potentially starting with the smallest card every time.
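A sketch of that random-pivot variant (I'm using a simpler one-directional partition here just to keep it short):

```python
import random

def quicksort_rand(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    r = random.randint(lo, hi)         # a random pivot defeats sorted input
    a[lo], a[r] = a[r], a[lo]          # move it to the leftmost slot
    pivot, store = a[lo], lo
    for i in range(lo + 1, hi + 1):    # one O(n) pass, as before
        if a[i] < pivot:
            store += 1
            a[store], a[i] = a[i], a[store]
    a[lo], a[store] = a[store], a[lo]  # the pivot lands in its final spot
    quicksort_rand(a, lo, store - 1)
    quicksort_rand(a, store + 1, hi)

already_sorted = list(range(1, 14))
quicksort_rand(already_sorted)
print(already_sorted)  # still sorted, and no O(n^2) blowup expected
```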
Labels: Big O notation, recursion, sorting algorithms
Saturday, March 14, 2009
Recursive sorting, part 1
I thought I could get away with just a brief mention of QuickSort, but apparently some people really want to hear details. So here you go. Today I'm going to talk about something that is not QuickSort, but a closely related technique called Merge Sort. This is tangentially related to the discussion about Big O notation, but it's not really my main focus here. I'll just tell you in advance that, like QuickSort, it runs in O(n log n) time.
In this post we'll cover a fundamental principle of recursion, known as "divide and conquer." First we'll talk about literally dividing and conquering territories of the world; then we'll bring it back to the domain of sorting cards.
Along the way, I'm going to have to explain the basics of recursion to the novices, who may have been a little uncertain about what to make of the story about my dream.
Understanding recursion
Imagine that I am playing the board game "Risk" and I have an objective to take over the world. No wait, better idea. Imagine that I am an Evil Genius, and my actual objective is to take over the world.
The world is a big place. Too big to plan my attack in every minute detail by myself. How am I going to go about this?
Presumably I need military commanders, so I'll assume that I have a number of Evil Generals at my Evil Headquarters, who are expert military strategists. Luckily, the world is already divided up into several logical categories, known as continents. So I formulate a plan. First I'll conquer Europe. Then I'll conquer Asia. Then Africa. Then Australia. Then North America. Then South America. Conquering each region will help me gain additional bases, which will aid me in the remainder of my nefarious plans.
So I pick my six finest generals, and I say "Okay, listen up minions! YOU. Conquer Europe. I don't care how you do it, just get the job done and report back to me. YOU. Conquer Asia." And so on. I assign one continent to each general. Then I sit back and relax, confident that my brilliant generals will each do their job thoroughly and well. My part in the plan is finished.
But continents are still big places, so my Generals now have quite a job in front of them. Consider the poor General whose job it is to conquer Europe. Where does he start? Well, like me, he divides up the work, and the logical way to do that is country by country. "First I'll conquer France," he says. (Ha ha, get it? That's apparently screamingly funny if you're a neoconservative. I aim to please a wide audience.) "Then I'll conquer England. Then Germany." Etc.
So my Evil General calls in all of his Evil Colonels, and he says "Okay, you get France, you get England, you get Germany," etc.
The Colonel in charge of conquering England then calls all his Evil Majors and explains "You conquer London, you get Southampton, you get Lancashire," etc.
You can expect this process to trickle down through several more layers of authority, but here's the thing: you can't go on subdividing the territories forever. You can't run an organization with only a bunch of managers, because sooner or later somebody has to do the actual work. You can't have some little underling running around saying "Okay, you conquer this microbe, and you conquer that microbe."
What you need is a terminating condition. On some level, you simply have to have a whole lot of Evil Privates, and the Evil Sergeants will point at the enemy and say "Go now and kill that guy over there." Privates don't get to delegate.
So my Privates kill people, which is completely different from what everyone else is doing. Everyone else is just splitting up the territories and telling somebody else to do the work.
Eventually, all my Privates report to all my Sergeants, announcing "Territory secured, sir!" Then all the Sergeants report to their respective Evil Lieutenants, and so it goes. Up the chain, eventually reaching my Colonels, who say "Country secured!" and my Generals then come to me and say "Continent secured!" and then I chortle with glee in my Evil Lair. "I've done it!" I say. "I own all the continents, and the world is now mine. MINE! Mooha, mooha, mwahahahahaha haaaaaaaaaaa!!!"
Or words to that effect. I'll work on my speech when the time comes.
Today, Merge Sort. Tomorrow, the world
I give you an entire deck of cards to sort, and I say "Put these cards in order. You have three minutes." I know, those instructions are a little unclear, because the suits can come in any order. Typically in a new, unshuffled deck, all the spades come first, then the hearts, then diamonds, then clubs; and within the suits, the cards are ordered from Ace to King. We'll assume that you and I both know and agree on the order of a "correctly" sorted deck.
So you have three minutes to sort the cards. Now, you know your cardsorting abilities really well, and you think to yourself, "Hmmm, I know how fast I can work. I can finish about half a deck in two minutes. If I do that I'll finish almost all the work, but not quite. I'm gonna need some help."
So you call in a friend, who also knows everything there is to know about correctly sorted decks, and works just as fast as you do. You say to your friend "Hey, glad you're here. Here's what I want you to do. I'm going to split these cards in half, and give half to you. You sort your half, I'll sort my half. Then when I'm done, we'll deal out our two sorted half-decks into one pile, or 'merge' them, and we'll have a whole sorted deck."
So you divide up the deck into two. You have 26 cards, which you sort using your favorite technique (you can use selection sort if you want). Your friend sorts his pack of 26 cards, and then you get together.
You have the ace of spades. You lay it face up on the table. Your friend has the two of spades. He lays his down. Your friend also has the three, so he lays another one down. You have the four, so you lay it down. You continue until the whole pack is done; flip the pack and you can deal them out in perfect order, ace first.
You finished your deck in two minutes, and your friend finished his deck in two minutes. Both of you worked in parallel, so the two of you were done in just two minutes, not four. Merging your decks goes pretty quickly, only taking up another minute. Three minutes have elapsed, and you're done.
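Here's the whole scheme as a Python sketch, friends and all; each recursive call plays the role of one helper (the function names are mine):

```python
def merge(left, right):
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # lay down whichever card is lower
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])            # one pile ran out; deal out the rest
    merged.extend(right[j:])
    return merged

def merge_sort(cards):
    if len(cards) <= 1:                # a single card is already sorted
        return cards
    mid = len(cards) // 2
    left = merge_sort(cards[:mid])     # you sort your half...
    right = merge_sort(cards[mid:])    # ...your friend sorts his
    return merge(left, right)          # then you deal the piles together

print(merge_sort([8, 4, 13, 1, 11, 5, 9]))  # [1, 4, 5, 8, 9, 11, 13]
```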
Double your fun
I tell you: "Great job sorting that pack, now I've got more for you to do. I have two different packs of cards this time, all mixed together. I want you to sort them out into two piles, one with blue backs and one with green backs. I know this will take more time... I'll give you five minutes."
How long does it take to sort a whole deck? From the last job, we know it takes three minutes. Clearly five minutes won't be enough for two decks, so we're going to need more friends.
So we get two more people in on the action. You tell your friend: "I'm going to split up these 104 cards into two piles of about 52 cards each. You get your half sorted, and I'll get my half sorted. Sort them the same way you did before, but this time always put blue cards before green cards."
But selection sort is way too slow, so you then take your 52 cards and turn to another friend. You say "Now I have 52 cards. You take 26 and I'll take 26, and we'll each sort them in two minutes."
Two minutes later, you're both done. You take another minute to merge the decks. Now you have 52 sorted cards. It's been 3 minutes.
You turn to your friend and discover that, with the help of the fourth guy, he also has a sorted pack of 52. You merge the cards; there are more than before, but you finish dealing them together in another 2 minutes. Five minutes are now up, and you have a single pile of perfectly sorted cards. Then all you have to do is separate them in the middle, to get two decks.
Generalizing the solution to "n"
By now I'm sure you've got the idea. I can sort four decks, or eight decks, or any number of cards I want. All I need is more friends. Your friends aren't "free": they take up a lot of space, demand lunch breaks, and if you're asking for a lot of work they might want to get paid. If you don't like that, then you're free to go back to selection sorting the whole deck. But with one person performing an O(n^{2}) algorithm, the job will take hours if you have a lot of cards. Much better to keep dividing and subdividing your packs until you have a manageable little pile, and then it gets easier to merge a few piles at a time.
The analogy to taking over the world should be clear. "Divide and conquer" is in fact a well-known program-writing technique. Splitting the deck and handing the job to friends is directly analogous to dividing the world into smaller pieces of land, and then handing them to lower-ranking underlings.
Another point this illustrates is the frequently recurring theme of trading space for time. As I said before, the more friends you have sorting cards, the more costly the job is. Inside the guts of your computer, what this means is that using a recursive algorithm eats up more of your computer's memory. This can cause everything else in your operating system to run slower, if you start using up too much memory.
On the other hand, computers these days have a lot of memory, and unless you're trying to run World of Warcraft while you're computing, most of your memory is often sitting idle. Why not throw more storage at the problem, if the time saved is so great?
Better yet, thanks to advances in parallel computing, it's now possible to actually physically divide up the work among multiple machines. You order one computer to sort half the cards, and another computer to sort the other half. Then you just take up a little network traffic to have them report to a master server, which consolidates the information.
One example of a popular program that takes this approach is the SETI@home project. We may never discover intelligent life elsewhere in the universe. But if we want to try, we've got a massive task: collecting readings from different areas of space. SETI@home takes advantage of the idle time on computers whose owners have voluntarily installed the program. Your desktop runs it when you enter screen-saver mode, analyzing different areas of space, and the SETI server gathers whatever interesting information you discover.
Now that everyone's up to speed on the basics of divide-and-conquer, I'll go over QuickSort next time.
Thursday, March 12, 2009
Big O Notation 3: Showing off how you think
Some job interviewers ask you to describe your career history in great detail. Some want you to explain the most obscure facets of the language you'll be working with. And some will give you a puzzle and ask you to design a program that solves it.
This last category is by far my favorite sort of opportunity in an interview. I'm a programmer because I love solving puzzles, so showing off that love is far more likely to make me look good than struggling under pressure to remember whether it's ArrayList or Vector that is thread-safe. (Tip: It's Vector.)
When you get asked a puzzle problem, remember above all else that the interviewer is much more keen to know how you think than whether you get the "right" answer. There may be some specific technique that they are hoping you know. But even if the answer is not immediately obvious, don't panic. If you sit there and explain your reasoning, refining your technique out loud, you can make up a lot of ground by showing that you can think on your feet.
A good example is the question I got from the Director of Engineering at Digital Motorworks. Not only did I get the job, but that portion of the interview went so smoothly that I wrote down the question, my answer, and his reaction to save for posterity. Here is a recap of the problem:
You're responsible for implementing a method called "sumExists." This method takes two arguments: an integer, and an array of integers. The return value is a boolean, which is true if and only if there exists some combination of two distinct elements in the array that sum to the original integer.
For instance, suppose you call the method in this way:
sumExists(5, [7, 1, 2, 6, 4])
The array contains the elements 1 and 4, and 1+4 = 5, so the method should return true.
I immediately jumped to the most obvious solution, not because I expected it to be right, but because I wanted to find some answer quickly and then improve it. I said:
"If you wanted to use brute force, you could just inspect every element of the array, and compare it to every other element of the array, and see if they sum to the first number. But that would require you to perform O(n^{2}) operations, which is dumb. So I would like to find a better solution.
"Suppose that we sort the array. That would take O(n*log n) time (QuickSort, remember?) to complete. Where does that get us? Well, then we could quickly find the largest number and the smallest number of the array and compare them to each other. We'll put counters on the left and right ends of the array, and add those values. If they add up to the target, then we're done. (Note: in the example, the sorted array is [1, 2, 4, 6, 7]. The high and low numbers are 1 and 7, which add up to 8. Too high.)
"If the resulting number is too high, then the biggest number must be too big. So we'll move the right hand counter to the left and try adding a smaller number to the smallest. On the other hand, if the sum is too low, then the smallest number must be too small. So we'll move the left hand counter to the right and try again.
"By doing this we'll be able to test pairs of high and low numbers until either we find the sum we need, or the counters meet in the middle. If the sum is correct then we return true. Otherwise, return false.
"Scanning the list in this way requires only one pass, which takes O(n) time. But the sort already took longer: O(n*log n) time. The time to scan a large array once is much smaller than the time to sort it, so we ignore that factor. This solution takes O(n*log n) time."
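For the curious, here's roughly what that answer looks like as code (a Python sketch; the name is borrowed from the problem statement):

```python
def sum_exists(target, numbers):
    nums = sorted(numbers)        # O(n log n): sort first
    lo, hi = 0, len(nums) - 1     # "counters" on the low and high ends
    while lo < hi:                # one O(n) pass
        s = nums[lo] + nums[hi]
        if s == target:
            return True
        if s > target:            # sum too high: the biggest is too big
            hi -= 1
        else:                     # sum too low: the smallest is too small
            lo += 1
    return False

print(sum_exists(5, [7, 1, 2, 6, 4]))  # True: 1 + 4 = 5
```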
As I said, I wrote down the response and saved it. So this is more or less a direct quote: "I've been doing this for a long time, and there is not one person in a hundred who has given as thorough and correct an answer as you just did."
There were other difficulties with getting this job, based on my short experience with specific technologies. I'm convinced, though, that this was the best part of my series of interviews, and it got my foot wedged pretty firmly in the door.
Writing software is fundamentally different from working on an assembly line. You don't solve the same problem twice. You don't write the same code twice. If you are repeating yourself, then you are in some way failing to leverage past experience. You should be moving the duplicated code into a common function, or a common library, or something, so you can stop working on problems that have already been solved, and get on with the business of making your program do something new and interesting.
Because of this nonrepetitive approach, it's important to be able to think about every problem from multiple angles, to try and relate it to past experience and evaluate different kinds of solutions. Big O Notation is one tool for being able to evaluate how you're doing.
Wednesday, March 11, 2009
Count on this
Let's switch things up a bit and just do something for fun. How high can you count on your fingers? Ten, you say? Amateurs. I'm going to teach you how to count much, much higher... without even taking your shoes off.
However, you're going to have to learn to count in binary first. Again, this is a very basic topic to computer science students, but I hear that there are plenty of non-programmers who read this blog. For their benefit, I'm going to give a primer. If you already know binary, feel free to skip down to the next bold header.
Other numbers that form obscene gestures to Americans: 128; 132
Numbers that may be obscene to Europeans: 6; 384; 390. They may also be interpreted as peace signs or a Richard Nixon impression.
What can you do with this? Not much. You can kill about ten to twenty minutes. You can also probably perform a cheap mindreading trick if you have an accomplice who can also do it. ("I'm thinking of a number from one to a thousand." "583. I certainly wasn't paying attention to the funny way you're holding your hands.") And finally, you can waste a perfectly good afternoon writing a silly blog post.
How to count in binary
When you count by tens, the digits can be represented as wheels on the odometer of your car. The rightmost wheel goes:
 0, 1, 2, 3, 4, 5, 6, 7, 8, 9…
and when it rolls over from 9 back to 0, it advances the next wheel, giving:
 10, 11, 12, 13, 14, 15, 16, 17, 18, 19…
Things are pretty much the same from here on out, until we get up to:
 97, 98, 99…
at which point the third wheel finally turns and you hit 100.
Okay, that's base 10. Binary is a similar system, except that it's got a severe shortage of numerals. Whereas we were using all the numbers from zero to nine, in binary you only have two possible values: one, and zero. Like a light switch: on, or off. Circuit open, or circuit closed. That's how a computer works. But with just ones and zeroes, you can represent every positive integer if you have the patience.
Let's imagine we have an odometer like before, but this one has very small wheels. They flip from 0 to 1. Then they flip from 1 back to 0. When they flip to 0, they advance the next wheel. Okay?
So your odometer reads:
 0: 00000
 1: 00001
 2: 00010
 3: 00011
 4: 00100
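That wheel-flipping rule is itself a complete algorithm for counting. Here's a toy sketch of the five-wheel odometer in Python (my choice of language for these examples, not the post's):

```python
def tick(wheels):
    """Advance a list of binary wheels (leftmost is the highest place).
    Flip the rightmost wheel; a flip from 1 back to 0 carries left."""
    for i in reversed(range(len(wheels))):
        if wheels[i] == 0:
            wheels[i] = 1      # 0 -> 1, no carry; we're done
            return wheels
        wheels[i] = 0          # 1 -> 0, keep carrying leftward
    return wheels              # every wheel rolled over: the odometer wraps

odometer = [0, 0, 0, 0, 0]
for n in range(5):
    print(n, "".join(map(str, odometer)))
    tick(odometer)
# prints the same table as above: 0 00000, 1 00001, 2 00010, 3 00011, 4 00100
```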
In base ten, every digit represents a value that is ten times the previous digit. So the first digit is worth one, the second digit is worth ten, the third digit is worth a hundred, the fourth is worth a thousand, and so on.
Similarly, in base two, every digit represents a value that is two times the previous digit.
 1 → 1
 10 → 2
 100 → 4
 1000 → 8
 10000 → 16
 10110 = 16 + 4 + 2 = 22
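That place-value rule is easy to check in a few lines of Python (again, my choice of language here):

```python
def binary_to_int(bits):
    """Convert a string of 0s and 1s to an integer by summing place values.
    Each digit is worth twice the digit to its right, exactly like the
    odometer wheels: 1, 2, 4, 8, 16, and so on."""
    total = 0
    for digit in bits:                  # walk left to right
        total = total * 2 + int(digit)  # shift what we have up one place
    return total

# 10110 = 16 + 4 + 2 = 22, matching the worked example above
print(binary_to_int("10110"))  # → 22
```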
How to count to 1,023 on your fingers
 Hold out your hands, palms upward. Now make them fists. No fingers are extended. That's zero.
 Extend your right thumb. The rightmost digit is now set to "on". That's one.
 Close up your thumb, but extend your right index finger. That's two.
 Extend your thumb again, and also leave your index finger out. Three.
 Close up your thumb and index finger, but then extend your middle finger. That's four. It's also an obscene gesture to most people, so if you're in public, make sure your hands are under a desk or something.
That's really all there is to it. The highest you can go with this system is 1111111111 in binary, which is 1,023 in regular counting (2^{10} - 1).
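If you'd rather let a program tell you which fingers to extend, here's a sketch in Python. The finger layout is my own assumption (right thumb as the 1s place, continuing through the left pinky at 512); the post doesn't prescribe one:

```python
# Assumed layout: right thumb is bit 0 (worth 1), through right pinky (16),
# then left thumb (32) up through left pinky (512).
FINGERS = [
    "right thumb", "right index", "right middle", "right ring", "right pinky",
    "left thumb", "left index", "left middle", "left ring", "left pinky",
]

def fingers_for(n):
    """Return the list of fingers to extend to show n (0..1023) in binary."""
    if not 0 <= n <= 1023:
        raise ValueError("ten fingers only go up to 1023")
    return [name for bit, name in enumerate(FINGERS) if n & (1 << bit)]

print(fingers_for(4))     # → ['right middle'] ... watch where you point that
print(len(fingers_for(1023)))  # → 10, every finger extended
```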
Other numbers that form obscene gestures to Americans: 128; 132
Numbers that may be obscene to Europeans: 6; 384; 390. They may also be interpreted as peace signs or a Richard Nixon impression.
What can you do with this? Not much. You can kill about ten to twenty minutes. You can also probably perform a cheap mindreading trick if you have an accomplice who can also do it. ("I'm thinking of a number from one to a thousand." "583. I certainly wasn't paying attention to the funny way you're holding your hands.") And finally, you can waste a perfectly good afternoon writing a silly blog post.
By the way... if you DID take your shoes off, you could count to 1,048,575.
Tuesday, March 10, 2009
Big O Notation 2: Sorting
As a Card-Carrying Nerd (you have to mail away exactly 314 cereal box tops for your card) I find entertainment in some very unusual activities. For example, I like to execute algorithms by hand. When I find a pack of cards lying around, I often find myself shuffling them, dealing out about 10-20, and running a QuickSort on them. Sometimes when I have a stack of old bills and I want to eliminate duplicates while retaining the latest copy for my records, I'll even lay them out in a grid on the floor and perform a bucket sort.
Sorting algorithms are material from Intro to Computer Science 1, but I'm going to give a very condensed overview of the topic as background material to what I'm going to go over in a future post. If you're already rock solid on sorting algorithms, I suggest you skip or skim this one, and I promise I'll get to more practical discussions later.
You'll remember that in the last post on Big O, I explained that a "dumb dictionary search" (just viewing each word one at a time) requires O(n) time, while a more efficient search (also known as a "binary search") requires only O(log n) time, which is a lot shorter. But the binary search only works if the words are in some kind of order already. If they're all scrambled up - if "xylophone" comes before "cereal" comes before "pheasant" - then you have no choice but to execute a dumb search. There's no other way to find your word besides looking at every entry to make sure you didn't miss any.
Therefore, it's often helpful to sort a list of things (such as dictionary words). Of course, sorting a dictionary might take longer than actually searching it. But if you have to search the dictionary multiple times, it's worth it. You eat the cost of sorting only once, but you get to do binary searches as many times as you want after that.
Suppose we take a pack of cards and deal out "n" cards at random. Let's make this quick and pick 5. I'll imagine they came out like this:
 5, A, 7, K, 3
How long will it take to put these cards in the correct order? Well, naturally it can't take less than O(n) time. It's easy to see why. Suppose you had a magic function which automatically skimmed the entire list in no time at all to find the smallest number remaining. Then you could scan these cards and find the Ace first, and put it in the first spot. Then you find the 3, and put it in the second spot. And so on. Then the sequence of events would look like this:
 A, 5, 7, K, 3 (we put the ace in the first spot, but we needed to move the 5 out of the way, so we just swap their positions)
 A, 3, 7, K, 5 (3 swapped with 5)
 A, 3, 5, K, 7
 A, 3, 5, 7, K (sorted!)
This is the most efficient sort possible, but even with our magic function you still have to move each card to the correct location. There are n cards, so it takes n moves. So no matter what happens, we can't do better than O(n). (Technically, it took "n-1" moves to finish, because once the first four cards are sorted, the king MUST be in the right place. There's nothing to swap with. But remember how I said before that constant multipliers don't matter? Subtracting a 1 definitely doesn't matter. In fact, we ignore any constants that are added to or subtracted from the outcome. Order of magnitude is all that counts.)
Unfortunately, we don't have a magic function that takes no time to find the Ace, so we actually have to read the cards in order to find it. How do we do this? Well, we could read the entire list manually, finding the Ace by inspecting every card. This is a technique known as a "selection sort," for the obvious reason that we are looking through the unsorted list to select the smallest number. Like in the dumb dictionary search, that would take O(n) reads to accomplish (technically n/2 on average, but constant multipliers don't matter).
But it's worse than that. Because we have to move n cards, AND we have to read n cards each time we move one. So we do n*n things, which means that this sort is O(n^{2}).
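Here's what that selection sort looks like as a Python sketch, with card faces standing in for numbers (A=1, K=13):

```python
def selection_sort(cards):
    """Sort in place: repeatedly scan for the smallest remaining card and
    swap it into the next open spot. That's n-1 passes of an O(n) scan
    each, hence O(n^2) overall."""
    n = len(cards)
    for i in range(n - 1):              # n-1 spots; the last card falls into place
        smallest = i
        for j in range(i + 1, n):       # the O(n) scan for the minimum
            if cards[j] < cards[smallest]:
                smallest = j
        cards[i], cards[smallest] = cards[smallest], cards[i]
    return cards

# A=1, K=13: the hand dealt above
print(selection_sort([5, 1, 7, 13, 3]))  # → [1, 3, 5, 7, 13]
```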
Terrible. Remember, our dictionary had 170,000 words. You square that, and you're doing almost thirty billion operations. There must be a better way.
There is, and one example is QuickSort. QuickSort is a recursive algorithm that is a little bit tedious to explain in a blog post, but it's so successful that it is implemented as a standard feature of many languages. If you type "sort @mylist" in Perl, you are instructing Perl to perform a QuickSort in most cases. If you have some time to kill, I recommend that you look up QuickSort on Wikipedia, lay out some cards, and try to figure out how it works. It's kind of cool.
QuickSort usually requires a time of O(n*log n), and you can't do much better than that. Well, there are some tricks you can use to speed it up if you know exactly what sort of data you're dealing with, even getting pretty close to O(n) in some cases. But for the most part, you're stuck with n*log n. And this makes sense: in the dictionary example we discovered that you can find a word in O(log n) time, and it takes O(n) time to move all the cards, so in the best case we are going to find each card as quickly as possible and then move it.
If you sort a 170,000 word dictionary in O(n*log n) time, how long will it take? Log_{2} 170,000 is about 17, and multiplied by 170,000 itself, that's about 3 million operations. That's not fast, but it's nowhere near as bad as 30 billion.
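For the curious, here is one way QuickSort can look in Python. This is the short, copy-happy version for readability, not the in-place variant that real library implementations use:

```python
import random

def quicksort(items):
    """Recursive QuickSort: pick a pivot, partition the rest into smaller
    and larger piles, and sort each pile the same way. Average running
    time is O(n log n)."""
    if len(items) <= 1:
        return items                      # a pile of 0 or 1 cards is already sorted
    pivot = random.choice(items)          # random pivot dodges bad-luck inputs
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 1, 7, 13, 3]))  # → [1, 3, 5, 7, 13]
```

Laying out cards and tracing the partition step by hand, as the Wikipedia suggestion goes, makes the recursion click much faster than staring at the code.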
This is the last purely academic post I'm going to write about Big O notation. By now, I'm sure some readers are asking "What's with all this theory? Am I ever going to use this professionally?" My answer is: "You might. For one thing, you can use it to show off in job interviews." Next time I post about Big O, I'll explain how.
Thursday, March 5, 2009
Y2K supplemental
I was looking for a good article that explained Y2K from all angles - how the problem arose, and why it ultimately wasn't as serious as people thought. I couldn't find it before posting, but I like this 1999 article by Tom Christiansen, a major contributor to the Perl programming language. Not only is Tom an excellent writer, but he accurately dismissed Y2K concerns as overblown before it hit. You gotta respect a guy with that kind of clear thinking, as well as the courage to voice an unpopular opinion.
A retrospective on the terrifying millennium bug
It's December 31, 1999. I've got some neighbors over at my house watching movies, and we're all waiting for the rollover. One of them asks me what time it is. I check my watch. "11:58," I say. "I guess it's time for the world to end. Want to go outside and watch the fireworks?" Everybody agrees.
We go outside, and there are fireworks, all right. My neighborhood is full of people who love fireworks, and they never fail to put on a good show twice a year, on New Year's Eve and Independence Day.
Glancing up, I wryly remark: "The street lights are still on. Maybe it has to be Pacific Time."
It's the end of the world as we know it
For those of you who are too young to remember or who didn't pay attention to the news from about 1997 onward, January 1, 2000 was the day that civilization as we know it was supposed to collapse. You see, many computer programs at that time stored dates as two-digit numbers, which would confuse many computer systems at rollover. A year of "00" could just as easily mean "1900" as "2000."
Therefore, it was theoretically possible that numerous programs would break in a spectacular fashion, causing bills due in 2000 to be flagged as 100 years overdue. With interest. People spent several of the preceding years gloomily talking about how all the computers of the world were completely interconnected, and therefore any systems that did not address this problem might somehow corrupt all the other systems they were connected to. The Simpsons even devoted a hilarious segment of their annual Halloween special to Y2K in 1999.
Here's what one high profile Y2K alarmist, Gary North, had to say about Y2K (emphasis added):
Who knows where it will start? Only one thing seems certain: it WILL start. And when it does, every market institution and every government will suffer enormous setbacks. ...
Banking is the obvious domino. Here's another: shipping. What happens to cities if gasoline is unavailable to truckers? If the computers that control train schedules break down? If rail freight cars cannot be located by defective computers? Think about your supermarket's shelves.
What happens to production when "just in time production" becomes "bottleneck production"?
Here's another: farming. Modern commercial farming is tied to hybrid seeds. The plants produced by hybrid seeds produce seeds that will not produce healthy plants if planted. Every year, almost every large farm on earth must reorder another batch of hybrid seeds. If, for any reason, the seed companies fail, or the banks fail, farmers will not be able to plant anything. This will lead to a famine. Let's not hedge our words: FAMINE. There is no way today to get enough nonhybrid seeds into production in order to avoid this problem. If this is one of the dominoes, the result will be widespread starvation.
Hey, everyone! Remember that one time when the banks failed, governments collapsed, truckers were immobilized, people everywhere were starving, and civilization as we know it ended? Good times, good times.
In the immortal words of John Cleese, "I got better..."
Disclosure: Gary North was and is an extremely weird dude. He's a dominionist - a very special strain of religious zealot who believes that democracy should be eliminated in favor of a Biblical variety of sharia law. Complete with public stonings. North has made a career out of predicting the end of the world at regular intervals, and in the late 90's he just happened to get more traction than usual.
Even so, panic over Y2K was widespread. I was teaching programming classes at that time, and the course requirements forced me to give students an essay and presentation assignment. Y2K was one of the possible topics I assigned. Three or four students chose to write about it, and every single one of their papers unambiguously predicted doom. Sometime in January 2000, I wrote a mass email to all the former students whose addresses I had, joking "Everyone who wrote about Y2K last year will now retroactively fail the class."
Funnily enough, I personally fixed a Y2K bug. I was working at the time for Bike.com, one of those multitudes of "dot com" companies that was ultimately doomed to fail, but hadn't gotten the memo yet. I wasn't a full-time employee; I worked for Compuware, a development shop that farmed me and many other mercenaries out to various little companies for a limited time. As such, I had a grand view of many such companies during the infamous tech industry crash, but I didn't go down with any of those ships - at least not until Compuware ran out of work for me and then folded their Austin branch entirely.
Anyway, January 1 fell on a Saturday, so I didn't get back to work until the third. "We have a problem," my supervisor told me. "Some of the dates in the bike sales page say that they went online in the year 19100."
I hunted through the code for the offending page, and quickly figured out what was going on. The code was written in Perl, whose date functions represent the year as an integer counting up from 1900. So the year "1998" is represented as "98," and so on. Some bright developer who came before me had decided that the proper way to write the full year was "19 . $year". The period is Perl's concatenation operator, so this bit of code translates to "write the number 19 and then write the year." Obviously, when the year became "100" the long text representation became 19 followed by 100, or 19100.
That was easy to fix. I replaced the period with two zeroes and a plus sign, making it display "1900 + $year", because 1900 + 100 = 2000. I searched all the code for variations of the string "19 ." and performed the same trick everywhere. The whole operation took less than 20 minutes. Then I went to my supervisor and announced "Your Y2K problems are all fixed. That will be ten million dollars. In tens and twenties, please."
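The bug is easy to reproduce in any language. Here's the same mistake and the same fix sketched in Python (function names are mine; Perl's year-since-1900 convention is the real part):

```python
def buggy_display_year(years_since_1900):
    """The original sin: glue the text "19" in front of the year count.
    Works by coincidence for 0..99, falls apart at 100."""
    return "19" + str(years_since_1900)

def fixed_display_year(years_since_1900):
    """The 20-minute fix: do arithmetic, not string pasting."""
    return str(1900 + years_since_1900)

print(buggy_display_year(98))   # → '1998' (looks fine for decades)
print(buggy_display_year(100))  # → '19100' (the bike sales page bug)
print(fixed_display_year(100))  # → '2000'
```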
...And I feel fine
If there's a lesson to be drawn here, my take is that software systems are way more resilient than we give them credit for. People tend to have a certain degree of unreasoning superstition about computer programs. They are these mysterious and terrifying black boxes that are supposed to serve us, but occasionally fail in inscrutable ways for their own malicious reasons.
Minor tangent: Many people I talk to seem genuinely fearful that the scenario in Terminator or The Matrix really will play out someday; we'll wake up and discover that SkyNet has become intelligent, and also mad as hell at the human race, not to mention being a supergenius planner that can outsmart everyone on Earth. I always have to point out to these people that it took human beings two billion years of evolution to become as smart as they are. Even with the benefit of all that complex software in their brains, it takes every human child some eighteen years to explore the world, form connections and memories, and become a fully functioning adult.
I kind of think we'll notice robots becoming self-aware pretty far in advance, and it will probably be on purpose. Plus, there's no reason to assume that intelligent robots will naturally turn on their creators, just as we don't assume that every child will eventually desire to kill his parents. There are some outliers who become sociopaths and go on a mass murder spree, but most kids do actually grow up to be non-maniacs.
For the time being, all our efforts at what we laughably call "Artificial Intelligence" still result in programs that are fairly dumb, and extremely narrow in their scope. Computers actually do what we tell them to do. We programmers frequently tell computers to do the wrong things, in the wrong ways, or fail to be sufficiently precise. But robust systems like banking are generally written in such a way as to be extremely modular, so that failures are local problems that don't destroy all the data, or accidentally transfer millions of dollars into your bank account. (Sorry, fans of Office Space... it's really not that easy to rip off a bank. They wouldn't stay in business very long if it were.)
Certainly there are cases where a misapplication of computers can do something so dangerous that it jeopardizes national security. Here's an interesting recent story of government secrets being sent to Iran via BitTorrent. But as you might expect, the computer itself is not the villain: like deliberately written viruses, stealing files in this manner is a deliberately designed spy technique that is executed to take advantage of existing software. It takes an intelligent agent to make computers do something harmful in a creative way.
So I remain highly skeptical of a catastrophic bug that will magically, accidentally, without deliberate effort, somehow cause a simultaneous collapse of millions of computer systems worldwide. In the Simpsons episode I mentioned, all the computers in the world are fixed, but because Homer fails to fix his own particular system, that one failure somehow "infects" the rest of the internet. Comedy!
But the truth is a lot more mundane: individual programs screw up all the time. The better ones are written in a way to isolate and localize the errors. The worse ones either fix their problems or go obsolete.
Many companies were destroyed in the aftermath of 2000, but it wasn't the fault of the millennium bug; it was just another example of human error. In this case, it was poor business planning. Go to bike.com now if you want. There's no impressive web site with a massive database of bicycle sales. Instead, it's a small shell of a site that hosts a few third-party ads. This isn't the company I worked for; those guys probably went bankrupt long ago because very few people actually want to buy bikes on the internet. The domain has been taken over by a company whose business strategy is barely more ambitious than cybersquatting.
So worry about failures due to ordinary hubris, sure. But don't worry about a massive robot uprising, or a simultaneous failure of all the world's software. And if you want to rip off investors, there's no need to count on obscure computer glitches, just do it the oldfashioned way: lobby to roll back industry regulations, invest in superrisky schemes, and then push your losses on the taxpayers on the grounds that your company is too big to fail.
Tuesday, March 3, 2009
Speed up your programs with Big O (part 1)
Part of what inspired me to do this blog was that I found myself explaining "Big O Notation" to two coworkers this week. My coworkers are smart people, but I was a little surprised to find that some aspects of computer science aren't common knowledge.
My intent is to make this blog accessible to software amateurs, but many readers will probably have heard this stuff before. Hopefully I can at least make the description entertaining, because I think it's an important topic to be familiar with. If you happen to be one of those people who dimly remembers this topic from your "Intro to Computer Science" class, but forgot it promptly, it's time to learn it again. It matters.
Job interviewers often ask questions that require you to make up a program that solves a programming puzzle. I have impressed the hell out of numerous interviewers by making up a solution, analyzing the solution in Big O, and then coming up with a more efficient solution. I didn't always get those jobs, but the feedback was always "You seem like an excellent candidate, very solid on the fundamentals, but you don't know enough of the specific technologies we work with." When I got the job at Digital Motorworks last year, the director of engineering flat-out told me that mine was the best answer he ever heard to his question. So pay attention!
A "dumb" dictionary search
Suppose you are looking for a word in a printed dictionary. There are two ways you can do it, a dumb way and a smart way. The dumb way is: read the first word. Is that the word you want? No. Go to the next word. Is that the word you want? No. Go to the next word. And so on.
I just looked up how many words are in a typical dictionary. It's about 170,000. So how long will it take you to find the word you want? Well, it depends, of course. If you are looking for "aardvark," then you will have to look at only one word, or just a few. If you are looking for "zymurgy," then you will have to look at nearly all 170,000. On average, you can expect to read through 85,000 words: half of 170,000.
Get Smart
Nobody reads the dictionary that way, of course. Think about how you would really look up a word, such as "program." Open the book to the middle. You find the word "mouse." That is before the word you want, so divide the right side of the dictionary in half, pick another page, and look again. "Teacher." That is after the word you want, so divide the space between "mouse" and "teacher" in half and look again. And so on.
This technique is much more efficient than the dumb search, because it takes advantage of the fact that the dictionary is sorted for you. How long does it take? When you first open the dictionary, you are splitting the search space in half: instead of 170,000 words, you now have 85,000 words to look at. Then you split the search space in half again. With each page turn, you halve the number of words remaining for you to look at. So here's how it goes:
 170,000
 85,000
 42,500
 21,250
 10,625
 5,312
 2,656
 1,328
 664
 332
 166
 83
 41
 20
 10
 5
 2
 1
170,000 is a very specific number of words for a dictionary to have, so let's generalize this. The number of words in the dictionary is "n". How long did the dumb search take? We read on the order of n/2 words. Therefore, we say that solution took O(n/2) operations. That's all "Big O" means: "O" stands for "Order of." Nothing magical here, right?
How about the smart search? It took 18 searches, because
2^{18} > 170,000
(17 wouldn't be enough. Check.) If you fiddle with that equation, you'll find that
18 > log_{2} 170,000
So, to speak more generally, the smart dictionary search is O(log_{2} n).
With me so far? Now I'm going to get slightly tricky.
When I said the dumb search was O(n/2), the "/2" part is a constant multiplier: it represents the number 1/2, which is always the same. You take O(n) and you multiply it by one half. But really, we don't need to be that specific. We don't know exactly how long each of the operations will take, so when we are estimating compute time, it's pointless to factor in a constant. For instance, if you do 100 operations that each take half a second, or you do 50 operations that each take a whole second, it amounts to the same thing.
So, by convention, we ignore constant multipliers and don't include them. Instead of saying that the dumb search is O(n/2), let's just say that it takes O(n); all we care about is finding the right order of magnitude anyway.
Similarly, I said the smart search takes O(log_{2} n). But the "2" subscript in "log_{2}" is unnecessarily specific too; all we care about is that it's a log. So the smart search is O(log n) time.
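In code, the smart search is only a handful of lines. A Python sketch of binary search over a sorted list (the word list is an arbitrary example of mine):

```python
def binary_search(sorted_words, target):
    """Return the index of target in sorted_words, or -1 if absent.
    Each comparison halves the remaining search space, so the running
    time is O(log n)."""
    low, high = 0, len(sorted_words) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_words[mid] == target:
            return mid
        elif sorted_words[mid] < target:
            low = mid + 1    # target can only be in the right half
        else:
            high = mid - 1   # target can only be in the left half
    return -1

words = ["aardvark", "cereal", "mouse", "pheasant", "program", "teacher", "zymurgy"]
print(binary_search(words, "program"))    # → 4
print(binary_search(words, "xylophone"))  # → -1 (not in this tiny dictionary)
```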
Why should you care?
Let's consider some specific values of "n" and see how much time we save by using an O(log n) search instead of an O(n) search.
 n=10 → log n=1
 n=100 → log n=2
 n=1,000 → log n=3
 n=10,000 → log n=4
 n=100,000 → log n=5
 n=1,000,000 → log n=6
Obviously, if you are searching ten values, you can probably get away with scanning the entire list, but if you are searching a million values, you would rather have it take about 6 reads instead of a million.
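You can watch that gap yourself by counting comparisons instead of trusting the table. A quick Python sketch (note the table above uses base-10 logs for round numbers; binary search's log is base 2, so expect around 20 comparisons for a million items rather than 6 - the order of magnitude is the point):

```python
def count_linear(items, target):
    """Comparisons a dumb front-to-back scan makes before finding target."""
    for count, item in enumerate(items, start=1):
        if item == target:
            return count
    return len(items)

def count_binary(items, target):
    """Comparisons a binary search makes on a sorted list."""
    low, high, count = 0, len(items) - 1, 0
    while low <= high:
        count += 1
        mid = (low + high) // 2
        if items[mid] == target:
            return count
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return count

n = 1_000_000
data = list(range(n))
print(count_linear(data, n - 1))  # → 1000000 comparisons, worst case
print(count_binary(data, n - 1))  # around 20 comparisons
```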
As obvious as that is when I put it like that, I can tell you that there have been a distressing number of times when I've sat down and analyzed code to discover that something which should easily take O(log n) time was written in such a way that it takes O(n) or even O(n^{2}). The reason for this is that the guy who came before me started out with a small list of items, and then failed to plan for a time when the data would grow out of control to a thousand items or more.
You should be planning for this stuff up front. You should be asking yourself "How many values am I dealing with now? How many values can I imagine dealing with in the future?" If your program is dealing with information about cars, for example, you should think about how many cars there are in the world, and what fraction of those cars will eventually be in your database if you're a big success.
Remember that if you don't plan for this, you will have to put in the effort to change it. But also, by the time it gets to that point, you will have forgotten where the O(n²) function is, so you'll have to put in the effort of finding it and decoding the logic too. That's why you want to be mindful of future bottlenecks for your code as early as possible.
Next time I get around to posting about Big O, I'll be writing a little about sorting routines, interview challenges, and strategies to make basic logic more efficient: some or all of those, as time and post length allow.
Monday, March 2, 2009
I dream of recursion
This is a true story. I have told it to my programming classes every time I introduced the topic of recursion.
When I was a little kid, there was a toy that I desperately wanted. We'll just gloss over what it was because it's not in any way important to the story. So one night...
All right, all right, stop whining. I'll tell you. It was... a Ronald McDonald doll. That whistled. Yes, it had a little plastic whistle, and if you squeezed it then it would whistle. As seen on TV. Why in the world would I want that? I have no idea. I just remember that it seemed very important at the time.
So I wanted this Ronald McDonald doll. Very much. But I knew that my dad, who has always been a practical-minded guy, would find this toy, well, stupid. So I didn't have the courage to ask him for it. But then to my surprise, one evening I came home from school, and my dad said, "Russell, today we're going to Toys R Us to pick up that Ronald McDonald you've been wanting."
Needless to say, I was ecstatic. We got in the car, drove to the store at the mall, picked up that toy, brought it home, and...
...Then I woke up.
As bitterly disappointed as I was to learn that it was all just a dream, I was filled with a new confidence. I just knew that my dad was going to buy that toy for me, because the dream had so foretold. So I went into the breakfast nook, where I found my dad waiting. I asked him if he would get me the Ronald McDonald doll that whistles, and he said, "Surprise! I already got it for you! There it is!"
...And I woke up.
But after having been through this experience twice, I was more certain than ever that I would be getting my toy. Sure enough, I went to find my dad, asking "Did you REALLY get me that toy?" He said "I sure did! Look!" I actually got to play with the thing for a while, and then, all together now,
...I woke up.
I think you get the idea. It feels like the sequence repeated itself about ten times or so, although of course there's no way to tell. Each time the details of the nested dream were a little different, but the end result was always the same: I was confident that I'd get the toy, I did get the toy, and I woke up. And each time I woke up, the world seemed a little more "real" to me than the one in the previous dream.
Except for the last time. When I really woke up, I knew I was really awake. I didn't even need to bother asking my dad about it in the real world. I know reality.
I never did get that toy. That probably explains exactly why I am the bitter cynic I am today.
The tricky part about this story is how to translate it from amusing anecdote to an actual concept of computer science. If this were true recursion, I would probably remember going to sleep ten times in a row at the beginning, then getting the toy for the first time, and so on. Instead, I started in mid-dream. But nobody ever remembers going to sleep anyway, so we can assume that happened in there somewhere. So if I had to pseudocode this story, it would look like this:
class YoungRussell {

    void sleep() {
        lieDown(bed);
        fallAsleep();
        dream();
        // perform operations associated with
        // waking up here
    }

    void dream() {
        if (someTerminatingCondition() == false) {
            // no actions before falling asleep
            // again in the dream
            sleep();
        }
        StupidToy ronald = new StupidToy();
        obtain(ronald);
        playWith(ronald);
    }

    // ...
    // auxiliary methods here
}

Wait... where's the actual waking-up routine, you ask? Well, you don't need one. Reaching the end of the "dream" routine automatically drops you back into the conclusion of the "sleep" routine, and when "sleep" ends, you just wake up.
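If you want to actually run the sketch, here is a minimal runnable analogue. The depth counter and log are my own additions, not part of the original story: the terminating condition becomes an explicit nesting-depth limit, and each level "wakes up" simply by returning from its call.

```java
import java.util.ArrayList;
import java.util.List;

public class YoungRussell {
    private final List<String> log = new ArrayList<>();
    private int depth = 0;                  // how deeply the dreams are nested
    private static final int MAX_DEPTH = 10;

    List<String> getLog() { return log; }

    void sleep() {
        log.add("fall asleep at depth " + depth);
        dream();
        // reaching here IS waking up: the nested call has returned
        log.add("wake up into depth " + depth);
    }

    void dream() {
        depth++;
        if (depth < MAX_DEPTH) {
            // fall asleep again inside the dream
            sleep();
        }
        // on the way back out, every level gets the toy
        // just before it "wakes up" by returning
        log.add("got the toy at depth " + depth);
        depth--;
    }

    public static void main(String[] args) {
        YoungRussell russell = new YoungRussell();
        russell.sleep();
        russell.getLog().forEach(System.out::println);
    }
}
```

Run it and you'll see the toy obtained ten times, once per nesting level, with each "wake up" landing one level closer to reality.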
I have always found this the hardest thing for people to understand about recursion.