Thursday, March 12, 2009

Big-O Notation 3: Showing off how you think

Some job interviewers ask you to describe your career history in great detail.  Some want you to explain the most obscure facets of the language you'll be working with.  And some will give you a puzzle and ask you to design a program that solves it.

This last category is by far my favorite sort of opportunity in an interview.  I'm a programmer because I love solving puzzles, so showing off that love is far more likely to make me look good than struggling under pressure to remember whether it's ArrayList or Vector that is thread-safe.  (Tip: It's Vector.)

When you get asked a puzzle problem, remember above all else that the interviewer is much more keen to know how you think than whether you get the "right" answer.  There may be some specific technique that they are hoping you know.  But even if the answer is not immediately obvious, don't panic.  If you sit there and explain your reasoning, refining your technique out loud, you can make up a lot of ground by showing that you can think on your feet.

A good example is the question I got from the Director of Engineering at Digital Motorworks.  Not only did I get the job, but that portion of the interview went so smoothly that I wrote down the question, my answer, and his reaction to save for posterity.  Here is a recap of the problem:

You're responsible for implementing a method called "sumExists."  This method should take two arguments: an integer, and an array of integers.  The return value will be a boolean, which is true if and only if some pair of distinct elements in the array sums to the original integer.

For instance, suppose you call the method in this way:
sumExists(5, [7, 1, 2, 6, 4])
The array contains the elements 1 and 4, and 1+4 = 5, so the method should return true.
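
Before getting to my answer: the most literal reading of that spec is a nested loop over every pair, something like the following Java sketch.  (The interview didn't specify a language, so treat this as an illustration rather than the actual code discussed.)

public static boolean sumExists(int target, int[] values) {
    // Compare every element to every later element: O(n^2) pairs.
    for (int i = 0; i < values.length; i++) {
        for (int j = i + 1; j < values.length; j++) {
            if (values[i] + values[j] == target) {
                return true;   // found two distinct elements that sum to the target
            }
        }
    }
    return false;
}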

I immediately jumped to the most obvious solution -- not because I expected it to be right, but because I wanted to find some answer quickly and then improve it.  I said:

"If you wanted to use brute force, you could just inspect every element of the array, and compare it to every other element of the array, and see if they sum to the first number.  But that would require you to perform O(n2) operations, which is dumb.  So I would like to find a better solution.

"Suppose that we sort the array.  That would take O(n*log n) time (QuickSort, remember?) to complete.  Where does that get us?  Well, then we could quickly find the largest number and the smallest number of the array and compare them to each other.  We'll put counters on the left and right ends of the array, and add those values.  If they add up to the target, then we're done.  (Note: in the example, the sorted array is [1, 2, 4, 6, 7].  The high and low numbers are 1 and 7, which add up to 8.  Too high.)

"If the resulting number is too high, then the biggest number must be too big.  So we'll move the right hand counter to the left and try adding a smaller number to the smallest.  On the other hand, if the sum is too low, then the smallest number must be too small.  So we'll move the left hand counter to the right and try again.

"By doing this we'll be able to test pairs of high and low numbers until either we find the sum we need, or the counters meet in the middle.  If the sum is correct then we return true.  Otherwise, return false.

"Scanning the list in this way requires only one pass, which takes O(n) time.  But the sort was already longer, it takes O(n*log n) time.  The time to scan a large array once would be much smaller than the time to sort, so we ignore that factor.  This solution takes O(n*log n) time."

As I said, I wrote down the response and saved it.  So this is more or less a direct quote: "I've been doing this for a long time, and there is not one person in a hundred who has given as thorough and correct an answer as you just did."

There were other difficulties with getting this job, based on my limited experience with some of the specific technologies involved.  I'm convinced, though, that this was the best part of my series of interviews, and it got my foot wedged pretty firmly in the door.

Writing software is fundamentally different from working on an assembly line.  You don't solve the same problem twice.  You don't write the same code twice.  If you are repeating yourself, then you are in some way failing to leverage past experience.  You should be moving the duplicated code into a common function, or a common library, or something, so you can stop working on problems that have already been solved and get on with the business of making your program do something new and interesting.

Because of this non-repetitive approach, it's important to be able to think about every problem from multiple angles, to try to relate it to past experience, and to evaluate different kinds of solutions.  Big-O notation is one tool for evaluating how you're doing.

6 comments:

  1. Nice story, K. Is this the sort of thing you learned in a class or self-study, or is this a sort of intuitive thing--more like a way of thinking--to which you then applied whatever tools you picked up from study?

    I'm guessing it's the latter. I'm also thinking that developing that sort of problem-solving thinking style is far more difficult than remembering which command does what.

    This might be why my previous attempts to "learn programming" have always dribbled out to failure. Memorizing commands isn't easy for me, and learning how to use them is hard as well. But knowing when to use them and why is more difficult.

  2. A large component of it definitely comes from practical experience. You can get the fundamentals from a book, but in some sense there's no substitute for repeatedly seeing the consequences of doing it the wrong way. I'm going to write another post in the future about real world examples where bad code causes very slow performance.

    However, "seeing the consequences" need not be restricted to real world experience. There are any number pretty good books about "Design Patterns"; also books like "The Practice of Programming" and the ever popular series, "Effective C++," "More Effective C++," and "Effective Java." Books like these show you an example of what bad code looks like, and then explains why it's bad and how to do better.

    It's kind of like my approach to arguing over a controversial topic like politics. Anyone can learn the basic vocabulary (i.e., names of politicians, parties, philosophies, and bills), but you'll be caught flat-footed in a discussion unless you read regularly and take some time to synthesize different points of view. Programming is no different.

  3. This is the kind of thing that, when I first started to run across it in my personal studies in CS, convinced me I was in the wrong field. I've been trying to get out of software ever since, but have failed so far.

    Nevertheless, this is still a fascinating aspect of engineering and programming to examine and watch in action. It's the talent for synthesis - taking a set of ostensibly unrelated components and finding new ways to relate and use them in order to solve a problem.

    I've never had that talent - my intellect is structured more like a dictionary. I have an almost limitless capacity to absorb and catalogue information, but a very limited ability to synthesize new information, strategies, etc. from a set of other components.

    I am better at problem solving as time goes on simply because of repeated exposure to pre-invented strategies (and how they work is really fascinating), but I'm poor enough at it for it to really hinder my career in computing science.
    That jibes with what you said about experience.

    But I think I'd be a far better historian or perhaps literary critic, something along those lines.

    I'm still fascinated when I watch a good case of clever synthesis going on tho...

    LS

  4. One thing I'd be interested in seeing you cover in the Big-O series is cases when some operations are more expensive than others.

    For instance, a coworker once pointed out to me that we tend to assume that adding two integers can be done in constant time. But this is only true if the two integers each fit into a register. If you're writing a library that handles arbitrarily-large numbers, with larger numbers taking up more bytes, the speed of addition depends on the size of the integers in question.

    I remember reading an article by Martin Gardner (I think) that presented an algorithm for having an elevator move people between floors as efficiently as possible (requiring the least amount of elevator movement). This turned out to be an analogy for sorting records in place on a magnetic tape. The cost of comparing or sorting data in memory was negligible compared to the time spent waiting for the tape to move.

    A CS instructor of mine presented an even more extreme case: some company decided that the best way to award performance bonuses was to present a group of directors with slides of two employees at a time and have them choose which of the two was better. In this example, comparisons are insanely expensive, and every other kind of operation is pretty much negligible.

  5. According to this index:
    http://www.ms.uky.edu/~lee/ma502/gardner5/gardner5.html

    Gardner wrote about the elevator paradox in Knotted Doughnuts and Other Mathematical Entertainments, Chapter 10.

    Amazon has used copies for about a buck. http://preview.tinyurl.com/df868w
