Showing posts with label computer science fundamentals. Show all posts

Tuesday, August 2, 2016

Tech talk about Pokemon Go



Quick background for people who don't already know: Pokemon Go is a mobile game that launched last month on Android, and more recently on iPhones. It is billed as an "augmented reality game," in that the gameplay incorporates a requirement to walk around in the real world and visit physical locations, and some virtual objects are treated within the game as if they physically exist at those locations. It was immediately popular, and has been plagued with ongoing problems with server response and performance. In this post I will discuss these issues on a level that laymen should be able to understand.

Wednesday, September 14, 2011

A bit about data structures

I wanted to write another HackMaster post, but what I wanted to write about was the way I approached deciphering the data in the tables and converting them into data structures.  Then I skimmed through some older posts looking for reference points about data structures, and it occurred to me that I've never written any. In order to provide a foundation for the rest of the HackMaster breakdown, I'll have to digress and talk about structures in the abstract.

Whenever you are presented with a problem of modeling some numbers in conceptual space, the first thing you have to figure out before you write a single line of behavioral code is what kind of data structures you are going to use.  Going all the way back to the beginning of this blog, I've emphasized the importance of considering the efficiency of your design and the effect that it has on the Big-O performance of your program.  Thinking about proper data structures can buy you a lot of speed, and it can also make it really easy to visualize your program in small chunks as the complexity increases.

So what's a data structure?  The first thing programmers learn is how to use variables for individual chunks of information, like this:
int x = 3;
String str = "Hello world.";
 (Technically, of course, a String object in Java is a whole bunch of characters, which makes it a data structure in itself.  But the nice thing about object-oriented programming is that you don't have to think about it if you want to.)

To understand data structures, consider an array.  An array is one of the first slightly more advanced concepts that a beginning programmer will run into.  Instead of storing just one integer, it can store several.  For example, here's a simple representation of part of the Fibonacci sequence:
int[] fib = new int[10];
fib[0] = 1;
fib[1] = 1;
fib[2] = 2;
fib[3] = 3;
fib[4] = 5;
fib[5] = 8;
fib[6] = 13;
fib[7] = 21;
fib[8] = 34;
fib[9] = 55;
When you create a single "int," you're asking the program to set aside a chunk of space in memory, large enough to hold one number.  When you create an array like this, you're asking the program instead to set aside a bigger chunk of memory ten times that size, plus (for some languages) a little bit of extra information about size constraints and such.
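Incidentally, the ten assignments above are a good illustration of why arrays pair so well with loops: since each Fibonacci number is the sum of the previous two, a loop can fill the whole array for us. Here's a small sketch (the class and method names are mine, just for illustration):

```java
import java.util.Arrays;

public class Fib {
    // Fill an array with the first n Fibonacci numbers (assumes n >= 2).
    static int[] fibonacci(int n) {
        int[] fib = new int[n];
        fib[0] = 1;
        fib[1] = 1;
        for (int i = 2; i < n; i++) {
            // Each entry is the sum of the two entries before it.
            fib[i] = fib[i - 1] + fib[i - 2];
        }
        return fib;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(fibonacci(10)));
        // prints [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    }
}
```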

But arrays can be wasteful.  What if you want to set aside space that sometimes houses a hundred numbers, and sometimes houses just a few?  You could create an array of size 100, but most of the time that space would be wasted.  That's when you want to use a linked list, where you ask for new memory only at the moment that you actually need it.

I'm not dedicating this whole post to the implementation fundamentals of lists, but interested beginners should go check out the Wikipedia article to find out how this works.  (Sidebar: While relying on Wikipedia for information about controversial topics is often unwise, most of the technical topics that are covered are really good.)
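For the impatient, here's a minimal sketch of the idea in Java (the names are mine, not a standard API): each value lives in its own little node, and a node is allocated only at the moment an element is actually added.

```java
public class IntList {
    // One node holds one value plus a reference to the next node.
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    Node head;   // first node, or null if the list is empty
    int size;

    // Allocate exactly one new node, only when an element is added.
    void addFirst(int value) {
        Node node = new Node(value);
        node.next = head;   // the new node points at the old front of the list
        head = node;
        size++;
    }

    public static void main(String[] args) {
        IntList list = new IntList();
        list.addFirst(3);
        list.addFirst(2);
        list.addFirst(1);
        System.out.println(list.size);        // 3
        System.out.println(list.head.value);  // 1
    }
}
```

In practice you'd usually reach for the standard library's java.util.LinkedList rather than rolling your own, but writing one by hand makes the memory behavior obvious.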

Besides linked lists, there are lots of other data structures that you can use depending on your situation:
  • A tree (which may or may not be binary) will hierarchically organize information for you, much like the folder structure on your computer does, shortening the search time as long as you know where you are going.
  • A hash table or map is a structure which will find a value associated with a key, usually very quickly.  An example would be a dictionary search: you supply a word, and the program would retrieve a definition.
You can write your own versions of these structures, or if your language supports it, use predefined classes that create common structures.
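For example, the dictionary lookup described above takes only a few lines with Java's built-in map classes (the words and definitions here are mine, abbreviated for space):

```java
import java.util.HashMap;
import java.util.Map;

public class Dictionary {
    // Build a tiny dictionary: each key is a word, each value its definition.
    static Map<String, String> buildDictionary() {
        Map<String, String> definitions = new HashMap<>();
        definitions.put("aardvark", "a burrowing, ant-eating African mammal");
        definitions.put("zymurgy", "the chemistry of fermentation");
        return definitions;
    }

    public static void main(String[] args) {
        Map<String, String> definitions = buildDictionary();
        // Looking up a value by key is, on average, constant time,
        // no matter how many entries the map holds.
        System.out.println(definitions.get("zymurgy"));
    }
}
```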

Understanding what purpose the various structures serve, and when to use each one, is a key skill in programming interviews.  Often when you are asked "How would you solve this problem?" the best answer is not to blurt out the first notion that comes into your head, but to start applying data structures to model the problem space: lists (or more specifically, stacks or queues), trees (binary or otherwise), and tables (sometimes you can just assume the existence of a database, which is centered around associative tables).

When I hear a problem that lends itself to this, I usually make a beeline to the whiteboard and start thinking out loud: "You're asking about a list of items, so let's describe what's in an item first... then build a linked list out of items..."  Then I'll be either writing code to illustrate what I'm thinking, or (if the interview is shorter) just sketch out diagrams so that the interviewer understands the description and will probably accept that I know how to implement it.

Software is built a piece at a time.  If you start explaining how you visualize the problem in your head, you can give a much better insight into how you think than if you just start solving the problem directly.  In fact, if you start off strong with this approach but then go off on the wrong track, often the interviewer will be eager to guide you towards his concept of the solution because he's being carried along with your thought process.  This often changes the dynamic of the interview entirely.  Instead of being a room with an interrogator and a suspect, the interviewer may start thinking of himself as your ally and not your judge.  And that's exactly where you want to be when you're looking for work.

Digression's over.  Next time I'll illustrate this when I get back to decoding HackMaster tables.

Thursday, March 12, 2009

Big-O Notation 3: Showing off how you think

Some job interviewers ask you to describe your career history in great detail.  Some want you to explain the most obscure facets of the language you'll be working with.  And some will give you a puzzle and ask you to design a program that solves it.

This last category is by far my favorite sort of opportunity in an interview.  I'm a programmer because I love solving puzzles, so showing off that love is far more likely to make me look good than struggling under pressure to remember whether it's ArrayList or Vector that is thread-safe.  (Tip: It's Vector.)

When you get asked a puzzle problem, remember above all else that the interviewer is much more keen to know how you think than whether you get the "right" answer.  There may be some specific technique that they are hoping you know.  But even if the answer is not immediately obvious, don't panic.  If you sit there and explain your reasoning, refining your technique out loud, you can make up a lot of ground by showing that you can think on your feet.

A good example is the question I got from the Director of Engineering at Digital Motorworks.  Not only did I get the job, but that portion of the interview went so smoothly that I wrote down the question, my answer, and his reaction to save for posterity.  Here is a recap of the problem:

You're responsible for implementing a method called "sumExists."  This method takes two arguments: an integer, and an array of integers.  The return value will be a boolean, which is true if and only if there exist two distinct elements in the array that sum to the original integer.

For instance, suppose you call the method in this way:
sumExists(5, [7, 1, 2, 6, 4])
The array contains the elements 1 and 4, and 1+4 = 5, so the method should return true.

I immediately jumped to the most obvious solution -- not because I expected it to be right, but because I wanted to find some answer quickly and then improve it.  I said:

"If you wanted to use brute force, you could just inspect every element of the array, and compare it to every other element of the array, and see if they sum to the first number.  But that would require you to perform O(n²) operations, which is dumb.  So I would like to find a better solution.

"Suppose that we sort the array.  That would take O(n*log n) time (QuickSort, remember?) to complete.  Where does that get us?  Well, then we could quickly find the largest number and the smallest number of the array and compare them to each other.  We'll put counters on the left and right ends of the array, and add those values.  If they add up to the target, then we're done.  (Note: in the example, the sorted array is [1, 2, 4, 6, 7].  The high and low numbers are 1 and 7, which add up to 8.  Too high.)

"If the resulting number is too high, then the biggest number must be too big.  So we'll move the right hand counter to the left and try adding a smaller number to the smallest.  On the other hand, if the sum is too low, then the smallest number must be too small.  So we'll move the left hand counter to the right and try again.

"By doing this we'll be able to test pairs of high and low numbers until either we find the sum we need, or the counters meet in the middle.  If the sum is correct then we return true.  Otherwise, return false.

"Scanning the list in this way requires only one pass, which takes O(n) time.  But the sort already took O(n*log n) time, and for a large array the single scan costs much less than the sort, so we ignore it.  This solution takes O(n*log n) time overall."
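That whiteboard answer can be sketched in Java like so. This is an illustration of the sort-then-two-pointers approach described above, not the code I actually wrote in the interview, and the names are mine:

```java
import java.util.Arrays;

public class SumExists {
    // Returns true if some two distinct elements of arr sum to target.
    static boolean sumExists(int target, int[] arr) {
        int[] a = arr.clone();
        Arrays.sort(a);                 // O(n log n)
        int lo = 0, hi = a.length - 1;  // counters at the left and right ends
        while (lo < hi) {               // a single O(n) scan
            int sum = a[lo] + a[hi];
            if (sum == target) return true;
            if (sum > target) hi--;     // sum too high: biggest number is too big
            else lo++;                  // sum too low: smallest number is too small
        }
        return false;                   // the counters met in the middle
    }

    public static void main(String[] args) {
        System.out.println(sumExists(5, new int[]{7, 1, 2, 6, 4}));  // true (1 + 4)
        System.out.println(sumExists(20, new int[]{7, 1, 2, 6, 4})); // false
    }
}
```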

As I said, I wrote down the response and saved it.  So this is more or less a direct quote: "I've been doing this for a long time, and there is not one person in a hundred who has given as thorough and correct an answer as you just did."

There were other difficulties with getting this job, based on my short experience with specific technologies.  I'm convinced, though, that this was the best part of my series of interviews, and it got my foot wedged pretty firmly in the door.

Writing software is fundamentally different from working on an assembly line.  You don't solve the same problem twice.  You don't write the same code twice.  If you are repeating yourself, then you are in some way failing to leverage past experience.  You should be moving the duplicated code into a common function, or a common library, or something, so you can stop working on problems that have already been solved, and get the business of making your program do something new and interesting.

Because of this non-repetitive approach, it's important to be able to think about every problem from multiple angles, to try and relate it to past experience and evaluate different kinds of solutions.  Big O Notation is one tool for being able to evaluate how you're doing.

Wednesday, March 11, 2009

Count on this

Let's switch things up a bit and just do something for fun.  How high can you count on your fingers? Ten, you say? Amateurs. I'm going to teach you how to count much, much higher... without even taking your shoes off.

However, you're going to have to learn to count in binary first. Again, this is a very basic topic to computer science students, but I hear that there are plenty of non-programmers who read this blog. For their benefit, I'm going to give a primer. If you already know binary, feel free to skip down to the next bold header.

How to count in binary

When you count by tens, the digits can be represented as wheels on the odometer of your car. The right-most wheel goes:
  • 1, 2, 3, 4, 5, 6, 7, 8, 9…
Then what happens? There is no "10" printed on the wheel. So where do we go next? The rightmost wheel flips from "9" back to "0", and as that happens, it also pushes the next wheel up by 1. So you have a 1 and a 0, and that's ten.
  • 11, 12, 13, 14, 15, 16, 17, 18, 19…
Then the first wheel again flips over from 9 to 0, which in turn advances the second wheel from 1 to 2. Twenty. Right? Don't tell me this is hard, my son Ben understands this and he's in first grade.

So anyway, things are pretty much the same from here on out, until we get up to:
  • 97, 98, 99…
And now? Well, the first wheel flips to zero, which in turn advances the second wheel. But the second wheel is already maxed out at nine, so it flips to zero, which also advances the third wheel, so now that wheel reads "1". 1, 0, 0. A hundred. Got it?

Okay, that's base 10. Binary is a similar system, except that it's got a severe shortage of numerals. Whereas we were using all the numbers from zero to nine, in binary you only have two possible values: one, and zero. Like a light switch: on, or off. Circuit open, or circuit closed. That's how a computer works. But with just ones and zeroes, you can represent every positive integer if you have the patience.

Let's imagine we have an odometer like before, but this one has very small wheels. They flip from 0 to 1. Then they flip from 1 back to 0. When they flip to 0, they advance the next wheel. Okay?

So your odometer reads:
  • 0: 00000
Advance the rightmost counter by one:
  • 1: 00001
Advance the counter again, and it flips to zero while dragging the second wheel along:
  • 2: 00010
So that's two. Ten is two, in binary. Makes perfect sense, right? Advance the right hand wheel one more.
  • 3: 00011
Next time we advance, both of the first two wheels will advance the next one over.
  • 4: 00100
One hundred is four. I think you can take it from here.

In base ten, every digit represents a value that is ten times the previous digit. So the first digit is worth one, the second digit is worth ten, the third digit is worth a hundred, the fourth is worth a thousand, and so on.

Similarly, in base two, every digit represents a value that is two times the previous digit.
  • 1 -> 1
  • 10 -> 2
  • 100 -> 4
  • 1000 -> 8
  • 10000 -> 16
Makes sense, right? You can write out any string of ones and zeroes and figure out what number they represent in binary by adding up the digits. For example,
  • 10110 = 16 + 4 + 2 = 22
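If you'd rather let a program do the adding, here's a small Java sketch of the same conversion (the class and method names are mine; Java's built-in Integer.parseInt(bits, 2) does the same job):

```java
public class Binary {
    // Walk the string left to right: double the running total for each
    // digit, then add the digit itself.  This is the same as summing
    // each '1' digit's place value (16 + 4 + 2 for "10110").
    static int fromBinary(String bits) {
        int value = 0;
        for (char c : bits.toCharArray()) {
            value = value * 2 + (c == '1' ? 1 : 0);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fromBinary("10110"));      // 22
        System.out.println(fromBinary("1111111111")); // 1023
    }
}
```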
Okay, here's the fun part.

How to count to 1,023 on your fingers

  • Hold out your hands, palms upward. Now make them fists. No fingers are extended. That's zero.
  • Extend your right thumb. The right-most digit is now set to "on". That's one.
  • Close up your thumb, but extend your right index finger. That's two.
  • Extend your thumb again, and also leave your index finger out. Three.
  • Close up your thumb and index fingers, but then extend your middle finger. That's four. It is also an obscene gesture to most people, so if you're in public, make sure your hands are under a desk or something.

That's really all there is to it.  The highest you can go with this system is 1111111111 in binary, which is 1,023 in regular counting (2¹⁰ - 1).

Other numbers that form obscene gestures to Americans: 128; 132

Numbers that may be obscene to Europeans: 6; 384; 390. They may also be interpreted as peace signs or a Richard Nixon impression.

What can you do with this? Not much. You can kill about ten to twenty minutes. You can also probably perform a cheap mind-reading trick if you have an accomplice who can also do it. ("I'm thinking of a number from one to a thousand." "583. I certainly wasn't paying attention to the funny way you're holding your hands.") And finally, you can waste a perfectly good afternoon writing a silly blog post.

By the way... if you DID take your shoes off, you could count to 1,048,575.

Tuesday, March 10, 2009

Big O Notation 2: Sorting

As a Card-Carrying Nerd (you have to mail away exactly 314 cereal box tops for your card) I find entertainment in some very unusual activities.  For example, I like to execute algorithms by hand. When I find a pack of cards lying around, I often find myself shuffling them, dealing out about 10-20, and operating a QuickSort on them. Sometimes when I have a stack of old bills and I want to eliminate duplicates while retaining the latest copy for my records, I'll even lay them out in a grid on the floor and perform a bucket sort.

Sorting algorithms are material from Intro to Computer Science 1, but I'm going to give a very condensed overview of the topic as background material to what I'm going to go over in a future post. If you're already rock solid on sorting algorithms, I suggest you skip or skim this one, and I promise I'll get to more practical discussions later.

You'll remember that in the last post on Big O, I explained that a "dumb dictionary search" (just viewing each word one at a time) requires O(n) time, while a more efficient search (which is also known as a "binary search") requires only O(log n) time, which is a lot shorter. But the binary search only works if the words are in some kind of order already. If they're all scrambled up -- if "xylophone" comes before "cereal" comes before "pheasant" -- then you have no choice but to execute a dumb search. There's no other way to find your word besides looking at every entry to make sure you didn't miss any.

Therefore, it's often helpful to sort a list of things (such as dictionary words). Of course, sorting a dictionary might take longer than actually searching it. But if you have to search the dictionary multiple times, it's worth it. You eat the cost of sorting only once, but you get to do binary searches as many times as you want after that.

Suppose we take a pack of cards and deal out "n" cards at random. Let's make this quick and pick 5. I'll imagine they came out like this:

  • 5, A, 7, K, 3

How long will it take to put these cards in the correct order? Well, naturally it can't take less than O(n) time. It's easy to see why. Suppose you had a magic function which automatically skimmed the entire list in no time at all to find the smallest number remaining. Then you could scan these cards and find the Ace first, and put it in the first spot. Then you find the 3, and put it in the second spot. And so on. Then the sequence of events would look like this:

  • A, 5, 7, K, 3 (we put the ace in the first spot, but we needed to move the 5 out of the way, so we just swap their positions)
  • A, 3, 7, K, 5 (3 swapped with 5)
  • A, 3, 5, K, 7
  • A, 3, 5, 7, K (sorted!) 

This is the most efficient sort possible, but even with our magic function you still have to move each card to the correct location. There are n cards, so it takes n moves. So no matter what happens, we can't do better than O(n). (Technically, it took "n-1" moves to finish, because once the first four cards are sorted, the king MUST be in the right place. There's nothing to swap with. But remember how I said before that constant multipliers don't matter? Subtracting a 1 definitely doesn't matter. In fact, we ignore any constants that are added to or subtracted from the outcome. Order of magnitude is all that counts.)

Unfortunately, we don't have a magic function that takes no time to find the Ace, so we actually have to read the cards in order to find it. How do we do this? Well, we could read the entire list manually, finding the Ace by inspecting every card. This is a technique known as a "selection sort," for the obvious reason that we are looking through the unsorted list to select the smallest number. Like in the dumb dictionary search, that would take O(n) reads to accomplish (technically n/2 on average, but constant multipliers don't matter).

But it's worse than that. Because we have to move n cards, AND we have to read n cards each time we move one. So we do n*n things, which means that this sort is O(n²).
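Sketched in Java, the selection sort just described looks like this (a bare-bones version, for illustration):

```java
import java.util.Arrays;

public class Selection {
    // Selection sort: for each position, scan the rest of the array
    // for the smallest remaining card, then swap it into place.
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {      // n positions to fill...
            int min = i;
            for (int j = i + 1; j < a.length; j++) {  // ...each needing an O(n) scan
                if (a[j] < a[min]) min = j;
            }
            int t = a[i]; a[i] = a[min]; a[min] = t;  // swap the smallest into place
        }
    }

    public static void main(String[] args) {
        int[] cards = {5, 1, 7, 13, 3};
        selectionSort(cards);
        System.out.println(Arrays.toString(cards)); // [1, 3, 5, 7, 13]
    }
}
```

The nested loops are the O(n²) staring you right in the face.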

Terrible. Remember, our dictionary had 170,000 words. You square that, and you're doing almost thirty billion operations. There must be a better way.

There is, and one example is QuickSort. QuickSort is a recursive algorithm that is a little bit tedious to explain in a blog post, but it's so successful that it is implemented as a standard feature of many languages. When you type "sort @mylist" in Perl, you are invoking a highly tuned library sort of this general family (older Perls used a QuickSort under the hood; newer ones use a merge sort). If you have some time to kill, I recommend that you look up QuickSort on Wikipedia, lay out some cards, and try to figure out how it works. It's kind of cool.
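If cards aren't handy, here's a minimal QuickSort sketch in Java (using the simplest pivot choice; real library implementations are considerably more tuned):

```java
import java.util.Arrays;

public class Quick {
    // QuickSort: pick a pivot, move everything smaller than it to the
    // left, drop the pivot in between, then sort each side the same way.
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;            // zero or one card: already sorted
        int pivot = a[hi];               // simplest choice: the last element
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {          // smaller than the pivot: move it left
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;  // pivot lands between the halves
        quickSort(a, lo, i - 1);
        quickSort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] cards = {5, 1, 7, 13, 3};
        quickSort(cards, 0, cards.length - 1);
        System.out.println(Arrays.toString(cards)); // [1, 3, 5, 7, 13]
    }
}
```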

QuickSort usually requires a time of O(n*log n), and you can't do much better than that. Well, there are some tricks you can use to speed it up if you know exactly what sort of data you're dealing with, even getting pretty close to O(n) in some cases. But for the most part, you're stuck with n*log n. And this makes sense: In the dictionary example we discovered that you can find a word in O(log n) time, and it takes O(n) time to move all the numbers, so in the best case we are going to find a number as quick as possible and then move it.

If you sort a 170,000 word dictionary in O(n*log n) time, how long will it take?  Log₂ 170,000 is about 17, and multiplied by 170,000 itself that's about 3 million operations.  That's not fast, but it's nowhere near as bad as 30 billion.

This is the last purely academic post I'm going to write about Big O notation. By now, I'm sure some readers are asking "What's with all this theory? Am I ever going to use this professionally?"  My answer is: "You might.  For one thing, you can use it to show off in job interviews."  Next time I post about Big O, I'll explain how.

Tuesday, March 3, 2009

Speed up your programs with Big O (part 1)

Part of what inspired me to do this blog was that I found myself explaining "Big O Notation" to two coworkers this week. My coworkers are smart people, but I was a little surprised to find that some aspects of computer science aren't common knowledge.

My intent is to make this blog accessible to software amateurs, but many readers will probably have heard this stuff before. Hopefully I can at least make the description entertaining, because I think it's an important one to be familiar with. If you happen to be one of those people who dimly remembers this topic from your "Intro to Computer Science" class, but forgot it promptly, it's time to learn it again. It matters.

Job interviewers often ask questions where they require you to make up a program that solves a programming puzzle. I have impressed the hell out of numerous interviewers by making up a solution, analyzing the solution in Big O, and then coming up with a more efficient solution. I didn't always get those jobs, but the feedback was always "You seem like an excellent candidate, very solid on the fundamentals, but you don't know enough of the specific technologies we work with." When I got the job at Digital Motorworks last year, the director of engineering flat-out told me that mine was the best answer he ever heard to his question. So pay attention!

A "dumb" dictionary search

Suppose you are looking for a word in a printed dictionary. There are two ways you can do it, a dumb way and a smart way. The dumb way is: read the first word. Is that the word you want? No. Go to the next word. Is that the word you want? No. Go to the next word. And so on.

I just looked up how many words are in a typical dictionary. It's about 170,000. So how long will it take you to find the word you want? Well, it depends, of course. If you are looking for "aardvark," then you will have to look at only one word, or just a few. If you are looking for "zymurgy," then you will have to look at nearly all 170,000. On average, you can expect to read through 85,000 words: half of 170,000.

Get Smart

Nobody reads the dictionary that way, of course. Think about how you would really look up a word, such as "program." Open the book to the middle. You find the word "mouse." That is before the word you want, so divide the right side of the dictionary in half, pick another page, and look again. "Teacher." That is after the word you want, so divide the space between "mouse" and "teacher" in half and look again. And so on.

This technique is much more efficient than the dumb search, because it takes advantage of the fact that the dictionary is sorted for you. How long does it take? When you first open the dictionary, you are splitting the search space in half: instead of 170,000 words, you now have 85,000 words to look at. Then you split the search space in half again. With each step, you cut the number of words remaining in half. So here's how it goes:

  1. 170,000
  2. 85,000
  3. 42,500
  4. 21,250
  5. 10,625
  6. 5,312
  7. 2,656
  8. 1,328
  9. 664
  10. 332
  11. 166
  12. 83
  13. 41
  14. 20
  15. 10
  16. 5
  17. 2
  18. 1
18 words we looked at, instead of 85,000. Much better, right?

170,000 is a very specific number of words for a dictionary to have, so let's generalize this. The number of words in the dictionary is "n". How long did the dumb search take? We read on the order of n/2 words. Therefore, we say that solution took O(n/2) operations. That's all "Big O" means: "O" stands for "Order of." Nothing magical here, right?

How about the smart search? It took 18 searches, because
2¹⁸ > 170,000
(17 wouldn't be enough. Check.) If you fiddle with that equation, you'll find that
18 > log₂ 170,000

So, to speak more generally, the smart dictionary search is O(log₂ n).

With me so far?  Now I'm going to get slightly tricky.

When I said the dumb search was O(n/2), the "/2" part is a constant multiplier: it represents the number 1/2, which is always the same.  You take O(n) and you multiply it by one half.  But really, we don't need to be that specific.  We don't know exactly how long each of the operations will take, so when we are estimating compute time, it's pointless to factor in a constant.  For instance, if you do 100 operations that each take half a second, or you do 50 operations that each take a whole second, it amounts to the same thing.

So, by convention, we ignore constant multipliers and don't include them.  Instead of saying that the dumb search is O(n/2), let's just say that it takes O(n); all we care about is finding the right order of magnitude anyway.

Similarly, I said the smart search takes O(log₂ n).  But the "2" subscript in "log₂" is unnecessarily specific too; all we care about is that it's a log.  So the smart search is O(log n) time.
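The smart search described above translates into code quite directly. Here's a sketch over a toy sorted array (the names are mine; Java's standard library offers the same thing as Arrays.binarySearch):

```java
public class SmartSearch {
    static int reads = 0;  // how many words we've "looked at", for illustration

    // Binary search: repeatedly look at the middle of the remaining
    // range and throw away the half that can't contain the word.
    static boolean contains(String[] sorted, String word) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            reads++;                      // one word read per step
            int cmp = word.compareTo(sorted[mid]);
            if (cmp == 0) return true;
            if (cmp < 0) hi = mid - 1;    // word comes earlier: keep the left half
            else lo = mid + 1;            // word comes later: keep the right half
        }
        return false;
    }

    public static void main(String[] args) {
        String[] dict = {"aardvark", "cereal", "mouse", "pheasant",
                         "program", "teacher", "zymurgy"};
        System.out.println(contains(dict, "program")); // true
        System.out.println(reads + " reads");
    }
}
```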

Why should you care?

Let's consider some specific values of "n" and see how much time we save by using an O(log n) search instead of an O(n) search.
n=10 -> log n = 1
n=100 -> log n = 2
n=1,000 -> log n = 3
n=10,000 -> log n = 4
n=100,000 -> log n = 5
n=1,000,000 -> log n = 6

Obviously, if you are searching ten values, you can probably get away with scanning the entire list, but if you are searching a million values, you would rather do a handful of reads instead of a million.  (The table uses base-10 logs; a binary search of a million items actually takes about 20 reads, since log₂ 1,000,000 ≈ 20, but as we just established, the base is only a constant factor.)

As obvious as that is when I put it like that, I can tell you that there have been a distressing number of times when I've sat down and analyzed code to discover that something which should easily take O(log n) time is written in such a way that it takes O(n) or even O(n²).  The reason for this is that the guy who came before me started out with a small list of items, and then failed to plan for a time when the data would grow out of control to a thousand items or more.

You should be planning for this stuff up front.  You should be asking yourself "How many values am I dealing with now?  How many values can I imagine dealing with in the future?"  If your program is dealing with information about cars, for example, you should think about how many cars there are in the world, and what fraction of those cars will eventually be in your database if you're a big success.

Remember that if you don't plan for this, you will have to put in the effort to change it.  But also, by the time it gets to that point, you will have forgotten where the O(n²) function is, so you'll have to put in the effort of finding it and decoding the logic too.  That's why you want to be mindful of future bottlenecks for your code as early as possible.

Next time I get around to posting about Big O, I'll be writing a little about sorting routines, interview challenges, and strategies to make basic logic more efficient -- some or all of those, as time and post length allow.

Monday, March 2, 2009

I dream of recursion

This is a true story. I have told it to my programming classes every time I introduced the topic of recursion.

When I was a little kid, there was a toy that I desperately wanted. We'll just gloss over what it was because it's not in any way important to the story. So one night --

All right, all right, stop whining. I'll tell you. It was... a Ronald McDonald doll. That whistled. Yes, it had a little plastic whistle, and if you squeezed it then it would whistle. As seen on TV. Why in the world would I want that? I have no idea. I just remember that it seemed very important at the time.

So I wanted this Ronald McDonald doll. Very much. But I knew that my dad, who has always been a practical minded guy, would find this toy, well, stupid. So I didn't have the courage to ask him for it. But then to my surprise, one evening I came home from school, and my dad said, "Russell, today we're going to Toys R Us to pick up that Ronald McDonald you've been wanting."

Needless to say, I was ecstatic. We got in the car, drove to the store at the mall, picked up that toy, brought it home, and...

...Then I woke up.

As bitterly disappointed as I was to learn that it was all just a dream, I was filled with a new confidence. I just knew that my dad was going to buy that toy for me, because the dream had so foretold. So I went into the breakfast nook, where I found my dad waiting. I asked him if he would get me the Ronald McDonald doll that whistles, and he said, "Surprise! I already got it for you! There it is!"

...And I woke up.

But after having been through this experience twice, I was more certain than ever that I would be getting my toy. Sure enough, I went to find my dad, asking "Did you REALLY get me that toy?" He said "I sure did! Look!" I actually got to play with the thing for a while, and then -- all together now,

...I woke up.

I think you get the idea. It feels like the sequence repeated itself about ten times or so, although of course there's no way to tell. Each time the details of the nested dream were a little different, but the end result was always the same: I was confident that I'd get the toy, I did get the toy, and I woke up. And each time I woke up, the world seemed a little more "real" to me than the one in the previous dream.

Except for the last time. When I really woke up, I knew I was really awake. I didn't even need to bother asking my dad about it in the real world. I know reality.

I never did get that toy. That probably explains exactly why I am the bitter cynic I am today.

The tricky part about this story is still how to translate it from amusing anecdote to actual concept of computer science. If this were true recursion, I would probably remember going to sleep ten times in a row at the beginning, then getting the toy for the first time, and so on. Instead, I started in mid-dream. But nobody ever remembers going to sleep anyway, so we can assume that happened in there somewhere. So if I had to pseudo-code this story, it would look like this:

class YoungRussell {
    void sleep() {
        lieDown(bed);
        fallAsleep();
        dream();

        // perform operations associated with
        // waking up here
    }

    void dream() {
        if (!someTerminatingCondition()) {
            // no actions before falling asleep
            // again in the dream
            sleep();
        }
        StupidToy ronald = new StupidToy();
        obtain(ronald);
        playWith(ronald);
    }

    // ...
    // auxiliary methods here
}
Wait... where's the actual waking up routine, you ask? Well, you don't need one. Reaching the end of the "dream" routine automatically drops you back into the conclusion of the "sleep" routine, and when "sleep" ends, you just wake up.

I have always found this the hardest thing for people to understand about recursion.
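To watch the drop-back-out behavior happen in runnable form, here's a small sketch (the depth limit and messages are mine) that logs each nested "dream" going in, and then each wake-up unwinding in reverse order:

```java
import java.util.ArrayList;
import java.util.List;

public class Dreams {
    // Each call "falls asleep" by recursing; only after the innermost
    // dream ends do the outer calls resume, waking up in reverse order.
    static void sleep(int depth, List<String> log) {
        if (depth == 0) return;        // terminating condition: no deeper dream
        log.add("asleep at depth " + depth);
        sleep(depth - 1, log);         // the dream inside the dream
        log.add("awake at depth " + depth);
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        sleep(3, log);
        for (String line : log) System.out.println(line);
        // asleep at depths 3, 2, 1 -- then awake at depths 1, 2, 3
    }
}
```

No explicit "wake up" call anywhere: returning from the recursive call is the waking up.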