One of my favorite puzzle books, "Aha! Gotcha" by Martin Gardner, used the game of "Chuck a Luck" to illustrate some basic probability concepts. You can read the rules and history here.
In a nutshell, you pick a number from one to six, place a $1 bet, and roll three dice. If your number doesn't come up, you lose the dollar. If it does come up, you keep your dollar and win $1 for each die that shows your number.
A program implementing these rules might produce a session like this:
What number do you bet on? 3
You rolled: 4 6 2
You lose $1!
What number do you bet on? 2
You rolled: 2 1 5
You win $1!
What number do you bet on? 1
You rolled: 1 3 1
You win $2!
Gardner concocted a rationalization for a gambler who believed that the game favors the player, and then challenged the reader to show why the argument is faulty. But if you actually write such a program, you can show empirically that the game is stacked against you.
I suggest that you do this in four phases.
Phase one: Implement a program that has output similar to the above.
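Here's a minimal Python sketch of phase one (the function names are my own, not part of the exercise). Calling `play_round()` produces a session like the transcript above:

```python
import random

def payout(bet, dice):
    """Net result of one round: -1 if the bet number never
    appears, otherwise +1 for each die that shows it."""
    matches = dice.count(bet)
    return matches if matches > 0 else -1

def play_round():
    """One interactive round, matching the sample output."""
    bet = int(input("What number do you bet on? "))
    dice = [random.randint(1, 6) for _ in range(3)]
    print("You rolled:", *dice)
    net = payout(bet, dice)
    if net < 0:
        print("You lose $1!")
    else:
        print(f"You win ${net}!")
```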
Phase two: Keep track of how much total money the player has won or lost.
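One way to structure phase two (a sketch, with names of my own choosing) is to pull the bookkeeping into a small helper so the interactive loop stays simple:

```python
import random

def settle(total, bet, dice):
    """Apply one round's result to the running total.
    Returns (new_total, net), where net is -1 on a miss,
    otherwise +1 per matching die."""
    matches = dice.count(bet)
    net = matches if matches > 0 else -1
    return total + net, net

def play_session():
    """Interactive loop: play until the player enters 'q',
    reporting the running total after each round."""
    total = 0
    while True:
        answer = input("What number do you bet on? (q to quit) ")
        if answer.lower().startswith("q"):
            break
        dice = [random.randint(1, 6) for _ in range(3)]
        print("You rolled:", *dice)
        total, net = settle(total, int(answer), dice)
        print(f"You {'win' if net > 0 else 'lose'} ${abs(net)}!")
        sign = "-" if total < 0 else "+"
        print(f"Total so far: {sign}${abs(total)}")
```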
Phase three: Ask the player how many games they want to play automatically. Like this:
What number do you bet on? 1
How many games do you want to play? 10
[optional: display the outcome of each game]
...You lost a total of $3!
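The automatic-play loop of phase three might look like this (again a sketch; the function names are mine):

```python
import random

def play_many(bet, n_games):
    """Play n_games automatically, betting on `bet` each time.
    Returns the net amount won (negative means a loss)."""
    total = 0
    for _ in range(n_games):
        dice = [random.randint(1, 6) for _ in range(3)]
        matches = dice.count(bet)
        total += matches if matches > 0 else -1
    return total

def report(total):
    """Print a summary line like the one in the sample output."""
    if total < 0:
        print(f"...You lost a total of ${-total}!")
    else:
        print(f"...You won a total of ${total}!")
```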
Phase four: Have your program play a very large number of games, and calculate the average amount you lost per game.
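Phase four is a one-function simulation (a sketch under my own naming):

```python
import random

def average_net(bet=1, n_games=1_000_000):
    """Estimate the average net winnings per game by simulation.
    A negative result is the average amount lost per game."""
    total = 0
    for _ in range(n_games):
        dice = [random.randint(1, 6) for _ in range(3)]
        matches = dice.count(bet)
        total += matches if matches > 0 else -1
    return total / n_games
```

You can check the simulation against the exact expectation: of the 216 equally likely rolls, 125 miss your number (lose $1), 75 show it once (win $1), 15 show it twice (win $2), and 1 shows it three times (win $3), so the expected value per game is (−125 + 75 + 30 + 3)/216 = −17/216 ≈ −$0.079. A large simulation should settle near an average loss of about 8 cents per game.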
Let me know how it goes.