In the Monty Hall Problem, Monty presents you with three doors, one of which hides a prize. He asks you to pick one door, which remains closed. Monty then opens one of the other doors that does not have the prize. This process leaves two unopened doors: your original choice and one other. He allows you to switch from your initial choice to the other unopened door. Do you accept the offer?

If you accept his offer to switch doors, you’re twice as likely to win (66% versus 33%) as you are if you stay with your original choice.

Mind-blowing, right?

The solution to the Monty Hall Problem is tricky and counterintuitive, and it tripped up many experts back in the 1980s. However, the correct answer is now well established through a variety of methods. It has been demonstrated with mathematical proofs, computer simulations, and empirical experiments, including on television by both the Mythbusters (CONFIRMED!) and James May’s Man Lab. You won’t find any statisticians who disagree with the solution.

In this post, I’ll explore aspects of this problem that have arisen in discussions with some stubborn resisters to the notion that you can increase your chances of winning by switching!

The Monty Hall problem provides a fun way to explore issues that relate to hypothesis testing. I’ve got a lot of fun lined up for this post, including the following!

- Using a computer simulation to play the game 10,000 times.
- Assessing sampling distributions to compare the 66% hypothesis to another contender.
- Performing a power and sample size analysis to determine the number of times you need to play the Monty Hall game to get an answer.
- Conducting an experiment by playing the game repeatedly myself, recording the results, and using a proportions hypothesis test to draw conclusions!

I won’t re-explain the logic behind how the Monty Hall Problem works in this post. To learn about that, read my other post about the Monty Hall Problem.

## Motivations for Writing this Post

Despite the universal acceptance among statisticians, there are stubborn resisters to the solution. They’re convinced that you have a 50% chance of winning by switching rather than a 66% chance. My response has been, “Test it empirically by playing it yourself!” It’s a simple enough experiment to perform on your own. You just need a friend willing to play the game with you multiple times. It sounds fun!

However, it dawned on me that, in the minds of the doubters, we need to establish that the winning percentage is 66% rather than 50%. Of course, in reality, the choice is between 33% and 66%, but they disagree with that notion. The 16-percentage-point gap between 50% and 66% is small enough to be difficult to detect through an experiment. A small sample size won’t reveal that difference with an acceptable degree of confidence.

I’ll use this problem to highlight various statistical concepts that relate to answering this question. Plus, it gives me another opportunity to use the Statistics 101 simulation giftware, which I love using and recommend! It’s free to use, but they do ask for a donation. I’ve used this application to illustrate how both bootstrapping and the central limit theorem work in statistics.

I’ll start by using Statistics 101 to simulate the Monty Hall Problem thousands of times to show the solution that way. Then, I’ll highlight the difficulty in discriminating between a 50% and 66% chance of winning using sampling distributions. I’ll also perform a power and sample size analysis to determine what sample size I should recommend to the doubters! Finally, I’ll conduct an empirical experiment and use a hypothesis test.

## Simulating the Monty Hall Problem

Conveniently, the creators of Statistics 101 include a variety of example scripts with their software, including one that simulates the Monty Hall Problem. Now, I’ve been told by deniers that any simulation that shows switching produces a 66% chance of winning must be a case of “garbage in, garbage out.” So, I’ll briefly explain the script below. It’s pretty straightforward . . . and garbage free!

- Defines the door arrangement.
- Specifies the number of times to play the game.
- Randomly assigns the prize to one of the doors.
- Has the simulated contestant randomly choose a door.
- Records the result for both staying and switching.

After that, it’s a simple matter to take the two counts of wins and convert them to winning percentages for both staying and switching.
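For readers who’d like to see the logic in code, here’s a minimal Python sketch of those same steps (my own version, not the Statistics 101 script):

```python
import random

def play_monty_hall(n_games: int, seed: int = 1) -> tuple[float, float]:
    """Play the Monty Hall game n_games times and return the
    winning percentages for staying and for switching."""
    rng = random.Random(seed)
    stay_wins = switch_wins = 0
    for _ in range(n_games):
        prize = rng.randrange(3)    # prize placed behind a random door
        choice = rng.randrange(3)   # contestant picks a random door
        # Monty opens a door that is neither the choice nor the prize.
        # (When the choice IS the prize, picking the first eligible door
        # instead of a random one doesn't change the win rates.)
        opened = next(d for d in range(3) if d != choice and d != prize)
        # The switch door is the one that is neither chosen nor opened
        switched = next(d for d in range(3) if d != choice and d != opened)
        stay_wins += (choice == prize)
        switch_wins += (switched == prize)
    return 100 * stay_wins / n_games, 100 * switch_wins / n_games

stay_pct, switch_pct = play_monty_hall(10_000)
print(f"Stay: {stay_pct:.2f}%  Switch: {switch_pct:.2f}%")
```

Because Monty only ever opens a losing door, staying and switching always have opposite outcomes, so the two percentages must sum to exactly 100%.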

So, with no further ado, let’s run the Monty Hall game 10,000 times. And the answer is . . .

After playing the game 10,000 times and switching every time, the simulated contestant won 66.36% of the time. That’s not far from the predicted percentage at all!

## Distinguishing between Winning Percentages of 50% and 66%

Case closed, right? I can still hear some complaints of “garbage in, garbage out,” even though that’s not true. Consequently, let’s suppose we still want to demonstrate this solution empirically. As I mentioned earlier, it can be difficult to detect that small difference.

In comparing winning percentages of 66% and 50%, the difficulty is that we’re dealing with binary data (win/lose) and a binomial distribution of outcomes. 50% and 66% are just the expected values, the average winning percentages you’d see over many repetitions. However, just like flipping a coin multiple times, there’s a distribution of outcomes around the mean. If you flip a coin 10 times, you don’t expect it to land heads exactly 50% of the time. If you play the game, always switch, and end up winning 58% of the time, how do you determine whether the expected winning percentage was 50% or 66%?
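One way to frame that question is to compare how probable a 58% result is under each hypothesis. Here’s a quick Python illustration using the binomial formula (the 50-game sample size is my own assumption for the example):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k wins in n independent games."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Suppose you played 50 games, always switched, and won 29 (58%).
n, k = 50, 29
p_if_50 = binom_pmf(k, n, 0.5)    # likelihood under a 50% game
p_if_67 = binom_pmf(k, n, 2 / 3)  # likelihood under a 2/3 game
print(p_if_50, p_if_67)
```

The two likelihoods come out close together, which is exactly why a 58% result from a small experiment can’t settle the debate.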

That’s what we’re going to tackle in this post!

## Graphing the Sampling Distributions for the Monty Hall Problem

I could use the binomial distribution to illustrate how this works. However, that approach requires entering the expected winning percentage for switching, which is exactly what’s under contention. So, in the spirit of not assuming the 66% value to be true, I’ll continue using the simulation software, having it play the game and create the sampling distributions from the game outcomes. To do this, I’ve modified the supplied script to run samples of different sizes many times. This process lets us see the distribution of sample winning percentages, similar to the approach I used in my posts about bootstrapping and the central limit theorem.

These sampling distributions show the spread of sample winning percentages you can expect for samples of different sizes. I’ll use sample sizes of 10, 25, 50, 100, and 400. Keep in mind that I’m running each sample size 100,000 times. For instance, the software runs the experiment with 10 trials per sample, calculates the winning percentage over those 10 trials, saves it, and repeats that process 100,000 times. Then it graphs the distribution of winning percentages for all 100,000 samples. Next, I modify the script to do the same with samples that contain 25 trials, and so on.
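If you want to build sampling distributions without Statistics 101, the same idea can be sketched in Python. For brevity, this version draws win/lose outcomes directly at the stated rates (1/2 and 2/3) instead of playing out the doors, so unlike the approach above it does assume the 2/3 figure:

```python
import random
from collections import Counter

def sampling_distribution(sample_size: int, n_samples: int, p_win: float,
                          seed: int = 1) -> Counter:
    """Draw n_samples samples of sample_size games each, where every game
    is won with probability p_win, and tally the sample winning percentages."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        wins = sum(rng.random() < p_win for _ in range(sample_size))
        counts[100 * wins / sample_size] += 1
    return counts

# Always-switch Monty Hall game (2/3) vs. a coin toss (1/2), 10 games per sample
monty = sampling_distribution(10, 100_000, 2 / 3)
coin = sampling_distribution(10, 100_000, 1 / 2)
```

Tallying `monty` and `coin` reproduces the red and grey bars for the sample size of 10; changing `sample_size` regenerates the other graphs.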

With all that in mind, the following graphs show the sampling distributions of winning percentages for the Monty Hall game as it is actually played in the simulator and for an alternative game where the expected winning percentage is 50%. The graphs show how the outcomes can overlap, which makes distinguishing the correct winning percentage difficult. This problem is particularly evident with smaller sample sizes, where the distribution spreads are wider. As you increase the sample size, the spreads narrow, and it becomes easier to determine which process produced an observed outcome.

Download my script, MontyHallSamplingDistributions, if you want to try it yourself in Statistics 101. See my comments starting on line 14 for notes about using it.

## Understanding the Sampling Distribution Graphs

In the following graphs, each bar represents a winning percentage. These are discrete distributions because there are a limited number of possible winning percentages. For example, with a sample size of 10, winning percentages can be only (1/10) 10%, (2/10) 20%, (3/10) 30%, and so on. The height of the bar represents the probability of obtaining that particular winning percentage.

The grey bars represent the 50%-chance process I’ve added to the script, which is essentially a coin toss. The red bars represent the simulated Monty Hall game for a player who always switches. Notice how the grey bars center on 50% while the red bars center on 66%.

The goal of these graphs is to show how large of a sample size we need to conclude that switching in the Monty Hall game causes you to win more than 50% of the time. If you were performing an experiment, you would set the number of trials you’ll conduct in advance, run all the trials, and calculate one winning percentage. You can then locate the percentage you obtain on these charts and determine which distribution is more likely to have produced the result you observe. It’s starting to become kind of like a hypothesis test—which we’ll get to later in this post!

Please note that the axes’ scaling changes. I’ve been unable to find a way to keep it consistent for easier comparisons!

### Monty Hall Problem Sample size = 10

This graph shows how the two distributions overlap substantially, which indicates that you’d frequently obtain similar results for expected winning percentages of 50% and 66%. In fact, the probability of winning 60% of the time is nearly equal for both distributions. It’s too small of a sample to be able to distinguish which one is correct unless you get a very low or very high value.
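That near-equality at 60% is easy to verify with the binomial formula (a quick check that assumes win rates of 1/2 and 2/3):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k wins in n independent games."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 6 wins in 10 games (a 60% winning percentage)
p_coin = binom_pmf(6, 10, 1 / 2)   # ≈ 0.205
p_monty = binom_pmf(6, 10, 2 / 3)  # ≈ 0.228
```

Both processes put roughly a one-in-five chance on a 60% result, so observing it tells you almost nothing.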

If you had a winning percentage of 80% or higher, it’s improbable that the 50% process produced it because the grey bar is so short at that percentage. Conversely, it’s implausible that the 66% Monty Hall process would yield a 30% winning percentage because the red bar is so short at that percentage.

### Monty Hall Problem Sample size = 25

As the sample size increases, the sampling distributions start to narrow. While there is still a significant degree of overlap, we’re starting to see two distinct distributions. However, either distribution is roughly equally likely to produce winning percentages near 60%. You’d need a winning percentage greater than 68% or less than 48% to decide.

### Monty Hall Problem Sample size = 50

The sampling distributions continue to narrow as the sample size increases. However, there is still a moderate amount of overlap. Sample percentages greater than 62% are very unlikely to occur if the expected percentage for switching is truly 50%. Notably, the sample size is becoming large enough to determine that the Monty Hall distribution centered on 66% is more likely to have produced a percentage even when that percentage is smaller than the expected value (62% vs. 66%). Large sample sizes are great! On the other hand, winning percentages less than 56% are unlikely to occur if switching in the Monty Hall game has an expected winning percentage of 66%. The zone of uncertainty between those two values continues to shrink.

### Monty Hall Problem Sample size = 100

With this large a sample, there is only a small overlap. There’s just a 4.4% chance that a 50% process would produce a winning percentage higher than 58% with a sample size of 100 trials, while the Monty Hall process centered on 66% has only a 4.3% chance of producing a winning percentage less than 58%.
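Those overlap figures come from the simulated distributions; an exact binomial calculation (assuming win rates of 1/2 and 2/3) lands in the same neighborhood:

```python
from math import comb

def binom_tail_above(k: int, n: int, p: float) -> float:
    """P(X > k) for a binomial(n, p) count of wins."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1, n + 1))

n = 100
# Chance a 50% process wins more than 58 of 100 games
p_upper = binom_tail_above(58, n, 0.5)
# Chance a 2/3 process wins fewer than 58 of 100 games
p_lower = 1 - binom_tail_above(57, n, 2 / 3)
print(p_upper, p_lower)
```

Both tails come out in the low single-digit percentages, confirming that 100 trials leaves only a sliver of ambiguous territory.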

### Monty Hall Problem Sample size = 400

Finally, with a huge sample size of 400 trials, you can clearly see the separate distributions. It’s virtually impossible for a 50/50 coin-flip process to produce a winning percentage greater than 55% with a sample size of 400. Meanwhile, it’s almost impossible for the Monty Hall distribution to produce a winning percentage lower than 62%. Neither distribution produces many outcomes in this no-man’s land.

## Power and Sample Size Analysis for the Monty Hall Problem

Ok, we’ve had our fun playing with the sampling distributions. Now, let’s get ready to perform a hypothesis test to answer this question. Hypothesis tests use sample data to draw conclusions about a population.

We’re performing an experiment, and our strategy will always be to switch. We’ll use a one-sample proportion test to determine whether we can reject the notion that the Monty Hall game follows the distribution that centers on 50%.

Before we perform this experiment, let’s do a power and sample size analysis to determine a sample size that will give us sufficient power to detect this difference. I’ll estimate the sample size necessary to produce 80%, 90%, and 95% power.

I’ve told the software that I want to perform a One Proportion test and entered estimates for our expected proportion (0.67) and the comparison proportion (0.5). If this difference exists in the population, statistical power indicates the probability that our experiment will detect it. Power analyses are beneficial because they help you collect a sample large enough to detect a difference if it exists, while stopping you from collecting an overly large sample, which costs extra time and money.

The power and sample size results are below.

The output indicates that to obtain statistical power of 0.8, 0.9, and 0.95, we’d need sample sizes of 66, 87, and 107, respectively.
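For anyone without power-and-sample-size software, the standard normal-approximation formula reproduces these numbers; here’s a sketch (other packages may use exact or arcsine-based methods that differ slightly):

```python
from math import ceil, sqrt
from statistics import NormalDist

def one_prop_sample_size(p0: float, p1: float,
                         alpha: float, power: float) -> int:
    """Normal-approximation sample size for a two-sided
    one-proportion test of p1 against the null value p0."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1)))
         / (p1 - p0)) ** 2
    return ceil(n)

for power in (0.80, 0.90, 0.95):
    print(power, one_prop_sample_size(0.5, 0.67, 0.05, power))
```

This prints 66, 87, and 107 for powers of 80%, 90%, and 95%, matching the output above.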

Statistical power of 80% is a standard benchmark. However, because obtaining a larger sample doesn’t cost more money, just a bit more time, I’m looking into higher levels of power. In the end, I’ll go with an even 100 trials, which produces a power of 93.7% (not shown).

**Related posts**: Statistical Hypothesis Testing Overview and Estimating a Good Sample Size for Your Study using Power Analysis

## Performing the Monty Hall Experiment at Home

My daughter and I used three playing cards to play the Monty Hall game at home. One card represented the prize door, and the other two were non-prize doors. My daughter was Monty, and I was the contestant. We played it 100 times.

For each trial, she’d randomize the three cards and then place them face down in a row while noting the location of the prize. I’d pick my card, and she’d turn over one of the other two cards while taking care not to reveal the prize. I’d always switch to the other card and record the results, which you can download in this CSV data file: MontyHallExperiment.

In our experiment, I won 64 times out of 100. I performed the One Proportion test by telling the software to compare the sample results (0.64) to the comparison proportion of 0.5.

The hypotheses for this test are the following:

- Null: The winning percentage for always switching equals 50%.
- Alternative: The winning percentage for switching does not equal 50%.

If we obtain a p-value that is less than our significance level (0.05), we can reject the null hypothesis and conclude that the expected winning percentage for switching in the Monty Hall game does not equal 50%.
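Here’s what that test looks like as a quick normal-approximation sketch in Python (exact binomial methods give a similarly small p-value):

```python
from math import sqrt
from statistics import NormalDist

# Observed proportion, null proportion, and number of trials
p_hat, p0, n = 0.64, 0.50, 100

# One-proportion z-test: standard error uses the null proportion
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)   # z ≈ 2.8
p_value = 2 * (1 - NormalDist().cdf(z))      # two-sided p ≈ 0.005
print(z, p_value)
```

A p-value around 0.005 falls well below the 0.05 significance level.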

The low p-value indicates that we can reject the null hypothesis and conclude that the expected winning percentage for switching in the Monty Hall Problem is not 50%. That’s consistent with what we know about the probabilities in the underlying game.

In closing, I do not doubt that the expected winning percentage is genuinely 66%. You can prove it using probabilities and logic. I think most of us accept this truth even though it’s admittedly a head scratcher at first! This problem was a useful way to illustrate various principles involved in hypothesis testing and a great way to showcase the proportions hypothesis test.

If you still don’t accept it, grab a friend, three playing cards, and play the game 100 times!

Jim says

I like this explanation better: you have doors A, B, and C, and each door has a 1/3 chance of having the prize. You choose A. If someone then offered you the opportunity to switch to the other two doors, B and C, that would increase your chances to 2/3. Just because the person opens door C to reveal nothing does not change the initial 2/3 odds of choosing B and C. This way, the opening of the door does not change the universe, the odds, or the math, and for me at least it is far easier to wrap my head around. It’s not that opening the door adds information; it’s that it doesn’t add information. The odds never changed.

Jim Frost says

That is a nice explanation. Thanks for sharing!

Daryl says

I wouldn’t say it’s a specific scenario. If you take the table from your initial post, with the 9 different possible combinations, the average win rate is 50%. So logically, the average win rate over enough iterations to even out deviation will be 50%, regardless of switching or staying, which again leads to a 50% chance of winning before the game starts.

If you run thousands of tests through a simulation where you’re able to specify different percentages of switch vs stay decisions, and the overall results still show a 50% overall win rate regardless of those percentages, it means switching makes no difference, and there is a flaw in the 33% vs 66% model, which I pointed out earlier.

Jim Frost says

Hi Daryl, unfortunately that’s incorrect and it’s easy to show.

Let’s start with a balanced scenario where we have 60 total participants. 30 stay with their choice and 30 switch.

30 stay: We’d expect 30 * 1/3 = 10 winners

30 switch: We’d expect 30 * 2/3 = 20 winners

In this scenario, we have 10 + 20 = 30 winners out of 60 participants. The average win rate over both choices is 30/60 = 50%.

Now, let’s say that the decisions are unbalanced. People have read about the Monty Hall problem and more decide to switch. We have 60 participants where 15 stay and 45 switch.

15 Stay: We’d expect 15 * 1/3 = 5 winners.

45 Switch: We’d expect 45 * 2/3 = 30 winners.

In this unbalanced scenario, we have 5 + 30 = 35 winners out of 60. The average win rate over both choices is 35/60 = 58.3%. This makes sense when you think about it because more people are making the choice that produces more wins. Of course the overall win rate will go up.

As you can see, the percentages making each decision do make a difference in the final overall win rate. The more unbalanced it is toward one decision or the other, stay or switch, the closer the overall win rate approaches 1/3 or 2/3, respectively.

Of course, as I said before, you want to know the average win rate (probability) for each decision rather than an overall average. That way you can make the correct decision!
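The arithmetic above generalizes to any mix of stayers and switchers; here’s a tiny sketch:

```python
def overall_win_rate(n_stay: int, n_switch: int) -> float:
    """Expected overall win rate when stayers win 1/3 of the time
    and switchers win 2/3 of the time."""
    wins = n_stay * (1 / 3) + n_switch * (2 / 3)
    return wins / (n_stay + n_switch)

print(overall_win_rate(30, 30))   # balanced: 0.5
print(overall_win_rate(15, 45))   # unbalanced: ≈ 0.583
```

The blended rate only equals 50% in the perfectly balanced case.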

Daryl says

Hi Jim, I have to disagree with that (“At no point in this game is there a 50% chance of winning”), the average between 66% and 33% is 50%, which means that 50% of contestants win, which means your chance of winning from the outset is 50%.

Jim Frost says

Hi Daryl,

I guess if you had even numbers of people choosing to stay and switch, then you could say the overall average of actual results across both choices is 50%. However, we have no way of knowing that equal numbers choose both options. If more people pick one over the other, then that tips the average in that direction. For instance, if more people choose to switch, then the average will be closer to 67%.

But when playing the game, you don’t really want to know the overall average outcome across both choices. You want to know your chances for each option so you know what to choose. If you stay, it’s 33%, but if you switch, it’s 67%.

So, I guess I see what you’re getting at. But it’s only true in a very specific case that we don’t know is valid. And even if it is valid, it doesn’t help you decide!

Daryl says

Hi Jim, thanks for your reply. I think you’re misunderstanding what I’m trying to put across; I’m saying 50% + 50% = 100%. The point I’m making is that from the outset, before any decision is even made, the contestant has a 50% chance of winning but a 33% chance of choosing the door with the car behind it, and that is where everything goes wrong. That chance remains 50% after one incorrect door is opened (taken away). If the door being opened were random (i.e., Monty doesn’t know which door contains the car), the chance of winning would be 33% from the outset. It’s basically taking what you’re saying about Monty’s knowledge affecting the second decision (to switch or not) and applying it from the outset.

Jim Frost says

Hi Daryl,

I got your point. I’m saying that it is incorrect. Your initial door choice has a 1/3 chance of being correct and therefore you have a 1/3 chance of winning if you stick with it. 2/3 chance of winning if you switch. At no point in this game is there a 50% chance of winning. Your chance of winning is always 1/3 or 2/3 depending on your choice. Again, just because you end up with two options doesn’t mean it’s 50/50.

Daryl says

This is exactly where the models fail. Knowing that one incorrect option will always be removed gives the contestant a 1/2 chance of winning from the outset, not 1/3, which doesn’t change after the first door is opened. The models assume a 1/3 chance, see a 50% outright win rate, give the first option a 33% chance, and therefore give switching doors a 66% chance. If the host had the option of opening the door containing the car first, then the odds of winning would actually be 1/3. There are always only two outcomes, as the third option will always be removed. If you take the table with all likely scenarios in your original post and remove one of the losing options for each scenario, based on the fact that one losing door will always be removed, you’ll see the correct results.

So from the outset, you have a 1/3 chance of selecting the correct door, but a 1/2 chance of winning, as one will always be removed. Switching your decision will still give you 1/2 odds. This is provable using a model as well.

(I’m aware that I’m a bit late to the party)

Jim Frost says

Hi Daryl,

You have 1/3 chance of winning if you stay with your original choice. 2/3 if you switch. The probabilities must add to 100%. You can’t have a 50% chance of winning with one door and 66% with the other because that’s 116%!

Think of it this way. There are two groups of doors in play. Your group of one, and Monty’s group. At the outset, your original choice has a 1/3 chance of being correct because it’s one of three doors and randomly selected. Monty’s group has a 2/3 chance of containing the prize because it has the two doors that your random choice leaves behind.

Monty removes one door from play by opening. That door has zero chance of containing the prize because he’s using his knowledge and choosing it non-randomly. So, when he removes it from play by opening it, you subtract the 0 from the probability for his group. So, his remaining door retains the full 2/3 probability. You don’t change the number of doors in your group, so that doesn’t change either. Plus, both groups must add to 100%.

Hence, your original door has a 1/3 chance of winning and Monty’s door has 2/3 chance.

Just because there are two options doesn’t mean that either one is 50/50. I show how that works through the simulation in this post.
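For anyone who wants to check this themselves, here’s a small Python sketch (my own, not the Statistics 101 script). It also covers the variant where Monty opens a door at random, which shows why his knowledge matters: conditional on the prize not being revealed, switching only wins about half the time in that variant.

```python
import random

def switch_win_rate(n_games: int, monty_knows: bool, seed: int = 1) -> float:
    """Win rate for always switching. When monty_knows is False, Monty
    opens a random unchosen door; games where he accidentally reveals
    the prize are discarded."""
    rng = random.Random(seed)
    wins = played = 0
    for _ in range(n_games):
        prize, choice = rng.randrange(3), rng.randrange(3)
        others = [d for d in range(3) if d != choice]
        if monty_knows:
            opened = next(d for d in others if d != prize)
        else:
            opened = rng.choice(others)
            if opened == prize:      # prize revealed; game voided
                continue
        switched = next(d for d in others if d != opened)
        played += 1
        wins += (switched == prize)
    return wins / played

print(switch_win_rate(100_000, True))    # knowing Monty: ≈ 2/3
print(switch_win_rate(100_000, False))   # random Monty: ≈ 1/2
```

Only Monty’s informed choice pushes the switch door’s probability up to 2/3.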

Sunny Chen says

Hi Mr. Frost,

I was reading your post about the Monty Hall Problem, and it helped a lot. I find the Monty Hall Problem is similar to a problem I have, but maybe not.

So the scenario is, there are 2000 lots and 200 prizes. People form a line and pick a random lot one by one. The content of the lot is announced right after it is drawn. I, as a participant, can change my position in the line whenever I want, and I am the only one allowed to do so.

I know the prior probability of winning would be the same (10%) for each position, and the posterior probability would definitely change; based on the results of the previous people, the winning rate would also change.

But my question is:

Is there a certain position in the line that has a higher chance of winning, greater than 1/10?

My argument is, I know that in any case the winning rate of the first person is 1/10 and the losing rate is 9/10. So it is more likely that after the first person picks a lot, the winning rate of the second person will increase.

Or is that actually not the case? Help me please, my brain hurts so much.

Sincerely,

Sunny

Joyce says

Reading these two articles about the Monty Hall Problem was great fun. Thank you, Jim, for your great teaching. My thoughts swung between all the probabilities. And now I believe that contestants choose a door with a prize 33% of the time, and without a prize 66% of the time. Monty always opens a door without a prize, which does not affect the probability of winning the prize. So, always switching doors increases the chance of winning the prize because contestants choose a wrong door 66% of the time, rather than 50%.

Kenny says

Hi, I wonder what the probabilities would look like if Monty opened a door at random (and it happened to be empty)?

Rob says

Hi, my question is this: in the Monty Hall problem, the probabilities are initially 33.33% for each door, but if we switch, the chances double to 66.66%.

Suppose I have been watching the Let’s Make a Deal show, and I predict a 0.6 chance that the prize is in Box A, a 0.1 chance that it is in Box B, and a 0.3 chance that it is in Box C. What would be my best strategy for tackling the Monty Hall problem?

Jim Frost says

Hi Rob,

The reason you have a 1/3 chance initially is because you randomly choose one door out of three. The way the game is set up, switching doors switches the outcome. So, if you were going to win by not switching, you’d lose by switching. And vice versa. Consequently, because you have a 1/3 chance of winning by sticking with your original choice, that means you have a 1/3 chance of losing by switching. Therefore, if you switch, you have a 1 – 1/3 = 2/3 chance of winning by switching.

For your scenario, I’ll assume that you know based on insider knowledge that these probabilities are correct (just like we know with the Monty Hall problem). That your numbers are correct probabilities and not guesses. I’ll further assume that your game is played the same way as the regular Monty Hall problem and that switching doors will always flip the outcome. If any of these assumptions are incorrect, it changes the outcomes.

Using that logic, if your original choice is Box A, which has a 0.6 chance of winning, then you have a 0.4 chance of winning by switching. However, if you choose Box B, you have a 0.1 chance of winning by staying but a whopping 0.9 chance of winning by switching. For Box C, you have a 0.3 chance of winning by staying and a 0.7 chance of winning by switching.

Consequently, I’d choose Box B and then switch, which gives me a 90% chance of winning. The reason this works out best is that there’s a low chance (0.1) of the prize being in Box B and a high 90% chance that it’s in either Box A or Box C, but you don’t know which one of the two. However, Monty helps you by opening the box (A or C) that does not have the prize. Consequently, there’s a 90% chance the prize is in the one other unopened box that you can switch to!
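That logic is short enough to sketch in a few lines (the priors are Rob’s hypothetical numbers, and I assume switching always flips the outcome, as in the standard game):

```python
# Insider-knowledge scenario: prior chance the prize is in each box
priors = {"A": 0.6, "B": 0.1, "C": 0.3}

# Switching flips the outcome, so the chance of winning by switching
# is one minus the chance your first pick was right
switch_odds = {box: 1 - p for box, p in priors.items()}
best = max(switch_odds, key=switch_odds.get)
print(best, switch_odds[best])   # B 0.9
```

Picking the least likely box and switching maximizes your chances.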

Great question!

Juha says

Thanks for the many entertaining articles, Jim! Wouldn’t it be easier and quicker to prove this with a much smaller sample size if, instead of three cards (or doors or boxes…), we had 10 cards, or why not all 52 cards from the deck, and Monty always leaves only two cards on the table to either stay or switch? Regards, Juha

Chuck says

Jim, this is a fantastic post and a great follow-up to your previous article. I’ve been interested in this counterintuitive example since reading Charles Wheelan’s “Naked Statistics”, where he spends some time discussing this.

One thing I’m curious about, is if there’s any data on how often contestants actually take advantage of this ability to double their chance at getting the correct door. How often do contestants actually change their choice? Is there anything to suggest that, once we make a decision while under pressure, we’re more likely to stick with it rather than change it? Are there differences between men and women contestants in the rate at which they change their decision/not change their decision?

Keep up the great work with these great thought experiments!

Jim Frost says

Hi Chuck,

Thanks so much for the kind comment. I’m glad you enjoyed the post!

I believe that it is commonly assumed that most contestants will stay. As you mention, they don’t see any reason to switch. However, the only actual data that I’m aware of is from the Mythbusters episode. They not only compared the two strategies of always switching versus always staying, but in a separate experiment, they brought in people to observe which choice they’d make. It’s been a while since I’ve seen that episode, but if I recall correctly, all or nearly all of the “contestants” stayed with their original choice.

When I was a young child, I saw a few episodes of the show. Whenever Monty offered the opportunity to switch, I always thought he had ulterior motives! That he was trying to get you to change your choice so he wouldn’t have to give out the prize. Little did I know that he was trying to help you!

John Vokey says

Your description of the problem leaves out a crucial feature of the original: Monty Hall does not show a door at random; he always exposes a door without a prize. That conditional is crucial. You assume that conditional in your simulations.

Jim Frost says

Hi John,

Yes, you’re right, that is a crucial feature. I do mention it several times in this post, both in the very first paragraph and at the end when I explain how we performed the experiment. However, I cover the importance of that fact in MUCH greater detail in my other post about the Monty Hall Problem. That post explains why and how it works out that way, and that condition plays a central role. This post was meant to cover the problem from a different angle.

Tatiana says

This was a very fun read Jim. Thanks so much for making statistics so cool!

Jim Frost says

Hi Tatiana, thanks! Your kind words make my day!