# Helen rolls a die and flips a coin. Calculate the probability that she gets a 4 and a tail.

1/12

Step-by-step explanation:

Probability of rolling a 4 = 1/6

Probability of a tail = 1/2

Since the die roll and the coin flip are independent events, multiply the probabilities:

1/6 x 1/2 = 1/12

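The product rule above can be double-checked by brute-force enumeration of the 12 equally likely (die, coin) outcomes; a minimal sketch in Python:

```python
from fractions import Fraction
from itertools import product

# All equally likely (die, coin) outcomes: 6 faces x 2 coin sides = 12
outcomes = list(product(range(1, 7), ["H", "T"]))
favorable = [o for o in outcomes if o == (4, "T")]
prob = Fraction(len(favorable), len(outcomes))
print(prob)  # 1/12
```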

## Related Questions

The price of a pair of headphones at Store A is 3/4 the price at Store B. The price at Store A is \$89.25. Find how much you save by buying the headphones at Store A.
a. \$119
b. \$72
c. \$29.75
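For what it's worth, the arithmetic can be sketched directly (the savings work out to choice c):

```python
from fractions import Fraction

store_a = Fraction(8925, 100)       # $89.25
store_b = store_a / Fraction(3, 4)  # A is 3/4 of B, so B = A divided by 3/4
savings = store_b - store_a
print(float(store_b), float(savings))  # 119.0 29.75
```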


Which of the following groups have terms that can be used interchangeably?
a. critical value, probability, proportion
b. percentage, probability, proportion
c. critical value, percentage, proportion
d. critical value, percentage, probability

The answer is b. Percentage, probability, and proportion are all closely related terms that can often be used interchangeably (they are not 100% identical, but each expresses a part of a whole); critical value means something different.

5(x - 3) > 15

Let's solve:

Divide both sides by 5: x - 3 > 3

Add 3 to both sides: x > 6
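A quick numeric spot-check of the solution x > 6 (a minimal sketch, testing values on both sides of the boundary):

```python
# The inequality 5(x - 3) > 15 should hold exactly when x > 6
for x in [5, 6, 6.001, 7, 100]:
    holds = 5 * (x - 3) > 15
    print(x, "satisfies" if holds else "fails")
```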

Which could be the first step in simplifying this expression? Check all that apply: (x^3 x^-6)^2
a. (x^-18)^2
b. (x^-3)^2
c. (x^-2)^2
d. x^6 x^-12
e. x^5 x^-4

The answers are B and D.

Step-by-step explanation:

You can either add the exponents inside the parentheses first, x^3 x^-6 = x^-3, giving (x^-3)^2, or apply the outer power to each factor first, giving x^6 x^-12.

ON EDGE 2021

I think that the answers are B and D.

Step-by-step explanation:

(x^-3)^2 — combine the exponents inside the parentheses first (3 + (-6) = -3)

x^6 x^-12 — apply the outer square to each factor first

I'm so sorry if it's wrong.
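Either first step leads to the same simplified expression, x^-6; a quick check with a concrete value (a sketch, using x = 2 and exact rational arithmetic):

```python
from fractions import Fraction

x = Fraction(2)  # any nonzero test value
original = (x**3 * x**-6) ** 2
step_b = (x**-3) ** 2    # combine exponents inside first
step_d = x**6 * x**-12   # apply the outer power first
print(original == step_b == step_d == x**-6)  # True
```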

Three friends — let's call them X, Y, and Z — like to play pool (pocket billiards). There are some pool games that involve three players, but these people instead like to play 9-ball, a game between two players with the property that a tie cannot occur (there is always a winner and a loser in any given round). Since it is not possible for all three of these friends to play at the same time, they use a simple rule to decide who plays in the next round: loser sits down. For example, suppose that in round 1, X and Y play; then if X wins, Y sits down and the next game is between X and Z.

Question: in the long run, which two players square off against each other most often? Least often?

So far what I've described is completely realistic, but now we need to make a (strong) simplifying assumption. In practice people get tired and/or discouraged, so the probability that (say) X beats Y in any single round is probably not constant in time, but let's pretend it is, to get a kind of baseline analysis: let 0 < pXY < 1 be the probability that X beats Y in any given game, and define 0 < pXZ < 1 and 0 < pYZ < 1 correspondingly. Consider the stochastic process P that keeps track of which pair of players is at the table.

Step-by-step explanation:

(a) If the state space is taken as {(X,Y), (X,Z), (Y,Z)}, the probability of transitioning from one state, say (X,Y), to another, say (X,Z), is the probability that Y loses to X: if X and Y are playing and Y loses, then X and Z play the next match. As stated in the question, this probability is constant in time, so the probabilities of moving from one state to another are constant over time. Hence, the Markov chain is time-homogeneous.

(b) Order the states as 1 = (X,Y), 2 = (X,Z), 3 = (Y,Z). The state transition matrix is:

P = | 0     pXY        1 - pXY |
    | pXZ   0          1 - pXZ |
    | pYZ   1 - pYZ    0       |

where the rows index the state at time n and the columns index the state at time n+1.

Consider the entries of the matrix. For example, if players X and Y are playing at time n (row 1), then X beats Y with probability pXY; since Y is the loser, he sits out and X plays Z (column 2) at the next time step. Hence, P(1,2) = pXY. P(1,1) = 0 because if X and Y are playing, one of them must lose, so X and Y cannot play each other again at the next time step. P(1,3) = 1 - pXY, because if X and Y are playing and Y beats X, the probability of which is 1 - pXY, then Y and Z play each other at the next time step. Similarly, P(2,1) = pXZ, because if X and Z are playing and X beats Z with probability pXZ, then X plays Y at the next time step.

(c) At equilibrium, vP = v,

i.e., the steady-state distribution v of the Markov chain is such that after applying the transition probabilities (i.e., multiplying by the matrix P), we get back the same distribution v. In other words, v is a left eigenvector of P with eigenvalue 1; this eigenvalue always exists because every row of P sums to 1.

Writing v = (v1, v2, v3) for the long-run fractions of games of type (X,Y), (X,Z), (Y,Z) and solving vP = v together with v1 + v2 + v3 = 1 gives, up to normalization:

v1 proportional to 1 - (1 - pXZ)(1 - pYZ)

v2 proportional to 1 - (1 - pXY) pYZ

v3 proportional to 1 - pXY pXZ

Each component has the form 1 minus the probability that the player sitting out that round would beat both opponents. So the pairing whose spectator is the weakest player occurs most often: in the long run, the two strongest players square off most often and the two weakest players least often.

As a sanity check, if all games are fair (pXY = pXZ = pYZ = 1/2), each unnormalized component equals 3/4, so the steady-state distribution is v = (1/3, 1/3, 1/3). Starting from v0 = (1/3, 1/3, 1/3) and iterating v(n+1) = v(n) P for a large number of steps (say n = 1000) leaves the distribution unchanged, which verifies that v is the steady state.
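The steady state can be verified numerically by power iteration on the transition matrix; a sketch (the win probabilities pXY = 0.6, pXZ = 0.7, pYZ = 0.5 are arbitrary example values, not from the problem):

```python
# Power iteration on the pool-players Markov chain.
# States: 1 = (X,Y) playing, 2 = (X,Z), 3 = (Y,Z).
p_xy, p_xz, p_yz = 0.6, 0.7, 0.5  # example win probabilities

P = [
    [0.0,  p_xy,     1 - p_xy],  # X vs Y: winner stays, loser sits
    [p_xz, 0.0,      1 - p_xz],  # X vs Z
    [p_yz, 1 - p_yz, 0.0],       # Y vs Z
]

v = [1 / 3, 1 / 3, 1 / 3]  # initial distribution v0
for _ in range(1000):
    v = [sum(v[i] * P[i][j] for i in range(3)) for j in range(3)]

# Closed-form candidate: component k is 1 - P(sitting player beats both)
w = [1 - (1 - p_xz) * (1 - p_yz),  # Z sits
     1 - (1 - p_xy) * p_yz,        # Y sits
     1 - p_xy * p_xz]              # X sits
s = sum(w)
w = [x / s for x in w]

print([round(x, 6) for x in v])
print([round(x, 6) for x in w])  # matches the power-iteration result
```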