==Problem==

Let <math>A_1A_2A_3\ldots A_{12}</math> be a dodecagon (<math>12</math>-gon). Three frogs initially sit at <math>A_4,A_8,</math> and <math>A_{12}</math>. At the end of each minute, simultaneously, each of the three frogs jumps to one of the two vertices adjacent to its current position, chosen randomly and independently with both choices being equally likely. All three frogs stop jumping as soon as two frogs arrive at the same vertex at the same time. The expected number of minutes until the frogs stop jumping is <math>\frac mn</math>, where <math>m</math> and <math>n</math> are relatively prime positive integers. Find <math>m+n</math>.
==Solution 1==

Define the <b>distance</b> between two frogs as the number of sides between them that do not contain the third frog.

Let <math>E(a,b,c)</math> denote the expected number of minutes until the frogs stop jumping, such that the distances between the frogs are <math>a,b,</math> and <math>c</math> (in either clockwise or counterclockwise order). Without loss of generality, assume that <math>a\leq b\leq c.</math>

We wish to find <math>E(4,4,4).</math> Note that:
* At any moment before the frogs stop jumping, the only possibilities for <math>(a,b,c)</math> are <math>(4,4,4), (2,4,6),</math> and <math>(2,2,8).</math>
* <math>E(a,b,c)</math> does not depend on the actual positions of the frogs, but depends on the distances between the frogs.
* At the end of each minute, each frog has <math>2</math> outcomes. So, there are <math>2^3=8</math> outcomes in total.

We have the following system of equations:
<cmath>\begin{align*}
E(4,4,4) &= 1 + \frac14 \cdot E(4,4,4) + \frac34 \cdot E(2,4,6), \\
E(2,4,6) &= 1 + \frac18 \cdot E(4,4,4) + \frac12 \cdot E(2,4,6) + \frac18 \cdot E(2,2,8), \\
E(2,2,8) &= 1 + \frac14 \cdot E(2,4,6) + \frac14 \cdot E(2,2,8).
\end{align*}</cmath>
Rearranging and simplifying each equation, we get
<cmath>\begin{align*}
E(4,4,4) &= \frac43 + E(2,4,6), \\
E(2,4,6) &= 2 + \frac14 \cdot E(4,4,4) + \frac14 \cdot E(2,2,8), \\
E(2,2,8) &= \frac43 + \frac13 \cdot E(2,4,6).
\end{align*}</cmath>
Substituting the first and third equations into the second equation, we obtain
<cmath>E(2,4,6) = 2 + \frac14\left(\frac43 + E(2,4,6)\right) + \frac14\left(\frac43 + \frac13 \cdot E(2,4,6)\right),</cmath>
from which <math>E(2,4,6)=4.</math> Substituting this into the first equation gives <math>E(4,4,4)=\frac{16}{3}.</math>

Therefore, the answer is <math>16+3=\boxed{019}.</math>
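As a sanity check (not part of the original solution), the system above can be solved with exact rational arithmetic. The sketch below uses a small hand-rolled Gaussian elimination over Python's `fractions.Fraction` and recovers <math>E(4,4,4)=\frac{16}{3}</math>:

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b exactly by Gaussian elimination over the rationals."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]                # normalize pivot row
        for r in range(n):
            if r != c:                                     # eliminate column c
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    return [row[n] for row in M]

F = Fraction
# Move the unknowns to the left-hand side of the three equations;
# unknowns ordered E(4,4,4), E(2,4,6), E(2,2,8), right-hand side is 1.
A = [[F(3, 4), F(-3, 4), F(0)],
     [F(-1, 8), F(1, 2), F(-1, 8)],
     [F(0), F(-1, 4), F(3, 4)]]
e444, e246, e228 = solve(A, [1, 1, 1])
print(e444, e246, e228)  # 16/3 4 8/3
```

This confirms <math>E(2,4,6)=4</math>, <math>E(2,2,8)=\frac83</math>, and <math>E(4,4,4)=\frac{16}{3}</math>.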
~Ross Gao ~MRENTHUSIASM
==Solution 2 (Markov Chain)==

We can solve the problem by removing <math>1</math> frog and calculating the expected time for the remaining <math>2</math> frogs. In the original problem, when the movement stops, <math>2</math> of the <math>3</math> frogs meet. Because the <math>3</math> frogs cannot all meet at one vertex, by symmetry the probability that those two specific frogs meet is <math>\tfrac13</math>. If the expected time for the two-frog problem is <math>E'</math>, then the expected time for the original problem is <math>\tfrac13 E'</math>.

The distance between the two frogs can only be <math>0</math>, <math>2</math>, <math>4</math>, or <math>6</math>. We use the distances as the states to draw the following [https://en.wikipedia.org/wiki/Markov_chain Markov chain]. This Markov chain is much simpler than that of the Solution 1 Supplement in the <b>Remark</b> section.

[[File:AIME-I-2021-12a.png| 600px |center]]

<cmath>\begin{align*}
E(2) &= 1 + \frac12 \cdot E(2) + \frac14 \cdot E(4), \\
E(4) &= 1 + \frac14 \cdot E(2) + \frac12 \cdot E(4) + \frac14 \cdot E(6), \\
E(6) &= 1 + \frac12 \cdot E(4) + \frac12 \cdot E(6).
\end{align*}</cmath>

Solving the above system of equations gives <math>E(4) = 16</math>. The answer for the original problem is <math>\frac{16}{3}</math>, so <math>16 + 3 = \boxed{019}</math>.
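The two-frog system can be reduced by hand and then checked with exact fractions; this is an illustrative sketch, not part of the original solution:

```python
from fractions import Fraction as F

# From the first and third equations: E(2) = 2 + E(4)/2 and E(6) = 2 + E(4).
# Substituting both into the middle equation leaves one unknown, E(4):
#   E(4) = 1 + (1/4)(2 + E(4)/2) + (1/2)E(4) + (1/4)(2 + E(4))
E4 = (F(1) + F(1, 4) * 2 + F(1, 4) * 2) / (F(1) - F(1, 4) * F(1, 2) - F(1, 2) - F(1, 4))
E2, E6 = 2 + E4 / 2, 2 + E4
answer = E4 / 3  # expected time for the original three-frog problem
print(E4, answer)  # 16 16/3
```

The collected coefficient of <math>E(4)</math> on the right-hand side is <math>\tfrac18 + \tfrac12 + \tfrac14 = \tfrac78</math>, so <math>\tfrac18 E(4) = 2</math> and <math>E(4) = 16</math>.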
~[https://artofproblemsolving.com/wiki/index.php/User:Isabelchen isabelchen]
==Remark (Markov Chain)==

===Solution 1 Supplement===

Solution 1 can be represented by the following [https://en.wikipedia.org/wiki/Markov_chain Markov chain]:

[[File:AIME-I-2021-12b.png| 600px |center]]
From state <math>(4, 4, 4)</math> to state <math>(4, 4, 4)</math>: the <math>3</math> frogs must jump in the same direction, <math>2 \cdot \frac18 = \frac14</math>.

From state <math>(4, 4, 4)</math> to state <math>(2, 4, 6)</math>: <math>2</math> frogs must jump in the same direction, and the other must jump in the opposite direction, <math>\binom32 \cdot 2 \cdot \frac18 = \frac34</math>.

From state <math>(2, 4, 6)</math> to state <math>(4, 4, 4)</math>: the <math>2</math> frogs with a distance of <math>4</math> between them must jump in the same direction so that they move away from the other frog, and the other frog must jump in the opposite direction from those <math>2</math> frogs, <math>\frac18</math>.

From state <math>(2, 4, 6)</math> to state <math>(2, 4, 6)</math>: the <math>3</math> frogs can all jump in the same direction; or the <math>2</math> frogs with a distance of <math>2</math> between them jump away from each other while the other frog jumps toward the closest frog; or the <math>2</math> frogs with a distance of <math>6</math> between them jump toward each other and away from the third frog, while the third frog jumps toward the closest frog; <math>(2 + 1 + 1) \cdot \frac18 = \frac12</math>.

From state <math>(2, 4, 6)</math> to state <math>(2, 2, 8)</math>: the <math>2</math> frogs with a distance of <math>2</math> between them must jump toward the other frog, and the other frog must jump toward those <math>2</math> frogs, <math>\frac18</math>.

From state <math>(2, 2, 8)</math> to state <math>(2, 4, 6)</math>: the <math>2</math> frogs with no frog between them must both jump in the same direction away from the other frog, and the other frog must also jump away from those <math>2</math> frogs, <math>2 \cdot \frac18 = \frac14</math>.

From state <math>(2, 2, 8)</math> to state <math>(2, 2, 8)</math>: the <math>3</math> frogs must all jump in the same direction, <math>2 \cdot \frac18 = \frac14</math>.

From state <math>(2, 2, 8)</math> to state <math>(0, x, y)</math>: one pair of frogs with a distance of <math>2</math> between them must jump toward each other, and the other frog can jump in either direction, <math>\frac12 \cdot \frac12 \cdot 2 = \frac12</math>.

From state <math>(2, 4, 6)</math> to state <math>(0, x, y)</math>: the <math>2</math> frogs with a distance of <math>2</math> between them must jump toward each other, and the other frog can jump in either direction, <math>\frac12 \cdot \frac12 = \frac14</math>.

Because <math>a + b + c = 12</math>, we can use <math>(a, b)</math> to represent the states, which is simpler than using <math>(a, b, c)</math> as in Solution 1.
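The transition structure above can be sanity-checked by direct simulation. The sketch below (an illustration; the state encoding is an assumption of this sketch, not from the original) runs the three-frog process many times, confirming that only the gap profiles <math>(4,4,4)</math>, <math>(2,4,6)</math>, and <math>(2,2,8)</math> occur before stopping, and that the average stopping time is close to <math>\frac{16}{3} \approx 5.33</math>:

```python
import random

random.seed(2021)  # fixed seed for a reproducible run

def run():
    """Simulate one game; return (stopping time, set of gap states seen)."""
    frogs = [4, 8, 0]  # A_4, A_8, A_12 as residues mod 12
    t, seen = 0, set()
    while len(set(frogs)) == 3:
        p = sorted(frogs)
        # clockwise gaps between consecutive frogs, as a sorted triple
        seen.add(tuple(sorted((p[1] - p[0], p[2] - p[1], p[0] + 12 - p[2]))))
        frogs = [(f + random.choice((-1, 1))) % 12 for f in frogs]
        t += 1
    return t, seen

trials = 100_000
total, states = 0, set()
for _ in range(trials):
    t, s = run()
    total += t
    states |= s

print(round(total / trials, 2))  # close to 16/3 = 5.33...
print(sorted(states))            # [(2, 2, 8), (2, 4, 6), (4, 4, 4)]
```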
===Summary===

The two Markov chains above are [https://en.wikipedia.org/wiki/Absorbing_Markov_chain absorbing Markov chains]. The state in which <math>2</math> frogs meet is the absorbing state. This problem asks for the [https://en.wikipedia.org/wiki/Absorbing_Markov_chain#Expected_number_of_steps expected number of steps] before being absorbed into the absorbing state.

Let <math>p_{ij} = P(X_{n+1} = j \mid X_n = i)</math> be the probability that state <math>i</math> transitions to state <math>j</math> on the next step.

Let <math>e_i</math> be the expected number of steps before absorption when starting from transient state <math>i</math>. Then
<cmath>e_i = \sum_{j} p_{ij} \cdot (1 + e_{j}) = \sum_{j} p_{ij} + \sum_{j} p_{ij} \cdot e_{j} = 1 + \sum_{j} p_{ij} \cdot e_{j},</cmath>
since <math>\sum_{j} p_{ij} = 1</math>. That is, <math>e_i</math> is <math>1</math> plus the sum, over all next states <math>j</math>, of the products <math>p_{ij} \cdot e_{j}</math>.
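The fixed-point equation <math>e_i = 1 + \sum_{j} p_{ij} \cdot e_{j}</math> can also be iterated numerically (value iteration). This sketch assumes the Solution 1 chain with transient states ordered <math>(4,4,4)</math>, <math>(2,4,6)</math>, <math>(2,2,8)</math>; the iteration converges because the spectral radius of the transient block is below <math>1</math>:

```python
# Transient-to-transient transition matrix Q for the Solution 1 chain,
# states ordered (4,4,4), (2,4,6), (2,2,8); the missing row mass is the
# probability of absorption (two frogs meeting).
Q = [[0.25, 0.75, 0.0],
     [0.125, 0.5, 0.125],
     [0.0, 0.25, 0.25]]

e = [0.0, 0.0, 0.0]
for _ in range(200):  # iterate e_i <- 1 + sum_j Q[i][j] * e_j to a fixed point
    e = [1 + sum(q * ej for q, ej in zip(row, e)) for row in Q]

print([round(x, 4) for x in e])  # [5.3333, 4.0, 2.6667]
```

The first entry matches <math>E(4,4,4)=\frac{16}{3}</math> from Solution 1.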
[https://artofproblemsolving.com/wiki/index.php/2014_AMC_12B_Problems/Problem_22 2014 AMC 12B Problem 22] and [https://artofproblemsolving.com/wiki/index.php/2017_AMC_12A_Problems/Problem_22 2017 AMC 12A Problem 22] are similar problems with simpler states; both can be solved with absorbing Markov chains.
~[https://artofproblemsolving.com/wiki/index.php/User:Isabelchen isabelchen]
==Video Solution==
https://www.youtube.com/watch?v=nt4xwt5Ldtc
~Math Problem Solving Skills
==See Also==
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.