Revision as of 14:55, 30 August 2006

For Prof. Welch

(This is also for anyone else who sees this and happens to know the answer.)

Let's consider the continuous model for the Kalman filter learning tool's "sloshing" case, where the water is sloshing but not filling. Let $L(t)$ be the height of the water at time $t$. Then

$L(t) = c + k_s \sin(\omega t)$ for some constants $c$ and $k_s$, perhaps $c=.3$ and $k_s=.05$.

In the continuous model, the "state vector" at time $t$ is $x(t) = \begin{pmatrix} L(t)\\ k_s \end{pmatrix}$.

$x'(t) = \begin{pmatrix} 0& \omega \cos(\omega t)\\ 0&0 \end{pmatrix} x(t) + w(t)$ (Eqn. 1)

where $w(t)$ is the (random) "process error."

We need to discretize this continuous system of ODEs.

For any $t$ and $t_0$, $\Phi(t,t_0) = \begin{pmatrix} 1&\sin(\omega t) - \sin(\omega t_0) \\ 0&1\end{pmatrix}$

is a fundamental matrix for $x'(t)=\begin{pmatrix} 0& \omega \cos(\omega t)\\ 0&0 \end{pmatrix} x(t)$, and $\Phi(t_0,t_0)=\begin{pmatrix} 1&0\\ 0&1\end{pmatrix}$. Any solution $x(t)$ of Eqn. 1 can be written (by variation of parameters) as

$x(t) = \Phi(t,t_0)x(t_0) + \int_{t_0}^t \Phi(t,\tau)w(\tau) d \tau$.

Letting $t=t_k$ and $t_0 = t_{k-1}$, we have that

${x(t_k) = \Phi(t_k,t_{k-1})x(t_{k-1}) + \int_{t_{k-1}}^{t_k} \Phi(t_k,\tau)w(\tau) d \tau}$.

Thus the "discrete time state transition matrix" is

$A_{k-1} = \Phi(t_k, t_{k-1}) = \begin{pmatrix} 1&\sin(\omega t_k) - \sin(\omega t_{k-1}) \\ 0&1\end{pmatrix}$.
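As a quick numerical sanity check (my own sketch, not part of the original derivation): with the noise $w$ set to zero, the exact state is $x(t) = \begin{pmatrix} c + k_s \sin(\omega t) \\ k_s \end{pmatrix}$, and the matrix above should carry $x(t_{k-1})$ to $x(t_k)$ exactly. The values of $c$ and $k_s$ follow the example values in the text; $\omega$ and the two time points below are arbitrary choices.

```python
import math

# Example constants: c and k_s from the text; omega and the times are arbitrary.
c, k_s, omega = 0.3, 0.05, 2.0

def x_exact(t):
    # Exact noise-free state: (L(t), k_s) with L(t) = c + k_s*sin(omega*t).
    return (c + k_s * math.sin(omega * t), k_s)

t_prev, t_k = 0.7, 0.9

# Discrete state transition matrix A_{k-1} = Phi(t_k, t_{k-1}).
A = [[1.0, math.sin(omega * t_k) - math.sin(omega * t_prev)],
     [0.0, 1.0]]

# Propagate the exact state at t_{k-1} forward one step.
xp = x_exact(t_prev)
x_pred = (A[0][0] * xp[0] + A[0][1] * xp[1],
          A[1][0] * xp[0] + A[1][1] * xp[1])

print(x_pred, x_exact(t_k))  # the two should agree to machine precision
```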

However, this disagrees with Equation (6) on page 4 of the document "Team 18: The Kalman Filter Learning Tool, Dynamic and Measurement Models," where it is stated that the discrete time state transition matrix is

$A(t) = \begin{pmatrix} 1& \omega \cos(\omega t)\\ 0&1 \end{pmatrix}$.

Is this a mistake in the "Team 18" document, or am I making some mistake?

This ends my question for Prof. Welch. Thanks.






Personal info

Name: Daniel O'Connor

(full name: Daniel Verity O'Connor)

Location: Los Angeles


Contributions


Random Math Problems

The following problems are part of a discussion I'm having with someone I know. Please don't comment about them, and most importantly please don't correct any errors or give any hints about better solutions.


Suppose you have a fair six-sided die and you're going to roll the die again and again indefinitely. What's the expected number of rolls until a $1$ comes up on the die?


The probability that it will take one roll is $\frac{1}{6}$.

The probability that it will take two rolls is $\left(\frac56 \right)\left(\frac16 \right)$.

The probability that it will take three rolls is $\left(\frac56 \right)^2 \left(\frac16 \right)$.

The probability that it will take four rolls is $\left(\frac56 \right)^3 \left(\frac16 \right)$.

And so on.

So, the expected number of rolls that it will take to get a $1$ is:

$1\cdot \frac{1}{6} + 2\cdot \left(\frac56 \right)\left(\frac16 \right) + 3\cdot \left(\frac56 \right)^2 \left(\frac16 \right) + 4 \cdot \left(\frac56 \right)^3 \left(\frac16 \right) + \cdots$.

What's the sum of this infinite series? It looks kind of like an infinite geometric series, but not exactly. Factoring out a $\frac16$ makes it look a bit more like an infinite geometric series:

$\left(\frac16 \right) \left(1 + 2\cdot \left(\frac56 \right) + 3\cdot \left(\frac56 \right)^2 + 4 \cdot \left(\frac56 \right)^3 + \cdots \right)$

This is similar to a geometric series, which we know how to sum. But we have those annoying factors of $2$, $3$, $4$, etc. to worry about. Maybe we could play around with the formula for a geometric series to get a formula for this series. The formula for a geometric series is:

$1 + r + r^2 + r^3 + r^4 + \cdots = \frac{1}{1-r}$.

Differentiating both sides of this with respect to $r$, we find:

$1 + 2r + 3r^2 + 4r^3 + \cdots = -(1-r)^{-2}(-1) = \frac{1}{(1-r)^2}$.
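A quick numerical check of this identity (my own sketch, not part of the argument): the partial sums of $1 + 2r + 3r^2 + \cdots$ should approach $\frac{1}{(1-r)^2}$. Taking $r = \frac56$, the value relevant to the series at hand:

```python
# Compare a long partial sum of 1 + 2r + 3r^2 + ... against 1/(1-r)^2.
r = 5.0 / 6.0
partial = sum((n + 1) * r**n for n in range(10000))
closed_form = 1.0 / (1.0 - r) ** 2
print(partial, closed_form)  # both about 36
```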

So, we find that

$\left(\frac16 \right) \left(1 + 2\cdot \left(\frac56 \right) + 3\cdot \left(\frac56 \right)^2 + 4 \cdot \left(\frac56 \right)^3 + \cdots \right) = \frac16 \frac{1}{(1-\frac56)^2} = \frac16 (36) = 6$.

Which seems like a very reasonable answer, given that the die has six sides.
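The answer can also be checked by simulation (my own sketch, not part of the argument): roll a simulated die repeatedly until a $1$ appears, and average the roll counts over many trials. The mean should come out near $6$.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def rolls_until_one():
    # Count rolls of a fair six-sided die until the first 1 appears.
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) == 1:
            return n

trials = 200_000
mean = sum(rolls_until_one() for _ in range(trials)) / trials
print(mean)  # close to 6
```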



What's the expected number of rolls it will take in order to get all six numbers at least once?



As a first step, let's find out the expected number of rolls it will take in order to get both $1$ and $2$ at least once.

Let $n \ge 2$ be an integer. What's the probability that it will take $n$ rolls to get a $1$ and a $2$?

Consider two different events: Event $OneFirst$ is the event that the $n^{th}$ roll is a $2$, and the first $n-1$ rolls give at least one $1$ and also no $2$'s. Event $TwoFirst$ is the event that the $n^{th}$ roll is a $1$, and the first $n-1$ rolls give at least one $2$ and also no $1$'s. The probability that it will take $n$ rolls to get a $1$ and a $2$ is $P(OneFirst)+ P(TwoFirst)$.

What is $P(OneFirst)$?

Let $A$ be the event that the $n^{th}$ roll is a $2$. Let $B$ be the event that the first $n-1$ rolls give at least one $1$. Let $C$ be the event that the first $n-1$ rolls give no $2$'s.

$P(OneFirst)=P(A \cap B \cap C)$.

Event $A$ is independent of the event $B \cap C$, so:

$P(A \cap (B \cap C)) = P(A)P(B \cap C)$.

$P(A)$ is $\frac16$, but what is $P(B \cap C)$?

Events $B$ and $C$ aren't independent. But we can say this:

$P(B \cap C)= P(C)P(B|C)= P(C)\left(1-P(B^c|C)\right)$. (Here $B^c$ is the complement of the event $B$. I have used the fact that $P(X|Y)+P(X^c|Y)=1$.)

$P(C) = \left(\frac56 \right)^{n-1}$.

$P(B^c|C) = \frac{P(B^c \cap C)}{P(C)}= \frac{(\frac46)^{n-1}}{(\frac56)^{n-1}}$.

Thus, $P(B \cap C)= \left(\frac56 \right)^{n-1} \left(1- \frac{ (\frac46)^{n-1} }{ (\frac56)^{n-1} } \right) = \left(\frac56 \right)^{n-1} - \left(\frac46 \right)^{n-1}$.

So $P(A \cap (B \cap C) ) = \left( \frac16 \right) \left( \left(\frac56 \right)^{n-1} - \left(\frac46 \right)^{n-1} \right)$.

We have found $P(OneFirst)$. We're close to finding the probability that it will take $n$ rolls to get both a $1$ and a $2$.

$P(TwoFirst) = P(OneFirst)$. Thus the probability that it will take $n$ rolls to get both a $1$ and a $2$ is $P(OneFirst)+ P(TwoFirst) = \left( \frac13 \right) \left( \left(\frac56 \right)^{n-1} - \left(\frac46 \right)^{n-1} \right)$.
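As a consistency check on this formula (my own sketch): summed over all $n \ge 2$, these probabilities should total $1$, since the process eventually produces both a $1$ and a $2$ with certainty.

```python
# Partial sum of (1/3)((5/6)^(n-1) - (4/6)^(n-1)) over n = 2, 3, ...;
# the tail beyond n = 2000 is negligible.
total = sum((1 / 3) * ((5 / 6) ** (n - 1) - (4 / 6) ** (n - 1))
            for n in range(2, 2000))
print(total)  # about 1
```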

Okay.....

So what's the expected number of rolls it will take to get both a $1$ and a $2$?

It is:

${2 \cdot \left( \frac13 \right) \left( \left(\frac56 \right)^1 - \left(\frac46 \right)^1 \right) + 3 \cdot \left( \frac13 \right) \left( \left(\frac56 \right)^2 - \left(\frac46 \right)^2 \right) + 4 \cdot \left( \frac13 \right) \left( \left(\frac56 \right)^3 - \left(\frac46 \right)^3 \right) + \cdots}$

$= \left( \frac13 \right) \left( 2 \left(\frac56 \right)^1 + 3 \left(\frac56 \right)^2 + 4 \left(\frac56 \right)^3 + \cdots \right) - \left( \frac13 \right) \left( 2\left(\frac46 \right)^1 + 3 \left(\frac46 \right)^2 + 4 \left(\frac46 \right)^3 + \cdots \right)$

$= \left( \frac13 \right) \left( -1 + 1 + 2 \left(\frac56 \right)^1 + 3 \left(\frac56 \right)^2 + 4 \left(\frac56 \right)^3 + \cdots \right) - \left( \frac13 \right) \left(-1 + 1 +  2\left(\frac46 \right)^1 + 3 \left(\frac46 \right)^2 + 4 \left(\frac46 \right)^3 + \cdots \right)$

$= \left( \frac13 \right) \left( -1 + \frac{1}{(1-\frac56)^2} \right) - \left( \frac13 \right)  \left( -1 + \frac{1}{(1-\frac46)^2} \right)$

$= \left( \frac13 \right) ( -1 + 36 + 1 - 9 ) = \left( \frac13 \right) (27) = 9$.

The expected number of rolls is $9$.
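This answer can also be checked by simulation (my own sketch, not part of the argument): roll a simulated die until both a $1$ and a $2$ have appeared, and average over many trials. The mean should come out near $9$.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def rolls_until_one_and_two():
    # Count rolls of a fair six-sided die until both 1 and 2 have appeared.
    seen, n = set(), 0
    while seen != {1, 2}:
        n += 1
        roll = random.randint(1, 6)
        if roll in (1, 2):
            seen.add(roll)
    return n

trials = 200_000
mean = sum(rolls_until_one_and_two() for _ in range(trials)) / trials
print(mean)  # close to 9
```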


Using a similar (but a little more complicated) method, I get that the expected number of rolls until all six numbers appear is $14.7$.

I also found that the expected number of rolls until $1$, $2$, and $3$ appear is $11$.
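Both of these values (and the $9$ above) can be reproduced exactly by inclusion-exclusion over waiting times (my own sketch; this is just a cross-check, not the method used above): for a nonempty set $T$ of target faces, the wait until the first roll landing in $T$ is geometric with mean $\frac{6}{|T|}$, and $E[\max] = \sum_{\emptyset \ne T \subseteq S} (-1)^{|T|+1} \frac{6}{|T|}$.

```python
from itertools import combinations
from fractions import Fraction

def expected_rolls(targets):
    # Inclusion-exclusion: E[max wait] over the target faces, using exact
    # arithmetic; the wait for the first hit of a subset T has mean 6/|T|.
    targets = list(targets)
    total = Fraction(0)
    for size in range(1, len(targets) + 1):
        for T in combinations(targets, size):
            total += (-1) ** (size + 1) * Fraction(6, size)
    return total

print(expected_rolls([1, 2]))              # 9
print(expected_rolls([1, 2, 3]))           # 11
print(float(expected_rolls(range(1, 7))))  # 14.7
```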