Wikipedia:Reference desk/Archives/Mathematics/2011 May 28



May 28

Inequality

How do you show that ? Widener (talk) 02:18, 28 May 2011 (UTC)

This is the same question you asked on 21 May but translated by (1, 1).--RDBury (talk) 02:40, 28 May 2011 (UTC)
So it is (except it's been multiplied by -1 as well). Thanks. Widener (talk) 03:33, 28 May 2011 (UTC)

Roots vs Eigenvalues

When you're talking about a differential equation, can you refer to the roots of its characteristic equation as the equation's eigenvalues? — Preceding unsigned comment added by 130.102.78.164 (talk) 02:55, 28 May 2011 (UTC)

Assuming you are talking about a linear differential equation, then yes - see eigenfunction. Gandalf61 (talk) 10:02, 28 May 2011 (UTC)
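For a concrete illustration (the particular equation below is just an example, not taken from the question):

$$y'' - 3y' + 2y = 0: \quad y = e^{\lambda x} \implies \lambda^2 - 3\lambda + 2 = 0 \implies \lambda \in \{1, 2\}.$$

Since $\frac{d}{dx} e^{\lambda x} = \lambda e^{\lambda x}$, the solutions $e^{x}$ and $e^{2x}$ are eigenfunctions of the differentiation operator, and the characteristic roots 1 and 2 are the corresponding eigenvalues.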

DCPO + Every finite subset has a join = CPO

Hello all,

We define a directed set in a poset P to be a subset D such that every pair of elements of D has an upper bound in D. We say P is directed complete if each directed subset of P has a least upper bound in P. I wish to show that a poset is complete iff it is directed complete and has suprema for all its finite subsets. (Sup = least upper bound.)

One direction is obvious: it is immediately clear from the definitions that if a poset is complete then every subset, including every directed or finite subset, has a least upper bound. However, I can't make any headway on the other direction. Since an arbitrary subset certainly need not be directed, I tried taking the least upper bound of each pair of elements (a pair being a finite subset) and adding these least upper bounds to get a directed set: but then you may be lacking upper bounds between the elements you have just added, and then you need to keep adding more and more elements, so I don't think that line of reasoning is going anywhere. Could anyone help or link me to a proof?

Thanks! Spamalert101 (talk) 08:19, 28 May 2011 (UTC)

You were on the right track. Starting with an arbitrary set, you want to use the existence of finite joins to turn it into a directed set, as you thought. Instead of just adding the least upper bounds for pairs, add the least upper bounds for every finite subset. Then you can argue that the resulting set is directed (alternatively, do it for pairs and repeat countably many times, then take a union, but you might find that trickier). Then just argue that the least upper bound of the resulting directed set is a least upper bound of the original set.--130.195.2.100 (talk) 00:24, 30 May 2011 (UTC)
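Spelling that construction out as a sketch (under the usual convention that directed sets are nonempty): given a nonempty subset $S \subseteq P$, put

$$D = \{\, \textstyle\bigvee F : F \subseteq S,\ F \text{ finite and nonempty} \,\}.$$

For finite nonempty $F, G \subseteq S$, the element $\bigvee (F \cup G)$ lies in $D$ and is an upper bound of both $\bigvee F$ and $\bigvee G$, so $D$ is directed and $\bigvee D$ exists by directed completeness. Each $s \in S$ equals $\bigvee \{s\} \in D$, so $\bigvee D$ is an upper bound of $S$; conversely any upper bound of $S$ bounds every $\bigvee F$ and hence bounds $\bigvee D$. Therefore $\bigvee S = \bigvee D$. (For $S = \emptyset$ one needs the empty join, i.e. a least element, which is the sup of the empty finite subset.)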

topological graph theory

In "Topological Graph Theory" by J.L.Gross it was given that" if one cuts a hole in a sphere ,the resulting boundary is homeomorphic to the circle ,just as is the boundary of a mobius band.The surface obtained by attaching a mobius band along its boundary to the hole in the spere ,therby closing off the hole,is called a 'projective plane'."I don't understand this .My idea about projective plane is the set of all lines of R3 through origin.Then how this is possible?also while placing the mobius band on the hole whether it should be placed in the form of mobius band itself or just in the form of a rectangular strip of paper (obained by stretching out the mobius band?If possible please help me by giving its geometrical shapeMathematics2011 (talk) 08:36, 28 May 2011 (UTC)[User:Mathematics2011|Mathematics2011]] (talk) 08:27, 28 May 2011 (UTC)[reply]

If you look at Fundamental polygon you'll see a projective plane as a square where you have to identify sides.
[Figure: Projective plane]
If you don't bother sticking the top and bottom together you get a Möbius strip. The top and bottom then form a circle which can be capped with a disc instead of being joined directly - it makes no difference, and a sphere with a hole in it is just like a disc. You can't actually get a projective plane in our 3D space without the surface cutting through itself, but this should hopefully show that the construction is meaningful. Dmcq (talk) 10:54, 28 May 2011 (UTC)
You can see the set of all lines in R3 through the origin is the same as that square by considering that every line cuts a sphere around the origin in two opposite points. So it is just like the sphere cut in two, except you have to identify opposite points at the edge. Flattening the hemisphere and squaring the edge gives that diagram. Dmcq (talk) 11:03, 28 May 2011 (UTC)
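In symbols, one standard way to write the identifications just described:

$$\mathbb{RP}^2 \;\cong\; S^2/(x \sim -x) \;\cong\; D^2/(x \sim -x \text{ on } \partial D^2),$$

i.e. lines through the origin in R3 correspond to pairs of antipodal points on the sphere, which in turn give a closed disc (a hemisphere) with antipodal boundary points glued; flattening that disc into a square yields the fundamental polygon above.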

Stochastic matrix question

I am aware of the existence of singular stochastic matrices, such as $\begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix}$. Whilst I am aware that stochastic matrices are typically used to make future predictions, intuitively, it seems that they could be used to "guess" what past states might have been, by taking negative powers of the matrix corresponding to time steps into the past (of course in conjunction with a probability vector). If the stochastic matrix in question is singular, it is impossible to do this. So, my question: is it correct to interpret singular stochastic matrices as ones that say nothing about what past states might have been? I emphasize "might" as I'm aware that the present state of a stochastic system does not generally allow us to say anything about the future, or past, with certainty.--Leon (talk) 11:20, 28 May 2011 (UTC)

This is true if the matrix is of rank 1. Then all the rows of the matrix are the same, so any past state will lead to the same distribution for the current state, so the likelihood ratio is 1:1 and there's no Bayesian inference to be made.
But if the matrix is merely singular, take for example $\begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. For this process we know that if the current state is 3 then the last state is 3, and if the current state is 2 then the last state is 1 or 2.
I'm sure there's something more general to be said about the ability to infer backwards based on the transition matrix's rank, but I don't know what it is. -- Meni Rosenfeld (talk) 18:04, 28 May 2011 (UTC)
On the topic of your example, there is no way that the current state could be 1, so the question of inferring backwards from state 1 never arises with such a matrix. So can we say, at least, that if a transition matrix is singular, it won't always be possible to infer backwards, whilst if it is invertible, it will? And can this statement be made stronger/more formal?--Leon (talk) 21:46, 28 May 2011 (UTC)
Well, if the matrix is $\begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}$, then you're able to infer something backwards whatever the current state. But, given that the current state is 1 or 2, you can't infer anything more by knowing which one it is. -- Meni Rosenfeld (talk) 04:35, 29 May 2011 (UTC)
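To make the backward-inference idea concrete, here is a minimal sketch in Python (the uniform prior and the helper name backward_posterior are illustrative assumptions, and the matrix is the three-state example discussed above): by Bayes' rule, the posterior over the previous state is proportional to the prior times the current state's column of the transition matrix.

```python
import numpy as np

# Row-stochastic transition matrix: P[i, j] = Pr(next = j | current = i).
# Three-state example from the discussion (states 1, 2, 3 -> indices 0, 1, 2).
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

def backward_posterior(P, current, prior=None):
    """Posterior distribution of the previous state, given the current state."""
    n = P.shape[0]
    prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior, dtype=float)
    # Bayes' rule: Pr(prev = i | current) is proportional to prior[i] * P[i, current].
    unnormalized = prior * P[:, current]
    total = unnormalized.sum()
    if total == 0.0:
        raise ValueError("current state is unreachable under this prior")
    return unnormalized / total

print(backward_posterior(P, 2))  # current state 3: previous state was certainly 3
print(backward_posterior(P, 1))  # current state 2: previous state 1 or 2, equally likely
```

Note that the matrix is never inverted, so singularity is no obstacle to this kind of inference; it only limits how much the current state can distinguish between possible past states.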

Another matrix question(s)

First, if a matrix satisfies $A^2 = A$, do all its eigenvalues satisfy $\lambda^2 = \lambda$? For diagonalizable matrices it's easy to demonstrate this, but I'm curious if it's true more generally.

Second, if a matrix is non-diagonalizable, it's plain that the converse does not hold, i.e. if a matrix has eigenvalues satisfying $\lambda^2 = \lambda$ it does not follow that $A^2 = A$. In what circumstances might the equation $A^2 = A$ be satisfied for a non-diagonalizable matrix?--Leon (talk) 13:16, 28 May 2011 (UTC)

For your first question, if v is an eigenvector with eigenvalue $\lambda$, then $\lambda^2 v = A^2 v = A v = \lambda v$. Since v is nonzero, we deduce $\lambda^2 = \lambda$. Spamalert101 (talk) 14:29, 28 May 2011 (UTC)
But if a matrix isn't diagonalizable, it doesn't have a linearly-independent basis of eigenvectors, which makes it less obvious that it should be true for defective matrices. Where am I going wrong?--Leon (talk) 14:44, 28 May 2011 (UTC)
In what sense are you "going wrong"? Spamalert101 (talk) 14:59, 28 May 2011 (UTC)
Sorry, I worked it out: every eigenvalue can be associated with some eigenvector, and that the set of eigenvectors need not be linearly-independent is immaterial. Thus, your argument answers the first question.--Leon (talk) 15:36, 28 May 2011 (UTC)
If $A^2 = A$, any vector w can be written as $Aw + (w - Aw)$. The first part is in the eigenspace of 1, since $A(Aw) = A^2 w = Aw$, and the second is in the eigenspace of 0, since $A(w - Aw) = Aw - A^2 w = 0$. So the eigenspaces generate the entire space and the matrix is diagonalizable; therefore the conditions for the second question can't happen.--RDBury (talk) 15:41, 28 May 2011 (UTC)
I'm not sure I understand. The matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ has two eigenvalues, both 1, but is not diagonalizable, and $A^2 \neq A$.--Leon (talk) 16:24, 28 May 2011 (UTC)

A matrix is diagonalisable iff its minimal polynomial has no repeated roots. In this case the minimal polynomial divides $X^2 - X$, which itself has no repeated roots, so the minimal polynomial also has no repeated roots: in fact it must be $X$, $X - 1$ or $X^2 - X$. However, that's not needed for your question. If a matrix satisfies a polynomial $f(X)$, then so does each of its eigenvalues. Sergeant Cribb (talk) 16:49, 28 May 2011 (UTC)

But the converse doesn't follow. The matrix above does not satisfy the aforementioned polynomial, even though its eigenvalues do. So, can a non-diagonalizable matrix EVER satisfy the polynomial given above?--Leon (talk) 22:27, 28 May 2011 (UTC)
It's a bit more complicated than that. Diagonal matrices which repeat a diagonal entry are trivially diagonalizable, but they have repeated roots in the minimal polynomial. Repeated roots are allowed, but the degree of each root must equal the dimension of its eigenspace. Staecker (talk) 16:54, 28 May 2011 (UTC)
Not so. The identity, for example, has minimal polynomial X-1. No repeated roots. Were you thinking of the characteristic polynomial? Reference: [1] Sergeant Cribb (talk) 17:16, 28 May 2011 (UTC)
Yes I was thinking of the characteristic polynomial. My mistake - as you were, Sergeant. Staecker (talk) 22:21, 28 May 2011 (UTC)
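To illustrate the distinction between the two polynomials with the matrices from this thread:

$$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}: \quad \chi(X) = (X-1)^2, \quad m(X) = X - 1 \quad \text{(diagonalizable)};$$

$$J = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}: \quad \chi(X) = m(X) = (X-1)^2 \quad \text{(not diagonalizable, and } J^2 \neq J\text{)}.$$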