Checking whether all products of a set of matrices eventually equal zero


19

I am interested in the following problem: given integer matrices $A_1, A_2, \ldots, A_k$, decide whether every infinite product of these matrices eventually equals the zero matrix.

This means exactly what you think it does: we say that the set of matrices $\{A_1, \ldots, A_k\}$ has the property that all of its products eventually equal zero if there does not exist an infinite sequence $i_1, i_2, i_3, \ldots$, all in $\{1, \ldots, k\}$, such that

$$A_{i_1} A_{i_2} \cdots A_{i_l} \neq 0$$

for all $l$.

Has the problem of deciding whether every product eventually equals zero been studied before? Is it decidable?

It seems like it could be related to matrix mortality, which is undecidable, but I don't see a clear connection.
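To make the property concrete, here is a naive brute-force sketch in Python with numpy (entirely my own illustration; the function name and the cutoff parameter are made up). It expands all nonzero products level by level and reports success if some length kills every product; it is exponential in the cutoff, so it only illustrates the definition rather than giving an efficient algorithm:

```python
# Brute-force illustration of the property: expand all nonzero products
# level by level; if at some length every product has become zero, the
# property holds. Inconclusive (None) if nonzero products survive the cutoff.
import numpy as np

def all_products_eventually_zero(mats, max_len=12):
    n = mats[0].shape[0]
    frontier = [np.eye(n, dtype=object)]   # object dtype: exact integer arithmetic
    for _ in range(max_len):
        frontier = [P @ A for P in frontier for A in mats]
        frontier = [P for P in frontier if np.any(P != 0)]  # keep nonzero products
        if not frontier:
            return True        # every product of this length is zero
    return None                # inconclusive at the cutoff
```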


You need some kind of convergence property for the set of matrices to make sure the infinite product is defined.
András Salamon

Are you working over a finite field, or over the integers with unbounded growth? The case k = 1 is interesting in its own right. With integer entries from -100 to 100 in a 5×5 matrix, what is the highest power you can reach before it becomes zero?
Chad Brewbaker

2
@YuvalFilmus - I believe it is different from mortality. Let the dimension of the matrices be 1, so that we just have numbers, and let $A_0 = 0$, $A_1 = 1$. Mortal? Yes, because $A_0 = 0$. Every product eventually zero? No: not the product $1 \cdot 1 \cdot 1 \cdots$. If, on the other hand, $A_0 = 0$, $A_1 = 0$, then you have a set that is both mortal and has every product eventually equal to zero.
Robinson

1
@ChadBrewbaker - I was thinking that the entries of the matrices are just integers. I suppose $k = 1$ is interesting from the following point of view: how many operations do you need to check whether the matrix is nilpotent? Note that if $A$ is nilpotent, then it is easy to see that $A^n = 0$, where $n$ is the dimension of $A$, so presumably you could solve it by squaring the matrix $\log n$ times. I have no idea whether this is the best you can do.
Robinson
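A minimal sketch of the repeated-squaring test described in the comment above (Python with numpy; the function name is mine): since a nilpotent $n \times n$ matrix already satisfies $A^n = 0$, about $\log_2 n$ squarings suffice:

```python
# Repeated-squaring nilpotency test: A is nilpotent iff A^n = 0, and
# A^(2^t) with 2^t >= n can be computed with ~log2(n) squarings.
import numpy as np

def is_nilpotent(A):
    n = A.shape[0]
    B = A.astype(object)   # entries can grow quickly, so use exact Python ints
    power = 1
    while power < n:       # after t squarings, B = A^(2^t)
        B = B @ B
        power *= 2
    # Here power >= n, so B = 0 holds iff A is nilpotent.
    return not np.any(B != 0)
```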

1
Interestingly, this just appeared: arxiv.org/abs/1306.0729. Instead of asking whether all products are eventually zero, they ask whether some product is eventually positive. They show that the problem is NP-hard (or at least that's what I gather from the abstract).
Joshua Grochow

Answers:


17

Your question is equivalent to asking whether $A_1, \ldots, A_k$ generate a nilpotent algebra, which in turn is equivalent to every element of a basis of that algebra being nilpotent. Hence it is not only decidable, but decidable in $\tilde{O}(n^{2\omega})$ time, where $\omega$ is the exponent of matrix multiplication.

Let A be the associative algebra generated by the Ai: that is, take all linear combinations of the Ai and all finite products thereof. A is called nilpotent if there is some N such that every product of N elements of A is zero.

First, let's see why your condition implies that A is nilpotent. This follows from Konig's Lemma (compactness): every string of length n over the alphabet $\{1, \ldots, k\}$ corresponds to a product of $A_1, \ldots, A_k$ of length n in an obvious manner. Consider the infinite k-ary rooted tree, whose nodes are naturally in bijective correspondence with strings over $\{1, \ldots, k\}$. Consider the sub-tree T consisting of those nodes where the corresponding product of the Ai is nonzero. Konig's Lemma says that if T is infinite, then it has an infinite path (exactly violating your property), hence T is finite. We can then take N to be the maximum length of any string in T. So your property implies that A is nilpotent.

The converse is also true, since every element of A is a linear combination of products of the Ai.

Next, note that A is a subalgebra of n×n matrices, and hence is finite-dimensional.

Finally: a finite-dimensional associative algebra in characteristic zero has a basis of nilpotent elements (commuting or not - this is the part that contradicts Yuval's answer) iff it is nilpotent (see, e.g., here).

Thus, to solve your problem, find a basis for the associative algebra generated by the $A_i$ (by the linear-algebra version of breadth-first search) and check that every matrix in the basis is nilpotent. The upper bound $\tilde{O}(n^{2\omega})$ comes from solving a system of linear equations in $n^2$ variables in the breadth-first search. As $\dim A \le n^2$, the BFS can't last very long, and because these are $n \times n$ matrices, to check whether a matrix $A$ is nilpotent one need only check that $A^n = 0$.
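For concreteness, here is a rough sketch of this procedure in Python with numpy (my own illustration; the function names are made up). For brevity the rank tests below use floating point, whereas an actual decision procedure would do the linear algebra exactly over $\mathbb{Q}$:

```python
# Linear-algebra search: grow a basis of the algebra generated by the
# given matrices, then check that every basis element is nilpotent.
import numpy as np

def in_span(vecs, v):
    """Is the flat vector v in the linear span of the list vecs?"""
    if not vecs:
        return not np.any(np.abs(v) > 1e-9)
    M = np.stack(vecs)
    return np.linalg.matrix_rank(np.vstack([M, v])) == np.linalg.matrix_rank(M)

def algebra_is_nilpotent(mats):
    n = mats[0].shape[0]
    gens = [A.astype(float) for A in mats]
    basis, queue = [], list(gens)
    while queue:                       # dim of the algebra is <= n^2
        B = queue.pop()
        if in_span([C.ravel() for C in basis], B.ravel()):
            continue
        basis.append(B)                # new basis element: enqueue its products
        queue.extend([B @ A for A in gens] + [A @ B for A in gens])
    # The algebra is nilpotent iff every basis matrix is nilpotent (B^n = 0).
    return all(np.allclose(np.linalg.matrix_power(B, n), 0) for B in basis)
```

A matrix is added to the basis only if it enlarges the span, so at most $n^2$ matrices are ever added and the search terminates.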


2
Do you think there is a way to show this without using any choice principles (even one as weak as König's Lemma, which is equivalent to $\mathrm{AC}_\omega$)?
András Salamon

2
@Andras: I'd say that's a question for Chris Conidis. He's studied questions like that in (computable) reverse mathematics. I'll ask him and point him here.
Joshua Grochow

1
@robinson: 1) Yes, the problem is decidable, in fact in $O(n^{2\omega})$ time, where $\omega$ is the exponent of matrix multiplication. This comes from solving systems of linear equations over $\mathbb{Q}$ when doing the linear-algebra breadth-first search. 2) Yes, the usual notion of basis when viewing the matrices as vectors in $\mathbb{Q}^{n^2}$ (or over $\mathbb{R}$ or $\mathbb{C}$).
Joshua Grochow

1
You start with a basis $\mathcal{B}$ of $\mathcal{A}$. Now you try to find matrices $A \in \mathcal{A}$ and $B \in \mathcal{B}$ such that $AB$ or $BA$ is not in the span of $\mathcal{B}$. If you succeed, add the product to $\mathcal{B}$ and continue. Otherwise, multiplying any matrix in the span of $\mathcal{B}$ by any finite product of matrices in $\mathcal{A}$ always ends up in the span of $\mathcal{B}$. Since the dimension of the algebra is bounded, the process terminates (in at most $n^2$ steps).
Yuval Filmus

1
@robinson: No. If the algebra is nilpotent, then every element of the algebra is nilpotent. So if you find any non-nilpotent element then the algebra is not nilpotent (and then there are infinite products of your matrices which are never zero).
Joshua Grochow

6

I got a poly-time algorithm for this (rather trivial) problem, i.e. for checking whether the joint spectral radius (JSR) is zero or not, in 1995: http://en.wikipedia.org/wiki/Joint_spectral_radius

The story behind the algorithm is roughly as follows: Blondel and Tsitsiklis wrongly stated that for Boolean matrices checking whether JSR < 1 is NP-hard. For any set of integer matrices, the JSR is either zero or greater than or equal to 1. So the counterexample to their statement was my algorithm (see the errata to their paper). The main moral: consult Wikipedia first!


5

The question you are asking is exactly equivalent to deciding whether the joint spectral radius (JSR) of the set of matrices is strictly less than one. Decidability of this question has remained open for quite some time now. (In control theory, this is equivalent to decidability of stability of switched linear systems under arbitrary switching.)

The following variant of your question is known to be undecidable: Given a finite set of square matrices, decide whether all products remain bounded; see here.

The undecidability of the above remains valid even if you have only 2 matrices of size 47x47: see here.

In the JSR language, the question of testing "is JSR $\le 1$?" is undecidable (see references above), but decidability of testing "is JSR $< 1$?" is open. The latter question is related to the so-called "rational finiteness conjecture": if the rational finiteness conjecture is true, then the question you are asking is decidable.

Finally, unless P=NP, the JSR is not approximable in polynomial time (in the precise sense defined in this paper).

As a result, one of the answers above which claims an efficient algorithm must be false.

On the positive side, there are several algorithms (e.g. based on semidefinite programming) for approximating the JSR. The different algorithms come with different performance guarantees. See e.g. the following (shamelessly by myself and my colleagues - but see also references therein).

In several special cases, the question you are asking is polynomial time decidable. For example, when the matrices are symmetric, or rank one, or if they commute.

Finally, a great book on the subject is the following.


Please read the formal statement of the question I asked - it is not equivalent to deciding whether the JSR is strictly less than one. You are, perhaps, misled by the title of the question. In short, I'm asking about every product equaling zero in finite time, rather than in an asymptotic sense.
robinson

2
Then the question you are asking is much simpler. The following are equivalent: (i) the condition you define; (ii) all finite products are nilpotent; (iii) the JSR is 0; (iv) all products of length $n$ are zero ($n$ is the dimension; this is independent of the number of matrices $k$). The last condition obviously implies decidability, and in fact you can check the condition in polynomial time. See Section 2.3.1 of the book by Jungers linked at the end of my post. My apologies for thinking that you meant the asymptotic version. (I was misled by the phrase "all products eventually equal zero".)
Amir Ali Ahmadi

In which case, @AmirAliAhmadi, doesn't the answer by Joshua Grochow cover it?
Suresh Venkat

2
It seems to me that it does, with a different algorithm than what I have in mind. (Again, my apologies for thinking that the question was "do all products converge to zero" (i.e., JSR<1?) whose decidability is open.) There are a few differences though with the answer of Joshua. (1) In the equivalence of (i)-(iv) in my previous comment, I don't think Konig's Lemma needs to be used. (2) I don't understand why he is taking linear combinations of the matrices. (3) I copy below a simple alternative algorithm from Section 2.3.1 of the book by Jungers, attributed there to Leonid Gurvits.
Amir Ali Ahmadi

4
[continued from above...] All we need to check is whether all products of length $n$ are zero, but there are $k^n$ such matrices. To avoid this, define the following matrices iteratively: $X_0 = I$, $X_j = \sum_{i=1}^{k} A_i^T X_{j-1} A_i$. Then one has $X_n = \sum_{A \text{ a product of length } n} A^T A$. This matrix can be computed with $kn$ matrix multiplications, and it is zero if and only if all products of length $n$ are zero.
Amir Ali Ahmadi
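The comment above translates almost directly into code; here is a minimal numpy sketch (the function name is mine), using exact integer (object) arithmetic since the entries of the $X_j$ can grow quickly:

```python
# Iteration from the comment above: X_0 = I, X_j = sum_i A_i^T X_{j-1} A_i.
# Then X_n = sum over all products P of length n of P^T P, which is zero
# iff every product of length n is zero (each P^T P is PSD).
import numpy as np

def all_length_n_products_zero(mats):
    n = mats[0].shape[0]
    mats = [A.astype(object) for A in mats]   # exact integer arithmetic
    X = np.eye(n, dtype=object)
    for _ in range(n):                        # n rounds of k multiplications
        X = sum(A.T @ X @ A for A in mats)
    return not np.any(X != 0)
```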

0

Edit: This answer is unfortunately incorrect. The error is highlighted below. The argument does work if we are allowed to transpose the matrices.

We start by proving a lemma.

Lemma. Let $A$ be an $n \times n$ matrix and let $N$ be the $n \times n$ matrix with ones on the superdiagonal. If $AN^t$ and $N^tA$ are nilpotent for all $t \ge 0$, then $A = 0$. Correct conclusion: $A$ is upper triangular with zeroes on the diagonal. (The original conclusion is recovered if we are also allowed to multiply by powers of the transpose of $N$.)

Proof. Suppose for example that $n = 3$, and write

$$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}, \qquad N = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$

We start by calculating $AN^2$:

$$AN^2 = \begin{pmatrix} 0 & 0 & a \\ 0 & 0 & d \\ 0 & 0 & g \end{pmatrix}.$$

This matrix is in triangular form, and so if $AN^2$ is nilpotent then $g = 0$. Continue with $AN^1$:

$$AN^1 = \begin{pmatrix} 0 & a & b \\ 0 & d & e \\ 0 & g & h \end{pmatrix} = \begin{pmatrix} 0 & a & b \\ 0 & d & e \\ 0 & 0 & h \end{pmatrix}.$$

Again the matrix is in triangular form, and so if $AN^1$ is nilpotent then $d = h = 0$. Continuing,

$$AN^0 = \begin{pmatrix} a & b & c \\ 0 & e & f \\ 0 & 0 & i \end{pmatrix}.$$

As before, we conclude that $a = e = i = 0$, and so $A$ is upper triangular with zeroes on the diagonal.

If we now consider $N^2A$, $N^1A$, $N^0A$ instead, the original argument concluded that $A$ is lower triangular with zeroes on the diagonal, and therefore $A = 0$. In fact, as pointed out in the comments, we don't get anything new from considering $N^tA$, which is where the proof breaks down.

This is how the proof would go if the original version of the lemma were correct. Now back to the problem at hand. Say that the matrices $A_1, \ldots, A_k$ satisfy property P if for every infinite sequence $i_1, i_2, \ldots \in [k]$ we have $A_{i_1} \cdots A_{i_m} = 0$ for some $m$. If one of the matrices $A_i$ is not nilpotent then property P clearly fails, so suppose that all the matrices are nilpotent. If all matrices commute then property P clearly holds, so suppose that $A_1 A_2 \neq A_2 A_1$. Change basis so that $A_1$ is in Jordan normal form, and let the corresponding decomposition of the vector space be $V_1 \oplus \cdots \oplus V_t$. Let $V_i$ be a subspace on which $A_1 A_2 \neq A_2 A_1$; note that $\dim V_i > 1$ since $0$ commutes with everything. Restricted to $V_i$, $A_1 = N$ and $A_2 \neq 0$. Therefore the lemma implies that for some $t \ge 0$, either $A_2 A_1^t$ or $A_1^t A_2$ is not nilpotent, and therefore property P clearly fails.

Summarizing, property P holds iff all matrices are nilpotent and all of them commute.


4
The last sentence of your lemma's proof is not correct. $N^2A$ nilpotent implies $g = 0$, $N^1A$ nilpotent gives $d = h = 0$, and $N^0A$ nilpotent gives $a = e = i = 0$. So we only conclude that $A$ is upper triangular with zeros on the diagonal, not that $A$ is diagonal (and hence zero).
Joshua Grochow

Indeed, this answer is not correct. If no one else does, I'll post a counterexample to both the lemma and the final assertion when I get home later today.
robinson

5
As usual, it is when something is claimed but not proved that the proof fails. Oh well...
Yuval Filmus

1
So the example I had in mind was:
$$A_0 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
One can verify that every product of sufficient length of these two matrices is zero, but they don't commute, and the second one is not zero.
robinson
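For what it's worth, a quick numpy check (my own addition, not part of the original comment) confirms both claims: the two matrices do not commute, yet all $2^3 = 8$ products of length 3 vanish, since both matrices are strictly upper triangular:

```python
# Verify the counterexample: A0 and A1 don't commute, but every product
# of length 3 is the zero matrix.
import itertools
import numpy as np

A0 = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
A1 = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])

print(np.array_equal(A0 @ A1, A1 @ A0))     # False: they don't commute
print(all(not np.any(P @ Q @ R)             # True: all length-3 products are 0
          for P, Q, R in itertools.product([A0, A1], repeat=3)))
```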
Licensed under cc by-sa 3.0 with attribution required.