$L^2$ norm of a matrix: Is this statement true?
"... to apply for a visa" or "... and applied for a visa"?
Deal with toxic manager when you can't quit
For what reasons would an animal species NOT cross a *horizontal* land bridge?
Student Loan from years ago pops up and is taking my salary
How to read αἱμύλιος or when to aspirate
Word to describe a time interval
Do working physicists consider Newtonian mechanics to be "falsified"?
Word for: a synonym with a positive connotation?
Do warforged have souls?
Is every episode of "Where are my Pants?" identical?
Does Parliament need to approve the new Brexit delay to 31 October 2019?
Loose spokes after only a few rides
Did the new image of black hole confirm the general theory of relativity?
How do you keep chess fun when your opponent constantly beats you?
Example of compact Riemannian manifold with only one geodesic.
My body leaves; my core can stay
Identify 80s or 90s comics with ripped creatures (not dwarves)
Single author papers against my advisor's will?
Can withdrawing asylum be illegal?
US Healthcare consultation for visitors
How to type a long/em dash `—`
Do ℕ, mathbbN, BbbN, symbbN effectively differ, and is there a "canonical" specification of the naturals?
Homework question about an engine pulling a train
60's-70's movie: home appliances revolting against the owners
$L^2$ norm of a matrix: Is this statement true?
The 2019 Stack Overflow Developer Survey Results Are In
Announcing the arrival of Valued Associate #679: Cesar Manara
Planned maintenance scheduled April 17/18, 2019 at 00:00UTC (8:00pm US/Eastern)Is a matrix that is symmetric and has all positive eigenvalues always positive definite?How to calculate the square root of matrix $A+B$ perturbatively?Matrix with non-negative eigenvaluesIs spectral radius = operator norm for a positive valued matrix?Name of technique for determining the number of eigenvalues larger than some limitComputation of 2-norm using Eigenvalues vs. MatlabLimit of eigenvalues of a matrix sequence.Shifting eigenvalues via skew-symmetric product3x3 integer matrixProve this matrix inequality
I am following Nocedal and Wright's Numerical Optimization book for self-study. In the appendix of the book, the following matrix norms are defined:
They define the $\ell_2$ norm of the matrix $A$ as the largest eigenvalue of $(A^TA)^{1/2}$.
But I have also seen the following definition:
$\|A\|_2 = \max_{i=1,\dots,n} \sqrt{\lambda_i}$, where $\lambda_i$ is the $i$-th eigenvalue of the matrix $A^TA$.
(source: http://www.maths.lth.se/na/courses/FMN081/FMN081-06/lecture6.pdf)
I am not sure how these two definitions are equivalent. $A^TA$ is a symmetric positive semidefinite matrix, hence it has nonnegative eigenvalues. Assume that $\lambda_i$ is its largest eigenvalue. $A^TA$ has a unique positive semidefinite square root, whose eigenvalues are $\sqrt{\lambda_i}$. Considering only this square root, Nocedal's definition is correct. But there can be other square roots of $A^TA$ as well, for which a different eigenvalue is the largest, and if $A^TA$ has repeated eigenvalues, it has infinitely many square roots. Hence I think there is an ambiguity in Nocedal's definition. Am I missing something here? How can the book's definition be correct?
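For concreteness, here is a small NumPy check of the two formulas (my own sketch, not from the book; the random test matrix is arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))           # arbitrary rectangular test matrix

    AtA = A.T @ A
    w, V = np.linalg.eigh(AtA)                # eigenvalues/eigenvectors of the symmetric matrix A^T A
    w = np.clip(w, 0.0, None)                 # clip tiny negative rounding errors
    sqrt_AtA = V @ np.diag(np.sqrt(w)) @ V.T  # the unique positive semidefinite square root of A^T A

    norm_from_eigs = np.sqrt(w.max())                    # max_i sqrt(lambda_i(A^T A))
    norm_from_root = np.linalg.eigvalsh(sqrt_AtA).max()  # largest eigenvalue of (A^T A)^{1/2}
    norm_reference = np.linalg.norm(A, 2)                # library spectral norm, for comparison

    print(norm_from_eigs, norm_from_root, norm_reference)  # all three agree up to rounding

With the principal (positive semidefinite) square root, all three numbers coincide.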
linear-algebra matrices norm matrix-norms spectral-norm
edited Mar 31 at 7:39 by Rodrigo de Azevedo
asked Dec 18 '18 at 9:07 by Ufuk Can Bicici
The function $M \mapsto M^{1/2}$ over the set of positive symmetric matrices is usually implicitly defined such that $M^{1/2}$ is also positive, just as $x \mapsto \sqrt{x}$ is defined as the positive solution of $y^2 = x$.
– nicomezi, Dec 18 '18 at 9:19
I would not take those formulas as definitions of the $\ell_1$, $\ell_2$, and $\ell_\infty$ matrix norms. There is one single definition that works in all three cases: the operator norm induced by a vector norm $\|\cdot\|$ is defined by $\|A\| = \sup_{x \neq 0} \|Ax\| / \|x\|$. The formulas listed are then a consequence of this definition.
– littleO, Dec 18 '18 at 11:55
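As a quick illustration of the induced-norm definition (a sketch of my own, not part of the comment above), the supremum can be estimated by maximizing $\|Ax\|/\|x\|$ over many random directions and compared with the spectral norm:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 3))

    # Crude Monte Carlo estimate of sup_{x != 0} ||Ax|| / ||x||.
    xs = rng.standard_normal((3, 100_000))    # random nonzero directions as columns
    ratios = np.linalg.norm(A @ xs, axis=0) / np.linalg.norm(xs, axis=0)

    print(ratios.max())            # approaches the spectral norm from below
    print(np.linalg.norm(A, 2))    # sqrt of the largest eigenvalue of A^T A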
1 Answer
To avoid any ambiguity in the definition of the square root of a matrix, it is best to start from the $\ell^2$ norm of a matrix as the induced norm (operator norm) coming from the $\ell^2$ norms of the vector spaces. So in your case it seems that $A \in \mathbb{R}^{m \times n}$. Then, by the definition of the operator norm,
$$
\lVert A \rVert_2 = \lVert A \rVert_{\ell^2(\mathbb{R}^n) \to \ell^2(\mathbb{R}^m)}
= \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\lVert A x \rVert_{\ell^2(\mathbb{R}^m)}}{\lVert x \rVert_{\ell^2(\mathbb{R}^n)}} .
$$
By squaring and expanding the norm into the $\ell^2$ scalar product, one arrives at the Rayleigh quotient of $A^T A$:
$$
\lVert A \rVert_2^2 = \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\lVert A x \rVert_{\ell^2(\mathbb{R}^m)}^2}{\lVert x \rVert_{\ell^2(\mathbb{R}^n)}^2} = \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\langle x, A^T A x \rangle_{\ell^2(\mathbb{R}^n)}}{\langle x, x \rangle_{\ell^2(\mathbb{R}^n)}} = \lambda_{\max}(A^T A) .
$$
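This identity can also be checked numerically; the following is a sketch (my own addition, not part of the answer) using power iteration on $A^T A$, whose Rayleigh quotient converges to $\lambda_{\max}(A^T A)$:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 4))
    AtA = A.T @ A

    # Power iteration on A^T A; the Rayleigh quotient tends to lambda_max(A^T A).
    x = rng.standard_normal(4)
    for _ in range(200):
        x = AtA @ x
        x /= np.linalg.norm(x)
    rayleigh = x @ AtA @ x             # <x, A^T A x> with ||x|| = 1

    print(rayleigh)                    # ~ lambda_max(A^T A)
    print(np.linalg.norm(A, 2) ** 2)   # ||A||_2^2, agrees up to rounding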
edited Dec 18 '18 at 13:04 by littleO
answered Dec 18 '18 at 11:37 by André Schlichting