


Difference of two orthogonal projections is orthogonal projection


Premise: I have an $n \times q$ matrix $X$ and a $q \times a$ matrix $C$ with $n > q > a$.



I'm interested in the structure of the matrix
$$
M = X X^+ - X_0 X_0^+
$$

where the superscript $^+$ indicates the Moore–Penrose pseudoinverse and
$$
X_0 = X (I_q - C C^+).
$$



I assume that $X$ is of full column rank and therefore $X^+ = (X'X)^{-1}X'$ (where $'$ indicates the transpose).




Background: $X$ is the design matrix of a linear model, $C$ is a contrast, $X_0$ is a reduced design matrix, and $M$ occurs in the definition of standard test statistics.



$M$ is the difference of two orthogonal projection matrices, where the second projects onto a subspace of the subspace the first projects onto. This makes the difference an orthogonal projection matrix itself (symmetric and idempotent), which means it has a representation
$$
M = X_\Delta X_\Delta^+.
$$



Question: How do I obtain $X_\Delta$?




user1551 has correctly pointed out in an answer that $X_\Delta = M$ itself fulfills the equation. However, I'm looking for a "version" of $X$, meaning an $n \times q$ matrix of rank $a$.



My approach: I am guessing that
$$
X_\Delta = X - X_0 X_0^+ X,
$$

and this seems to be confirmed by numerical tests. But I am unable to come up with a proof, i.e. to show that
$$
(X - X_0 X_0^+ X) (X - X_0 X_0^+ X)^+ = X X^+ - X_0 X_0^+.
$$
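A minimal sketch of such a numerical check (assuming NumPy; the dimensions and random test matrices are arbitrary illustrative choices, not part of the actual model) could look like this:

```python
import numpy as np

# Illustrative dimensions with n > q > a (arbitrary choices for the check)
n, q, a = 20, 5, 2
rng = np.random.default_rng(0)
X = rng.standard_normal((n, q))   # full column rank with probability 1
C = rng.standard_normal((q, a))

X0 = X @ (np.eye(q) - C @ np.linalg.pinv(C))        # reduced design matrix X_0
M = X @ np.linalg.pinv(X) - X0 @ np.linalg.pinv(X0)  # difference of the two projectors

X_delta = X - X0 @ np.linalg.pinv(X0) @ X            # conjectured X_Delta
M_delta = X_delta @ np.linalg.pinv(X_delta)

print(np.allclose(M, M_delta))   # True (up to floating-point round-off)
```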



The problem is how to deal with the pseudoinverse of a difference. One can write
$$
X_\Delta = (I_n - X_0 X_0^+) X,
$$

and according to Wikipedia, when taking the pseudoinverse of a product in which one factor is an orthogonal projection, that projection can be redundantly multiplied onto the opposite side, meaning here
$$
X_\Delta^+ = [(I_n - X_0 X_0^+) X]^+ = [(I_n - X_0 X_0^+) X]^+ (I_n - X_0 X_0^+) = X_\Delta^+ (I_n - X_0 X_0^+),
$$

but that doesn't seem to help.



I can prove that $M$ is symmetric and idempotent, using the relations
$$
X X^+ X_0 = X_0
\quad \text{and} \quad
X_0 X_0^+ X X^+ = X_0 X_0^+,
$$

which derive from the definition of $X_0$ and the properties of the pseudoinverse. I can also show that
$$
X X_0^+ = X_0 X_0^+
$$

using the property of the pseudoinverse of a product involving an orthogonal projection (see above). But none of that helps either.
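For reference, that last identity can be spelled out directly from the quoted product rule: since $I_q - C C^+$ is an orthogonal projector and $X_0 = X (I_q - C C^+)$,
$$
X_0^+ = \left[X (I_q - C C^+)\right]^+ = (I_q - C C^+)\, X_0^+
\quad\Longrightarrow\quad
X X_0^+ = X (I_q - C C^+)\, X_0^+ = X_0 X_0^+.
$$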










Tags: linear-algebra, pseudoinverse, projection-matrices






asked Mar 29 at 21:27 by A. Donda (edited 2 days ago)




















1 Answer






With your choice of $X_\Delta$, $M$ is indeed equal to $X_\Delta X_\Delta^+$.

Proof. Let $P = I - C C^+$. Note that the column space of $M = X X^+ - (XP)(XP)^+$ is $\operatorname{col}(X) \cap \operatorname{col}(XP)^\perp$, while the column space of $X_\Delta X_\Delta^+$ is precisely the column space of $X_\Delta = \left[I - (XP)(XP)^+\right] X$.

Since $X_\Delta = X \left[I - P (XP)^+ X\right]$, $\operatorname{col}(X_\Delta) \subseteq \operatorname{col}(X)$. Also, since
$$
\begin{aligned}
(XP)^T X_\Delta
&= (X_\Delta^T\, XP)^T\\
&= \left[X^T \left(I - (XP)(XP)^+\right) XP\right]^T\\
&= \left[X^T \left(XP - (XP)(XP)^+ (XP)\right)\right]^T\\
&= \left[X^T \left(XP - XP\right)\right]^T = 0,
\end{aligned}
$$
we also have $\operatorname{col}(X_\Delta) \subseteq \operatorname{col}(XP)^\perp$. Thus $\operatorname{col}(X_\Delta) \subseteq \operatorname{col}(M)$.

We now show that the reverse inclusion is also true. Pick any $v \in \operatorname{col}(M) = \operatorname{col}(X) \cap \operatorname{col}(XP)^\perp$. Since $v \in \operatorname{col}(X)$, it can be written as $Xb$ for some vector $b$. Thus
$$
X_\Delta b = \left[I - (XP)(XP)^+\right] X b = v - (XP)(XP)^+ v.
$$
However, we also have $v \in \operatorname{col}(XP)^\perp$. Therefore $(XP)(XP)^+ v = 0$ and in turn $X_\Delta b = v$, meaning that $v \in \operatorname{col}(X_\Delta)$.

Thus $\operatorname{col}(X_\Delta X_\Delta^+) \equiv \operatorname{col}(X_\Delta) = \operatorname{col}(M)$. Hence $X_\Delta X_\Delta^+$ and $M$ must be equal, because they are orthogonal projections with identical column spaces.






answered 2 days ago by user1551 (edited 6 hours ago)












• Now it's clear. Thanks again! – A. Donda, 6 hours ago






• @A.Donda You are welcome. Note that if you need to verify the relation numerically, you need to use an SVD and set a threshold to cut off the small singular values; otherwise the verification will likely fail, because the calculation of the pseudoinverse is not numerically stable. – user1551, 6 hours ago
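A minimal sketch of such a thresholded pseudoinverse (the tolerance value is an arbitrary illustrative choice; NumPy's built-in `np.linalg.pinv` exposes the same cut-off through its `rcond` argument):

```python
import numpy as np

def pinv_truncated(A, tol=1e-10):
    # SVD-based pseudoinverse that zeroes out singular values below a relative threshold
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s.max()
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv[:, None] * U.T)

# Equivalent built-in cut-off: np.linalg.pinv(A, rcond=1e-10)
```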










