Diagonalization/fixed point lemma in logic vs computability theory


The fixed point/recursion theorem in computability theory and the diagonalization lemma in logic are really similar and the standard proofs of these theorems can be mapped in a one-to-one way (I tried to show this on this page and think I mostly succeeded). However, my gripe with the standard proofs is that they are too magical and don't reveal how they are discovered (see here for similar complaints).



Looking around, I've found two proofs that reveal better how one might actually discover these theorems. The problem is that these discovery methods seem to only work for one of the theorems and not the other, which really bugs me. Since the two theorems are so similar and their standard proofs are essentially the same, I have the sense that a "more intuitive" proof ought to also work for both theorems rather than just one at a time.



Here are the two proofs that seem more intuitive to me:



  • In this answer on MathOverflow, Linda Brown Westrick motivates the proof of the diagonalization lemma by first showing that there is no formula $D$ with the property that for all formulas $\varphi$, we have $D(\ulcorner \varphi \urcorner) \iff \varphi(\ulcorner \varphi \urcorner)$. This works in logic, but when we try to replicate the proof with partial recursive functions, the obvious candidate $d$ defined by $d(x) := \varphi_x(x)$ turns out to exist, so we can't diagonalize out of the class, thanks to the possibility of undefined values.


  • In "Diagonalization and the recursion theorem", James C. Owings, Jr. gives a proof of the recursion theorem in computability theory (see the first two pages of the paper). Taking inspiration, one might construct a matrix with entries $A_j(\ulcorner A_k \urcorner)$ ($j$th row, $k$th column), where $A_n$ is the $n$th formula. If the given formula is $F$ (for which we want to find a fixed point $A \iff F(\ulcorner A \urcorner)$), then we might try to define the mapping $\alpha$ by $\alpha(A_j(\ulcorner A_k \urcorner)) := F(\ulcorner A_j(\ulcorner A_k \urcorner) \urcorner)$. There are now some problems: (1) the images of the rows of the matrix under $\alpha$ aren't in the same form as the original rows, because the corner quotes "eat up" the whole input rather than being composed as in the partial recursive function setting, where $\varphi_{f(h(n))}$ can be written as $\varphi_{(f \circ h)(n)}$; (2) the method given in the Owings paper finds an exact fixed point, whereas in logic we only need a fixed point up to logical equivalence.
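To make the contrast in the first bullet concrete, here is a toy sketch (all names are illustrative, not standard; "programs" are modeled as Python source strings and `universal` stands in for a universal machine) showing that the diagonal $d(x) := \varphi_x(x)$ is itself just another program, so nothing analogous to the logic contradiction arises:

```python
# Toy model: "programs" are Python source strings defining a function f,
# and `universal` plays the role of a universal machine phi_e(n).
# All names here are illustrative assumptions, not standard notation.

def universal(e, n):
    """Run program e (a source string defining f) on input n."""
    env = {}
    exec(e, env)          # may raise or fail to halt -- that is "undefined"
    return env["f"](n)

# The diagonal d(x) = phi_x(x) is itself computable: it is just the
# program below, which runs its argument program on that program's own source.
diag = (
    "def f(e):\n"
    "    env = {}\n"
    "    exec(e, env)\n"
    "    return env['f'](e)\n"
)

# Unlike the logic case, applying diag causes no paradox: a program built
# to "negate the diagonal" would simply fail to halt rather than refute itself.
length = "def f(n):\n    return len(n)"
result = universal(diag, length)   # runs `length` on its own source string
assert result == len(length)
```

This is exactly the "undefined values" escape hatch: the would-be contradictory program is a perfectly good program, it just has no output at the critical argument.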


Is there some way to get one of the above proofs to work in the setting of the other theorem?










      Tags: logic, computability






      asked Mar 30 at 3:30









      riceissa


















