
What is the purpose of regime bits in posit encoding?




Why do we need regime bits in posits?



posit encoding:



[image: posit encoding bit-field diagram — sign, regime, exponent, fraction]










Tags: floating-point

asked Mar 30 at 3:54 by kevin998x (edited Mar 30 at 4:26 by YuiTo Cheng)




















          1 Answer

The key is the sentence about tapered accuracy. Standard floating point allocates a fixed number of bits to the exponent and the mantissa. Every float in range is represented with the same fractional accuracy. Back when floats were $32$ bits, one standard was one bit for the sign, eight bits for the exponent, and $23$ bits for the mantissa, so the fractional accuracy was about $2^{-23} \approx 10^{-7}$. For IEEE $64$-bit floats there are $52$ bits in the mantissa, so the fractional accuracy is about $2^{-52} \approx 2 \cdot 10^{-16}$.



Tapered accuracy is essentially data compression on the exponent. Exponents near zero are more common than those near the ends of the range, so you use short bit strings to represent small exponents at the price of longer strings for large exponents. That leaves more bits for the mantissa when the exponent is small and fewer when it is large. The regime bits are one implementation of this compression, which the author claims can be (almost) as fast as standard floating point. If small exponents can be represented with only six bits instead of eleven, you gain five more bits of mantissa accuracy whenever the exponent is in that range.
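To make this concrete, here is a rough Python sketch of how the regime field works — an illustration under stated assumptions, not a full posit decoder: it assumes the sign bit has already been stripped, ignores the zero and NaR special cases, and fixes the exponent width at `es = 2`. The regime is a run of identical bits terminated by the first opposite bit; a longer run encodes a larger-magnitude scale and leaves fewer bits for the fraction.

```python
def split_posit_fields(bits: str, es: int = 2):
    """Split a posit bit string (sign already stripped) into
    (regime_value, exponent_bits, fraction_bits).

    Sketch for illustration only: no zero/NaR handling, assumes
    the string is long enough to hold the full exponent field.
    """
    # Regime: a run of identical bits ended by the opposite bit.
    lead = bits[0]
    run = 1
    while run < len(bits) and bits[run] == lead:
        run += 1
    # A run of m ones encodes regime m-1; a run of m zeros encodes -m.
    regime = run - 1 if lead == '1' else -run
    rest = bits[run + 1:]        # skip the terminating opposite bit
    exp_bits = rest[:es]         # fixed-width exponent field
    frac_bits = rest[es:]        # whatever remains is the fraction
    # Overall scale is 2**(regime * 2**es + exponent value).
    return regime, exp_bits, frac_bits

# Same 12-bit width, very different fraction lengths:
print(split_posit_fields('001110100101'))   # (-2, '11', '0100101') -> 7 fraction bits
print(split_posit_fields('111111001101'))   # (5, '01', '101')      -> 3 fraction bits
```

The two calls use the same total width, but the short regime run leaves seven fraction bits while the long run leaves only three — exactly the tapering described above.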






– kevin998x (Mar 30 at 7:50): I still do not understand how regime bits give different accuracy for small and large exponents. So the number of regime bits depends on the exponent?










– Ross Millikan (Mar 30 at 23:38): I didn't follow through how the bits are used. In standard $64$-bit floating point there are $1$ sign bit, $11$ exponent bits, and $52$ mantissa bits. That gives a fractional accuracy of about $2^{-52}$ throughout the range and an exponent range of $\pm 1023$. We could define the exponent to be a $0$ followed by four bits for exponents close to zero. This would handle exponents $\pm 7$ but would give $58$ mantissa bits, so the fractional accuracy in this range would be $2^{-58}$. When you are outside this range, if you want to maintain the overall range










– Ross Millikan (Mar 30 at 23:39): you would have to put a $1$ before the usual exponent, so the accuracy would go down to $2^{-51}$. Is this a good trade? If most of your numbers are within a factor of $2^7$ of $1$, it is. Posits use a more complicated encoding, but the idea is the same.










– kevin998x (Mar 31 at 0:57): Wait, why "have to put a 1 before the usual exponent"?










– Ross Millikan (Mar 31 at 0:59): We need to indicate somehow that the next eleven bits should be taken as the exponent, not just the next four. It is standard in coding theory that if you make the codes for some messages shorter, others must become longer.
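The trade-off described in these comments can be sketched as a tiny prefix-free code. This is a hypothetical illustration of the comments' scheme, not the actual posit regime encoding: exponents near zero get a short code, and the flag bit that keeps the code unambiguous is exactly what makes large exponents one bit more expensive.

```python
def encode_exponent(e: int) -> str:
    """Hypothetical variable-length exponent code from the comments
    above (not the real posit scheme). Exponents in [-7, 7] get a
    '0' flag plus 4 bits (5 bits total, leaving 58 mantissa bits in
    a 64-bit word); all others get a '1' flag plus the usual 11 bits
    (12 bits total, leaving only 51 mantissa bits)."""
    if -7 <= e <= 7:
        return '0' + format(e & 0xF, '04b')     # short code: 5 bits
    return '1' + format(e & 0x7FF, '011b')      # long code: 12 bits

print(encode_exponent(3))    # '00011' -- small exponent costs 5 bits
print(encode_exponent(500))  # large exponent costs 12 bits
```

Because every short code starts with `0` and every long code with `1`, no code is a prefix of another, so the decoder always knows whether to read four more bits or eleven — the same role the regime's terminating bit plays in posits.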











answered Mar 30 at 4:54 by Ross Millikan











