What is the purpose of regime bits in posit encoding?
Why do we need regime bits in posit?

posit encoding: [figure omitted]

Tags: floating-point
asked Mar 30 at 3:54 by kevin998x; edited Mar 30 at 4:26 by YuiTo Cheng
1 Answer
The key is the sentence about tapered accuracy. Standard floating point allocates a fixed number of bits to the exponent and the mantissa, so every float in range is represented with the same fractional accuracy. Back when floats were $32$ bits, one standard was one bit for the sign, eight bits for the exponent, and $23$ bits for the mantissa, so the fractional accuracy was about $2^{-23} \approx 10^{-7}$. For IEEE $64$-bit floats there are $52$ bits in the mantissa, so the fractional accuracy is about $2^{-52} \approx 2 \cdot 10^{-16}$.

Tapered accuracy is essentially data compression on the exponent. Exponents near zero are more common than those near the ends of the range, so you use short bit strings to represent small exponents at the price of longer strings to represent large exponents. That leaves more bits for the mantissa when the exponent is small and fewer when the exponent is large. The regime bits are one implementation of this compression, which the author claims can be (almost) as fast as standard floating point. If small exponents can be represented with only six bits instead of eleven, you have five more bits of accuracy in the mantissa when the exponent is in that range.

answered Mar 30 at 4:54 by Ross Millikan
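To make the regime mechanism concrete, here is a minimal decoding sketch (my illustration; the answer does not give code). It follows the standard posit layout: after the sign bit comes the regime, a run of identical bits terminated by the opposite bit, whose length $k$ contributes a coarse scale factor $(2^{2^{es}})^k$; whatever bits remain go to the exponent and fraction, so short regimes leave more fraction bits. The $n=16$, $es=1$ parameters and the function name are illustrative.

```python
def decode_posit(bits: int, n: int = 16, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits.

    Sketch only: standard layout (sign | regime | exponent | fraction),
    with zero and NaR handled as special bit patterns.
    """
    mask = (1 << n) - 1
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):          # NaR ("not a real")
        return float("nan")
    sign = (bits >> (n - 1)) & 1
    if sign:                          # negative posits are two's-complement encoded
        bits = (-bits) & mask
    # Regime: a run of identical bits, terminated by the opposite bit.
    first = (bits >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((bits >> i) & 1) == first:
        run, i = run + 1, i - 1
    k = run - 1 if first == 1 else -run
    i -= 1                            # skip the terminating bit
    # Exponent: up to es bits, zero-padded if cut off by the end of the word.
    e_bits = max(0, min(es, i + 1))
    e = ((bits >> (i + 1 - e_bits)) & ((1 << e_bits) - 1)) if e_bits else 0
    e <<= es - e_bits
    i -= e_bits
    # Fraction: whatever bits remain; fewer remain when the regime run is long.
    f_bits = i + 1
    f = (bits & ((1 << f_bits) - 1)) / (1 << f_bits) if f_bits > 0 else 0.0
    value = (1.0 + f) * 2.0 ** (k * (1 << es) + e)
    return -value if sign else value

print(decode_posit(0x4000))  # 1.0  (regime "10": k=0, e=0)
print(decode_posit(0x5000))  # 2.0  (k=0, e=1)
print(decode_posit(0x6000))  # 4.0  (regime "110": k=1, useed = 2^(2^1) = 4)
```

Note how a longer regime run (larger $|k|$) eats directly into the fraction width: that is exactly the tapered-accuracy behavior the answer describes.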
– kevin998x (Mar 30 at 7:50): I still do not understand how the regime bits help to obtain different accuracy for small and large exponents. So does the number of regime bits depend on the exponent?

– Ross Millikan (Mar 30 at 23:38): I didn't follow through how the bits are used. In standard $64$-bit floating point there are $1$ sign bit, $11$ exponent bits, and $52$ mantissa bits. That gives a fractional accuracy of about $2^{-52}$ throughout the range and an exponent range of $\pm 1023$. We could define the exponent field to be $0$ followed by four bits for exponents close to zero. This would handle exponents $\pm 7$ but would give $58$ mantissa bits, so the fractional accuracy in this range would be $2^{-58}$. When you are outside this range, if you want to maintain the overall range,

– Ross Millikan (Mar 30 at 23:39): you would have to put a $1$ before the usual exponent, so the accuracy would go down to $2^{-51}$. Is this a good trade? If most of your numbers are within a factor of $2^7$ of $1$, it is. Posits use a more complicated encoding, but the idea is the same.

– kevin998x (Mar 31 at 0:57): Wait, why "have to put a $1$ before the usual exponent"?

– Ross Millikan (Mar 31 at 0:59): We need to indicate somehow that the next eleven bits should be taken as the exponent, not just the next four. It is standard in coding theory that if you make the codes for some messages shorter, others must become longer.

(6 more comments not shown)
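The prefix-code tradeoff Ross describes can be checked directly. The sketch below (my illustration, not from the thread) implements his two-regime scheme for a $64$-bit word: a $0$ prefix plus four exponent bits for exponents near zero, a $1$ prefix plus the usual eleven bits otherwise, and it prints how many mantissa bits each case leaves. The two's-complement packing of negative exponents is an assumption.

```python
def encode_exponent(exp: int) -> str:
    """Two-regime prefix code from the comments: short code for small exponents."""
    if -8 <= exp <= 7:
        return "0" + format(exp & 0xF, "04b")      # 5 bits: "0" + 4-bit exponent
    if -1024 <= exp <= 1023:
        return "1" + format(exp & 0x7FF, "011b")   # 12 bits: "1" + 11-bit exponent
    raise ValueError("exponent out of range")

def mantissa_bits(code: str, word: int = 64) -> int:
    """Bits left for the mantissa after 1 sign bit and the exponent code."""
    return word - 1 - len(code)

for exp in (0, 7, 100):
    code = encode_exponent(exp)
    print(f"exp={exp:4d}  code={code:<12}  mantissa bits={mantissa_bits(code)}")
# exp=   0  code=00000         mantissa bits=58
# exp=   7  code=00111         mantissa bits=58
# exp= 100  code=100001100100  mantissa bits=51
```

Exponents within $\pm 7$ gain six mantissa bits over the standard $52$, while all other exponents give up one: exactly the trade weighed in the comments. The leading bit is what tells the decoder whether to read four more exponent bits or eleven, which answers the "why put a $1$ before the usual exponent" question.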