View Full Version : 64 bit float
Les Cargill[_4_]
March 5th 17, 04:28 PM
Outside of pathological floating point cases ( most likely for filters
with some sort of feedback ), is there ever a good case for using 64 bit
math internal to DAW elements ( read: plugins ) rather than just 32
bit?
I'm not really seeing one. And for Intel processors, 32 bit has
possible performance advantages if you go with SSE math. SSE stuff
(very) roughly uses a 128 bit "register", in which fits four floats,
but only two doubles. So you get a hypothetical-but-not-really
2x speedup just from that for things like long complex multiplies.
Long vectors, being half the size, will also fit in cache better.
I see a breakdown along the lines of "if the plugin uses an FFT, keep
it in floats. If it's structured more like an IIR/FIR filter which
uses less internal data storage, 64 bit may or may not be any better,
depending on how small sample and internal values can be."
--
Les Cargill
Mike Rivers
March 5th 17, 06:16 PM
On Sunday, 5 March 2017 11:25:09 UTC-5, Les Cargill wrote:
> Outside of pathological floating point cases ( most likely for filters
> with some sort of feedback ), is there ever a good case for using 64 bit
> math internal to DAW elements ( read: plugins ) rather than just 32
> bit?
Frankly, I think 16-bit is enough, particularly given the dynamic range of most plug-in-centric music produced these days, but that's so 1990. As operating systems move to 64-bit, so will plug-ins, so pretty soon, whether you need it or not, that's what you're going to get.
Or are you being specific here about 64-bit floating point (really, really big numbers) or is 64-bit fixed point OK in your book? I'm way behind on this since I almost never use plug-ins and have only one 64-bit Windows system set up because that's what some software that I'm trying to review requires. So to me everything already works fine.
Scott Dorsey
March 5th 17, 06:37 PM
Les Cargill > wrote:
>
>Outside of pathological floating point cases ( most likely for filters
>with some sort of feedback ), is there ever a good case for using 64 bit
>math internal to DAW elements ( read: plugins ) rather than just 32
>bit?
No, but those pathological cases are encountered all the time in routine
dsp operations. If you're doing convolutions on 32 bit float files,
you may benefit a lot from having double precision intermediate variables.
>I'm not really seeing one. And for Intel processors, 32 bit has
>possible performance advantages if you go with SSE math. SSE stuff
>(very) roughly uses a 128 bit "register", in which fits four floats,
>but only two doubles. So you get a hypothetical-but-not-really
>2x speedup just from that for things like long complex multiplies.
>
>Long vectors, being half the size, will also fit in cache better.
Right, and how the compiler takes advantage of that I don't know. If you're
just writing code in your high level language of choice, you don't get much
control over how it gets implemented on the machine. You just have to hope
it's done efficiently.
>I see a breakdown along the lines of "if the plugin uses an FFT, keep
>it in floats. If it's structured more like an IIR/FIR filter which
>uses less internal data storage, 64 bit may or may not be any better,
>depending on how small sample and internal values can be."
It comes down to actually sitting down and doing the numeric analysis on
the function. If you're keeping audio data as 32 bit float values (which
is the representation most DAW software uses as intermediates today), what
do you need to do so that you retain that same precision going in and out
of a function? If your function is just scaling and summing, changing gain
and mixing, it's not likely to be any benefit. If your function is doing
something more complex it might be, but you don't know until you sit down
and do the math.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Neil[_9_]
March 5th 17, 09:41 PM
On 3/5/2017 11:28 AM, Les Cargill wrote:
>
>
> Outside of pathological floating point cases ( most likely for filters
> with some sort of feedback ), is there ever a good case for using 64 bit
> math internal to DAW elements ( read: plugins ) rather than just 32
> bit?
>
> I'm not really seeing one. And for Intel processors, 32 bit has
> possible performance advantages if you go with SSE math. SSE stuff
> (very) roughly uses a 128 bit "register", in which fits four floats,
> but only two doubles. So you get a hypothetical-but-not-really
> 2x speedup just from that for things like long complex multiplies.
>
> Long vectors, being half the size, will also fit in cache better.
>
> I see a breakdown along the lines of "if the plugin uses an FFT, keep
> it in floats. If it's structured more like an IIR/FIR filter which
> uses less internal data storage, 64 bit may or may not be any better,
> depending on how small sample and internal values can be."
>
Greater bit depth for processing is not new. DAWs have been doing this
for decades, and yes, it makes an audible difference in the end results.
For example, CoolEdit Pro used 56 bit processing, and you can readily
hear is that there are fewer artifacts in things like trailing reverbs
and fade-outs than there are when using 32 bit processing. There was
little impact on the CPUs of the day, so I'd expect even less on modern
hardware. Still, I wouldn't care if there was some performance impact
because the end result is far superior.
--
best regards,
Neil
Neil[_9_]
March 6th 17, 02:50 AM
On 3/5/2017 4:41 PM, Neil wrote:
> On 3/5/2017 11:28 AM, Les Cargill wrote:
>>
>>
>> Outside of pathological floating point cases ( most likely for filters
>> with some sort of feedback ), is there ever a good case for using 64 bit
>> math internal to DAW elements ( read: plugins ) rather than just 32
>> bit?
>>
>> I'm not really seeing one. And for Intel processors, 32 bit has
>> possible performance advantages if you go with SSE math. SSE stuff
>> (very) roughly uses a 128 bit "register", in which fits four floats,
>> but only two doubles. So you get a hypothetical-but-not-really
>> 2x speedup just from that for things like long complex multiplies.
>>
>> Long vectors, being half the size, will also fit in cache better.
>>
>> I see a breakdown along the lines of "if the plugin uses an FFT, keep
>> it in floats. If it's structured more like an IIR/FIR filter which
>> uses less internal data storage, 64 bit may or may not be any better,
>> depending on how small sample and internal values can be."
>>
> Greater bit depth for processing is not new. DAWs have been doing this
> for decades, and yes, it makes an audible difference in the end results.
> For example, CoolEdit Pro used 56 bit processing, and you can readily
> hear is that there are fewer artifacts in things like trailing reverbs
> and fade-outs than there are when using 32 bit processing. There was
> little impact on the CPUs of the day, so I'd expect even less on modern
> hardware. Still, I wouldn't care if there was some performance impact
> because the end result is far superior.
>
Two corrections...
1) I meant to write "...and you can readily hear that..."
2) I was referring to floating point processing, which changes the
picture entirely.
So... my bad... never mind!
--
best regards,
Neil
Les Cargill[_4_]
March 7th 17, 12:32 AM
Mike Rivers wrote:
> On Sunday, 5 March 2017 11:25:09 UTC-5, Les Cargill wrote:
>> Outside of pathological floating point cases ( most likely for filters
>> with some sort of feedback ), is there ever a good case for using 64 bit
>> math internal to DAW elements ( read: plugins ) rather than just 32
>> bit?
>
> Frankly, I think 16-bit is enough, particularly given the dynamic
> range of most plug-in-centric music produced these days, but that's
> so 1990. As operating systems move to 64-bit, so will plug-ins, so
> pretty soon, whether you need it or not, that's what you're going to
> get.
>
They're there; I still have a small legion of 32 bit plugs and those
work fine, too. So far, plugin vendors offer 64 and 32 bit versions.
The Waves plugins I have are either-or.
But a 32-bit-interface plugin can be 64 bit internally, and vice versa.
> Or are you being specific here about 64-bit floating point (really, really big numbers) or is 64-bit fixed point OK in your book?
Outside of having to worry a lot about scaling, sure - there are
things that fixed point works better for.
> I'm way behind on this since I almost never use plug-ins
It's really an interesting way to work, IMO. I probably
won't go back.
> and have
> only one 64-bit Windows system set up because that's what some
> software that I'm trying to review requires. So to me everything
> already works fine.
>
>
--
Les Cargill
Les Cargill[_4_]
March 7th 17, 12:52 AM
Scott Dorsey wrote:
> Les Cargill > wrote:
>>
>> Outside of pathological floating point cases ( most likely for filters
>> with some sort of feedback ), is there ever a good case for using 64 bit
>> math internal to DAW elements ( read: plugins ) rather than just 32
>> bit?
>
> No, but those pathological cases are encountered all the time in routine
> dsp operations. If you're doing convolutions on 32 bit float files,
> you may benefit a lot from having double precision intermediate
> variables.
Fortunately, I can switch between 32 and 64 bit for now and trade off
as I build up the instrumentation.
Initial experiments - and this does include convolution - don't
indicate that there are any differences above the LSB for 24 bit
output. Everything is pretty much in or very close to the range
[-1...1] and that's vanishingly close to fixed point in practice.
And I have not even gone to the trouble of any denormal checking
yet.
>
>> I'm not really seeing one. And for Intel processors, 32 bit has
>> possible performance advantages if you go with SSE math. SSE stuff
>> (very) roughly uses a 128 bit "register", in which fits four floats,
>> but only two doubles. So you get a hypothetical-but-not-really
>> 2x speedup just from that for things like long complex multiplies.
>>
>> Long vectors, being half the size, will also fit in cache better.
>
> Right, and how the compiler takes advantage of that I don't know.
> If you're just writing code in your high level language of choice,
> you don't get much control over how it gets implemented on the
> machine. You just have to hope it's done efficiently.
>
Truthfully, the choices for this sort of thing are still assembly/C/C++
or FORTRAN. For audio, it's really just assembly/C/C++ and that means
you have pretty much whatever control you want.
>> I see a breakdown along the lines of "if the plugin uses an FFT, keep
>> it in floats. If it's structured more like an IIR/FIR filter which
>> uses less internal data storage, 64 bit may or may not be any better,
>> depending on how small sample and internal values can be."
>
> It comes down to actually sitting down and doing the numeric
> analysis on the function. If you're keeping audio data as 32 bit
> float values (which is the representation most DAW software uses as
> intermediates today), what do you need to do so that you retain that
> same precision going in and out of a function? If your function is
> just scaling and summing, changing gain and mixing, it's not likely
> to be any benefit. If your function is doing something more complex
> it might be, but you don't know until you sit down and do the math.
Yeah - that's all TBD for now. I do have a couple quick and dirty test
vectors but it'll all have to grow.
> --scott
>
--
Les Cargill
Les Cargill[_4_]
March 7th 17, 12:59 AM
Neil wrote:
> On 3/5/2017 4:41 PM, Neil wrote:
>> On 3/5/2017 11:28 AM, Les Cargill wrote:
>>>
>>>
>>> Outside of pathological floating point cases ( most likely for filters
>>> with some sort of feedback ), is there ever a good case for using 64 bit
>>> math internal to DAW elements ( read: plugins ) rather than just 32
>>> bit?
>>>
>>> I'm not really seeing one. And for Intel processors, 32 bit has
>>> possible performance advantages if you go with SSE math. SSE stuff
>>> (very) roughly uses a 128 bit "register", in which fits four floats,
>>> but only two doubles. So you get a hypothetical-but-not-really
>>> 2x speedup just from that for things like long complex multiplies.
>>>
>>> Long vectors, being half the size, will also fit in cache better.
>>>
>>> I see a breakdown along the lines of "if the plugin uses an FFT, keep
>>> it in floats. If it's structured more like an IIR/FIR filter which
>>> uses less internal data storage, 64 bit may or may not be any better,
>>> depending on how small sample and internal values can be."
>>>
>> Greater bit depth for processing is not new. DAWs have been doing this
>> for decades, and yes, it makes an audible difference in the end results.
>> For example, CoolEdit Pro used 56 bit processing, and you can readily
>> hear is that there are fewer artifacts in things like trailing reverbs
>> and fade-outs than there are when using 32 bit processing. There was
>> little impact on the CPUs of the day, so I'd expect even less on modern
>> hardware. Still, I wouldn't care if there was some performance impact
>> because the end result is far superior.
>>
> Two corrections...
> 1) I meant to write "...and you can readily hear that..."
> 2) I was referring to floating point processing, which changes the
> picture entirely.
>
> So... my bad... never mind!
>
I hate when that happens! :)
No, I'm pretty skeptical that even reverb tails will be much different
in reality, at least if proper dithering and whatnot is done. I know
what that sounds like - I had a Nanoverb. :)
The CoolEdit '96/2000 reverbs at least were just awful. :) Gor bless
'em, those reverbs were terrible. I never played with Pro, so maybe it
got better.
FWIW, I've set up some tests using the Waves IR plugin and some of the
big Samplicity impulses, and I can't get much if any difference. It's
the sort of thing that I am never quite sure I've done
correctly though.
--
Les Cargill
Neil[_9_]
March 7th 17, 01:58 PM
On 3/6/2017 7:59 PM, Les Cargill wrote:
> Neil wrote:
>> On 3/5/2017 4:41 PM, Neil wrote:
>>> On 3/5/2017 11:28 AM, Les Cargill wrote:
>>>>
>>>>
>>>> Outside of pathological floating point cases ( most likely for filters
>>>> with some sort of feedback ), is there ever a good case for using 64
>>>> bit
>>>> math internal to DAW elements ( read: plugins ) rather than just 32
>>>> bit?
>>>>
>>>> I'm not really seeing one. And for Intel processors, 32 bit has
>>>> possible performance advantages if you go with SSE math. SSE stuff
>>>> (very) roughly uses a 128 bit "register", in which fits four floats,
>>>> but only two doubles. So you get a hypothetical-but-not-really
>>>> 2x speedup just from that for things like long complex multiplies.
>>>>
>>>> Long vectors, being half the size, will also fit in cache better.
>>>>
>>>> I see a breakdown along the lines of "if the plugin uses an FFT, keep
>>>> it in floats. If it's structured more like an IIR/FIR filter which
>>>> uses less internal data storage, 64 bit may or may not be any better,
>>>> depending on how small sample and internal values can be."
>>>>
>>> Greater bit depth for processing is not new. DAWs have been doing this
>>> for decades, and yes, it makes an audible difference in the end results.
>>> For example, CoolEdit Pro used 56 bit processing, and you can readily
>>> hear is that there are fewer artifacts in things like trailing reverbs
>>> and fade-outs than there are when using 32 bit processing. There was
>>> little impact on the CPUs of the day, so I'd expect even less on modern
>>> hardware. Still, I wouldn't care if there was some performance impact
>>> because the end result is far superior.
>>>
>> Two corrections...
>> 1) I meant to write "...and you can readily hear that..."
>> 2) I was referring to floating point processing, which changes the
>> picture entirely.
>>
>> So... my bad... never mind!
>>
>
>
> I hate when that happens! :)
>
> No, I'm pretty skeptical that even reverb tails will be much different
> in reality, at least if proper dithering and whatnot is done. I know
> what that sounds like - I had a Nanoverb. :)
>
> The CoolEdit '96/2000 reverbs at least were just awful. :) Gor bless
> 'em, those reverbs were terrible. I never played with Pro, so maybe it
> got better.
>
> FWIW, I've set up some tests using the Waves IR plugin and some of the
> big Samplicity impulses, and I can't get much if any difference. It's
> the sort of thing that I am never quite sure I've done
> correctly though.
>
In Pro, there were several reverb models, but I imagine that if one was
only using default settings of the simplest model the differences in
reverb tails would be slight. However, there were clearly audible
differences when tweaked, for instance when very long or
multi-directional reverb tails were used. Proper dithering is always a
prerequisite!
--
best regards,
Neil