Discussion:
[music-dsp] Antialias question
Kevin Chi
2018-06-01 02:03:00 UTC
Dear List,

Long time lurker here, learned a lot from the random posts, so thanks
for those.

Maybe somebody can help me out with the best practice for realtime
applications to minimize aliasing when scanning a waveform at varying
speed, or when constantly modulating the delay time on a delay (it's as
if the resampling rate changes at every step)?

I guess the smaller the change, the smaller the aliasing effect, but
what if the change is fast enough to make it audible?

If you can suggest a paper or site or keywords that I should look for,
I'd appreciate the help!

Kevin
Sound of L.A. Music and Audio
2018-06-01 08:48:04 UTC
Hello Kevin

I am not convinced that your application fully compares to a
continuously changing sampling rate, but anyway:

The maths stays the same, so you will have to respect Nyquist and take
the artifacts of your AA filter, as well as those of your signal
processing, into account. This means you should use a sampling rate
significantly higher than the highest frequency to be represented
correctly, which is the edge frequency of the stop band of your AA
filter.

For a waveform generator in an industrial device with similar demands,
we use something like DSD internally and perform a continuous
downsampling / filtering. Because the representation is fully digital,
no further aliasing occurs. There is only the alias from the primary
sampling process, kept low by the high input rate.

What you can / must do is internal upsampling, since I expect you
operate with normal 192 kHz / 24-bit input (?)

Regarding your concerns: it makes a difference whether you play back the
stream at a multiple of the sampling frequency (especially at the same
frequency) and perform the modulation mathematically, or whether you
apply a slight variation to the output frequency, such as with an analog
PLL whose modulation takes its values from a FIFO. In the first case
there is a convolution with the filter behaviour of your processing; in
the second case there is also a spectral spreading, according to the
individual ratio to the new sampling frequency.

From the view of a musical application, case 2 is preferred, because any
harmonics included in the stream, such as the wave table, can be
preprocessed, are easier to control, and remain "musical" harmonics. In
one of my synths I operate this way: all primary frequencies come from a
PLL-buffered two-stage DDS accessing the wave table at 100%, so there
are no gaps and jumps in the wave table as with classical DDS.

j
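
For reference, a minimal sketch of the classical phase-accumulator DDS
that "j" contrasts his scheme against; the table size, table contents
and fixed-point width here are illustrative assumptions, not details of
his synth:

#include <math.h>
#include <stdint.h>

#define TABLE_BITS 10
#define TABLE_LEN  (1 << TABLE_BITS)

static float wavetable[TABLE_LEN];      /* one cycle; fill before use */
static uint32_t phase = 0;              /* 32-bit phase accumulator */

void dds_init(void)                     /* example content: a sine cycle */
{
    for (int i = 0; i < TABLE_LEN; i++)
        wavetable[i] = (float)sin(2.0 * M_PI * i / TABLE_LEN);
}

float dds_next(double f0, double fs)
{
    /* increment = the fraction f0/fs of a cycle, in 32-bit fixed point */
    uint32_t inc = (uint32_t)(f0 / fs * 4294967296.0);
    /* only the top bits index the table; the truncated low bits are the
       "gaps and jumps" of classical DDS mentioned above */
    float out = wavetable[phase >> (32 - TABLE_BITS)];
    phase += inc;                       /* wraps naturally modulo 2^32 */
    return out;
}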
robert bristow-johnson
2018-06-01 19:36:28 UTC
Post by Sound of L.A. Music and Audio
What you can / must do is an internal upsampling, since I expect to
operate with normal 192kHz/24Bit input (?)
...
Post by Sound of L.A. Music and Audio
j
 
the anonymous "j" is right.  that's better (simpler) than what i
suggested.  as long as you know that you'll never be pitching up more
than an octave, then whatever your intended output pointer stepsize is,
whether it's more than one (but less than two) or less than one, always
cut it in half and generate two output samples per sample period and put
those two output samples in another stream that doesn't need to be all
that long.  that other stream is at twice your original sample rate and
you will be tossing every odd sample (keeping just the even samples),
but don't do that until *after* you low-pass filter that upsampled
stream to half of that stream's Nyquist frequency (which is a nice fixed
filter, could be an IIR, maybe 4th-order Butterworth).  after LPFing,
throw away every odd sample and output the even samples at your original
sample rate.
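
To make that concrete, a minimal sketch under some stated assumptions:
render() stands in for whatever produces one sample of the repitched
stream at the halved step size, and the fixed LPF is a 4th-order
Butterworth built from two cookbook biquads, tuned slightly below the
original Nyquist for margin. Names are illustrative only:

#include <math.h>

typedef struct { double b0, b1, b2, a1, a2, z1, z2; } Biquad;

/* cookbook lowpass biquad; w0 = 2*pi*fc/fs, with fs the 2x rate */
static void lp_init(Biquad *s, double w0, double Q)
{
    double alpha = sin(w0) / (2.0 * Q), a0 = 1.0 + alpha;
    s->b1 = (1.0 - cos(w0)) / a0;
    s->b0 = s->b2 = 0.5 * s->b1;
    s->a1 = -2.0 * cos(w0) / a0;
    s->a2 = (1.0 - alpha) / a0;
    s->z1 = s->z2 = 0.0;
}

static double bq_run(Biquad *s, double x)   /* transposed direct form II */
{
    double y = s->b0 * x + s->z1;
    s->z1 = s->b1 * x - s->a1 * y + s->z2;
    s->z2 = s->b2 * x - s->a2 * y;
    return y;
}

/* 4th-order Butterworth = two biquads with the standard Q pair; cutoff
   at ~0.45 of the original rate, i.e. 0.225 of the 2x rate */
void aa_init(Biquad lpf[2])
{
    double w0 = 2.0 * M_PI * 0.225;
    lp_init(&lpf[0], w0, 0.5411961);
    lp_init(&lpf[1], w0, 1.3065630);
}

/* one output sample at the original rate: render two half-step samples
   into the 2x stream, filter both, keep the even one, toss the odd one */
double pitch_step(Biquad lpf[2], double (*render)(void))
{
    double even = bq_run(&lpf[1], bq_run(&lpf[0], render()));
    (void)bq_run(&lpf[1], bq_run(&lpf[0], render()));
    return even;
}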
--
r b-j                         ***@audioimagination.com
"Imagination is more important than knowledge."
Kevin Chi
2018-06-01 19:04:16 UTC
Thanks for your ideas, I'll look into those!

It's actually just a digital delay effect or a sample playback system,
where I have a playhead that has to read samples from a buffer, but the
playhead position can be modulated, so the output will be pitching
up/down depending on the actual direction. It's realtime resampling of
the original material: if the playhead is moving faster than the
original sample rate, the higher frequencies will fold back at Nyquist.
So before sampling I should apply an antialias filter to prevent it, but
as the rate of the playback is always modulated, there is no exact
frequency where I should apply the lowpass filter; it's changing
constantly.

This is what I meant by comparing to resampling.

--
Kevin
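
To make the setup concrete, a minimal sketch of such a modulated
playhead with linear interpolation (names are illustrative, not from
Kevin's code). Nothing in it is bandlimited, so whenever |rate| exceeds
1.0, content above fs/(2|rate|) folds back at Nyquist -- exactly the
aliasing being discussed:

#define BUF_LEN 65536                   /* power of two for cheap wrapping */

typedef struct {
    float buf[BUF_LEN];                 /* recorded material / delay line */
    double pos;                         /* fractional playhead position */
} Playhead;

/* one output sample; rate is the modulated read speed (1.0 = original
   pitch, > 1.0 pitches up, negative runs backwards) */
float playhead_read(Playhead *p, double rate)
{
    int i = (int)p->pos;
    double frac = p->pos - i;
    float a = p->buf[i & (BUF_LEN - 1)];
    float b = p->buf[(i + 1) & (BUF_LEN - 1)];
    p->pos += rate;
    if (p->pos >= BUF_LEN) p->pos -= BUF_LEN;       /* wrap around */
    else if (p->pos < 0.0) p->pos += BUF_LEN;
    return (float)(a + frac * (b - a));             /* linear interpolation */
}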
Discussion:
[music-dsp] Antialiased OSC
Kevin Chi
2018-08-03 21:23:50 UTC
Hi,

Is there such a thing as today's standard for softSynth antialiased
oscillators?

I was looking up PolyBLEP oscillators, and was wondering how they would
relate to an oscillator based on 1-2 wavetables per octave, or maybe to
some other algos.

thanks for any ideas and recommendations in advance,
Kevin
robert bristow-johnson
2018-08-03 21:56:59 UTC
 
Post by Kevin Chi
Is there such a thing as today's standard for softSynth antialiased
oscillators?
i think there should be, but if i were to say so, i would sound like a stuck record (and there will be people who disagree).

 
stuck record:  "wavetable ... wavetable ... wavetable ..."
Post by Kevin Chi
I was looking up PolyBLEP oscillators, and was wondering how they would
relate to an oscillator based on 1-2 wavetables per octave, or maybe to
some other algos.
thanks for any ideas and recommendations in advance,
if you want, i can send you a C file to show one way it can be done.
Nigel Redmon also has some code online somewhere.
if your sample rate is 48 kHz and you're willing to put in a brickwall
LPF at 19 kHz, you can get away with 2 wavetables per octave, no
aliasing, and represent each surviving harmonic (that is below 19 kHz)
perfectly.  if your sample rate is 96 kHz, then there is **really** no
problem getting the harmonics down accurately (up to 30 kHz) and no
aliases.
even though the wavetables can be *archived* with as few as 128 or 256
samples per wavetable (this can accurately represent the magnitude *and*
phase of each harmonic up to the 63rd or 127th harmonic), i very much
recommend at Program Change time, when the wavetables that will be used
are loaded from the archive to the memory space where you'll be
rockin'-n-rollin', that these wavetables be expanded (using bandlimited
interpolation) to 2048 or 4096 samples and then, in the oscillator code,
you do linear interpolation in real-time synthesis.  that wavetable
expansion at Program Change time will take a few milliseconds (big fat
hairy deal).
lemme know, i'll send you that C file no strings attached.  (it's really
quite simple.)  and anyone listening in, i can do the same if you email
me.  now this doesn't do the hard part of **defining** the wavetables
(the C file is just the oscillator with morphing).  but we can discuss
how to do that here later.
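
For illustration only (this is not the contents of rbj's C file, just a
sketch of the approach he describes): linear interpolation within a
2048-sample expanded table, plus a linear crossfade ("morphing") between
the two tables bracketing the current pitch:

#define TBL 2048                        /* expanded table size */

float wt_osc(double phase,              /* in [0,1) */
             const float *lo,           /* table below the current pitch */
             const float *hi,           /* table above the current pitch */
             float mix)                 /* 0 = all lo, 1 = all hi */
{
    double pos = phase * TBL;
    int i = (int)pos;
    float frac = (float)(pos - i);
    int j = (i + 1) & (TBL - 1);        /* wrap at the end of the cycle */
    float a = lo[i] + frac * (lo[j] - lo[i]);   /* interp within lo */
    float b = hi[i] + frac * (hi[j] - hi[i]);   /* interp within hi */
    return a + mix * (b - a);           /* crossfade between the tables */
}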

--
r b-j                         ***@audioimagination.com
"Imagination is more important than knowledge."
ćwiek
2018-08-03 22:00:36 UTC
Can you provide the code via something like Pastebin / Dropbox / gdrive?
I'm also very interested in seeing this implementation.
Thanks,
napent

robert bristow-johnson
2018-08-03 22:11:05 UTC
 
lemme know if this doesn't work: https://www.dropbox.com/s/cybcs7tgzgplnwc/wavetable_oscillator.c
remember that this is *synthesis* code.  it does not extract wavetables
from a sampled sound (i wrote a paper a quarter century ago on how to do
that, but it's not code).  nor does it define bandlimited square, saw,
PWM, hard-sync, whatever.  that's a sorta difficult problem, but one
that someone has for sure solved, and we can discuss here how to do that
(perhaps in MATLAB).  extracting wavetables from sampled notes requires
pitch detection/tracking and interpolation.
L8r,
r b-j


--
r b-j                         ***@audioimagination.com
"Imagination is more important than knowledge."
ćwiek
2018-08-03 22:16:38 UTC
Works like a charm!
I've implemented a wavetable osc in the Xaoc Batumi module, that's why
I'm interested.
Looking forward to doing some comparisons. Thanks!

Nigel Redmon
2018-08-04 17:53:28 UTC
Post by robert bristow-johnson
even though the wavetables can be *archived* with as few as 128 or 256 samples per wavetable (this can accurately represent the magnitude *and* phase of each harmonic up to the 63rd or 127th harmonic), i very much recommend at Program Change time, when the wavetables that will be used are loaded from the archive to the memory space where you'll be rockin'-n-rollin', that these wavetables be expanded (using bandlimited interpolation) to 2048 or 4096 samples and then, in the oscillator code, you do linear interpolation in real-time synthesis. that wavetable expansion at Program Change time will take a few milliseconds (big fat hairy deal).
I know what you’re saying when you say “can be” (for many possible waves, or all waves if you’re willing to accept limitations), but to save possible grief for implementors: First, I’m sure many digital synths use 256 sample tables (and probably very common for synths that let you “draw” or manipulate wave tables), so it’s certainly not wrong. Just realize that 127 harmonics isn’t nearly enough if you expect to play a low sawtooth, filter open, with all the splendor of an analog synth. At 40 Hz, harmonics will top out at 5 kHz. As you play higher or lower notes, you’ll hear the harmonics walk up or down with the notes, as if a filter were tracking. With a full-bandwidth saw, though, the brightness is constant. That takes more like 500 harmonics at 40 Hz, 1000 at 20 Hz. So, as Robert says, 2048 or 4096 are good choices (for both noise and harmonics).

I just didn’t want someone writing a bunch of code based on 256-sample tables, only to be disappointed that it doesn’t sound as analog as expected. We like our buzzy sawtooths ;-)
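
A quick back-of-the-envelope check of those figures, assuming a 20 kHz
bandwidth target and a table of N samples holding harmonics up to
N/2 - 1:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double band_limit = 20000.0;           /* audible-band target */
    const double f0s[] = { 20.0, 40.0, 86.0 };   /* example fundamentals */
    for (int i = 0; i < 3; i++) {
        int harmonics = (int)floor(band_limit / f0s[i]);
        int n = 2;                 /* smallest power-of-two table that */
        while (n / 2 - 1 < harmonics)  /* holds harmonics up to n/2 - 1 */
            n *= 2;
        printf("f0 = %5.1f Hz: %4d harmonics -> table >= %d samples\n",
               f0s[i], harmonics, n);
    }
    return 0;
}

This reproduces the ~500 and ~1000 harmonic counts above, and the 1024-
and 2048-sample minimums behind the 2048-or-4096 recommendation.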
Phil Burk
2018-08-04 20:39:55 UTC
Post by Nigel Redmon
With a full-bandwidth saw, though, the brightness is constant. That takes
more like 500 harmonics at 40 Hz, 1000 at 20 Hz. So, as Robert says, 2048
or 4096 are good choices (for both noise and harmonics).
As I change frequencies above 86 Hz, I interpolate between wavetables
with 1024 samples. For lower frequencies I interpolate between a bright
wavetable and a pure sawtooth phasor that is not band limited. That way I
can use the same oscillator as an LFO.

https://github.com/philburk/jsyn/blob/master/src/com/jsyn/engine/MultiTable.java#L167
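
A minimal sketch of that low-end strategy (names are mine, not the JSyn
API; see the MultiTable source above for the real thing):

/* linear-interpolated read of a single-cycle table, phase in [0,1) */
static float table_read(const float *t, int n, float phase)
{
    float pos = phase * n;
    int i = (int)pos;
    float frac = pos - (float)i;
    i %= n;                             /* guard the i == n edge case */
    return t[i] + frac * (t[(i + 1) % n] - t[i]);
}

/* below the lowest table's frequency, fade toward a naive (not
   bandlimited) sawtooth so the same oscillator works down to DC / LFO */
float osc_low_end(float phase, float f0, const float *bright, int n)
{
    float naive = 2.0f * phase - 1.0f;  /* non-bandlimited ramp */
    float bl = table_read(bright, n, phase);
    float mix = f0 / 86.0f;             /* 1.0 at the 86 Hz table */
    if (mix > 1.0f) mix = 1.0f;
    if (mix < 0.0f) mix = 0.0f;         /* sweeping through zero */
    return mix * bl + (1.0f - mix) * naive;
}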

Phil Burk
Nigel Redmon
2018-08-05 17:30:27 UTC
Yes, that’s a good way, not only for LFO but for that rare time you want to sweep down into the nether regions to show off. I think a lot of people don’t consider that the error of a “naive” oscillator becomes increasingly smaller for lower frequencies. Of course, it’s waveform specific, so that’s why I suggested bigger tables. (Side comment: If you get big enough tables, you could choose to skip linear interpolation altogether—at constant table size, the higher frequency octave/whatever tables, where it matters more, will be progressively more oversampled anyway.)

Funny thing I found in writing the wavetable articles. One soft synth developer dismissed the whole idea of wavetables (in favor of minBLEPs, etc.). When I pointed out that wavetables allow any waveform, he said the other methods did too. I questioned that assertion by giving an example of a wavetable with a few arbitrary harmonics. He countered that it wasn’t a waveform. I guess some people only consider the basic synth waves as “waveforms”. :-D

Hard sync is another topic...
robert bristow-johnson
2018-08-05 23:27:31 UTC
Post by Nigel Redmon
Yes, that’s a good way, not only for LFO but for that rare time you want
to sweep down into the nether regions to show off.
i, personally, would rather see a consistent method used throughout the
MIDI keyboard range; high notes or low.  it's hard to gracefully
transition from one method to a totally different method while the note
sweeps.  like what if portamento is turned on?  the only way to
clicklessly jump from wavetable to a "naive" sawtooth would be to
crossfade.  but crossfading to a wavetable richer in harmonics is
already built in.  and what if the "classic" waveform wasn't a saw but
something else?  more general?
Post by Nigel Redmon
I think a lot of people don’t consider that the error of a “naive”
oscillator becomes increasingly smaller for lower frequencies. Of
course, it’s waveform specific, so that’s why I suggested bigger tables.
(Side comment: If you get big enough tables, you could choose to skip
linear interpolation altogether—at constant table size, the higher
frequency octave/whatever tables, where it matters more, will be
progressively more oversampled anyway.)
well, Duane Wise and i visited this drop-sample vs. linear vs. various
different cubic splines (Lagrange, Hermite...) a couple decades ago.
for really high quality audio (not the same as an electronic musical
instrument), i had been able to show that, for 120 dB S/N, 512x
oversampling is sufficient for linear interpolation but 512K is what is
needed for drop-sample.  even relaxing those standards, choosing to
forgo linear interpolation for drop-sample "interpolation" might require
bigger wavetables than you might wanna pay for.  for the general
wavetable synth (or NCO or DDS or whatever you wanna call this LUT
thing, including just sample playback) i would never recommend
interpolation cruder than linear.  Nigel, i remember your code didn't
require big tables and you could have each wavetable a different size (i
think you had the accumulated phase be a float between 0 and 1 and that
was scaled to the wavetable size, right?) but then that might mean you
have to do better interpolation than linear, if you want it clean.
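
For anyone who wants to see the gap for themselves, a tiny illustrative
experiment (not the analysis from the Wise / bristow-johnson paper):
peak error of drop-sample vs. linear interpolation when reading a
512-point sine table at random phases:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N 512

int main(void)
{
    static double tab[N];
    for (int i = 0; i < N; i++)
        tab[i] = sin(2.0 * M_PI * i / N);

    double e_drop = 0.0, e_lin = 0.0;
    for (long k = 0; k < 1000000L; k++) {
        double ph = rand() / (RAND_MAX + 1.0);    /* phase in [0,1) */
        double ideal = sin(2.0 * M_PI * ph);
        double pos = ph * N;
        int i = (int)pos;
        double frac = pos - i;
        double drop = tab[i];                     /* truncate the phase */
        double lin  = tab[i] + frac * (tab[(i + 1) % N] - tab[i]);
        if (fabs(drop - ideal) > e_drop) e_drop = fabs(drop - ideal);
        if (fabs(lin  - ideal) > e_lin)  e_lin  = fabs(lin  - ideal);
    }
    /* drop-sample error falls only linearly with table size; linear
       interpolation's falls quadratically -- the reason the 512x and
       512K figures above are so far apart */
    printf("drop-sample peak error: %g (%6.1f dB)\n",
           e_drop, 20.0 * log10(e_drop));
    printf("linear      peak error: %g (%6.1f dB)\n",
           e_lin, 20.0 * log10(e_lin));
    return 0;
}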
 
Post by Nigel Redmon
Funny thing I found in writing the wavetable articles. One soft synth
developer dismissed the whole idea of wavetables (in favor of minBLEPs,
etc.). When I pointed out that wavetables allow any waveform, he said
the other methods did too. I questioned that assertion by giving an
example of a wavetable with a few arbitrary harmonics. He countered that
it wasn’t a waveform. I guess some people only consider the basic synth
waves as “waveforms”. :-D
i've had arguments like this with other Kurzweil people while i worked
there a decade ago (still such a waste when you consider how good and
how much work they put into their sample-playback, looping, and
interpolation hardware; only a small modification was needed to make it
into a decent wavetable synth with morphing).
for me, a "waveform" is any quasi-periodic function.  A note from any
decently harmonic instrument; piano, fiddle, a plucked guitar, oboe,
trumpet, flute, all of those can be done with wavetable synthesis (and
most, maybe all, of them can be limited to 127 harmonics allowing
archived wavetables to be as small as 256).
these are the two necessary ingredients to wavetable synthesis: a
quasi-periodic note (that means it can be represented as a Fourier
series with slowly-changing Fourier coefficients) and bandlimitedness.
if it's quasi-periodic and bandlimited it can be done with wavetable
synthesis.  to me, for someone to argue against that means they are
arguing against Fourier and Shannon.
there is a straight-forward way of pitch tracking the sampled note from
attack to release, and from that slowly-changing period information,
there is a straight-forward way to resample it to 256 points per cycle
and convert each adjacent cycle into a wavetable.  that's a lotta
redundant data and most of the wavetables (nearly all of them) can be
culled, with the assumption that the wavetables surviving the culling
process will be linearly cross-faded from one to the next.
and if several notes (say up and down the keyboard) are sampled, there
is a way to align the wavetables (before culling) between the different
notes to be phase aligned.  then, say you have a split every half
octave: the note at E-flat can be a mix of the wavetables for the C
below and the F# above.  it's like the F# is pitched down 3 semitones
and the C is pitched up 3 semitones and the Eb is a phase-aligned mix of
the two.  this can be done with any harmonic or quasi-periodic
instrument, even a piano (but maybe you will need more than 2 splits per
octave).
Post by Nigel Redmon
Hard sync is another topic...
hard sync is sorta hard, but still very doable with wavetable (and
morphing along one dimension) as long as one is willing to put a lotta
memory into it.  each incremental change in the slave/master frequency
ratio (which is a timbre control) will require a separate wavetable to
cross-fade into and out of.
 
--
r b-j                         ***@audioimagination.com
"Imagination is more important than knowledge."
Nigel Redmon
2018-08-06 17:57:08 UTC
Hi Robert,

On the drop-sample issue:

Yes, it was a comment about “what you can get away with”, not about precision. First, it’s frequency dependent (a sample or half is a much bigger relative error for high frequencies), and depends on frequency content (harmonics), relative harmonic amplitudes (upper harmonics are usually low amplitude), relative oversampling (at constant table size, higher tables are more oversampled), etc. So, for a sawtooth for instance, the tables really don’t have to be so big before it’s awfully hard to hear the difference between that and linear interpolation. The 512k table analysis is probably similar to looking at digital clipping and figuring out the oversampling ratio needed. Run the numbers, and the answer would be that you need to upsample to something like a 5 MHz rate (using that number only because I recall reading that conclusion of someone’s analysis once). In reality, you can get away with much less, because when you calculate worst-case expected clipping levels, you tend to forget you’ll have a helluva time hearing the aliased tones amongst all the “correct” harmonic grunge, despite what your dB attenuation calculations tell you. :-)

To be clear, though, I’m not advocating zero-order—linear interpolation is cheap. The code on my website was a compile-time switch, for the intent that the user/student can compare the effects. I think it’s good to think about tradeoffs and why some may work and others not. In the mid ‘90s, I went to work for my old Oberheim mates at Line 6. DSP had been a hobby for years, and at a NAMM Marcus said hey why don’t you come work for us. Marcus was an analog guy, not a DSP guy. But he taught me one important thing, early on: The topic of interpolation, or lack thereof, came up. I remember I said something like, “but that’s terrible!”. He replied that every Alesis synth ever made was drop sample. He didn’t have to say more—I immediately realized that probably most samplers ever made were that way (not including E-mu), because samplers typically didn’t need to cater to the general case. You didn’t need to shift pitch very far, because real instruments need to be heavily multisampled anyway. And, of course, musical instruments only need to sound “good” (subjective), not fit an audio specification.

It’s doubtful that people are really going to come down to a performance issue where skipping linear interpolation is the difference between realizing a plugin or device and not. On the old Digidesign TDM systems, I ran into similar tradeoffs often, where I’d have to do something that on paper seemed to be a horrible thing to do, but to the ear it was fine, and it kept the product viable by staying within the available cycle count.
Post by robert bristow-johnson
Nigel, i remember your code didn't require big tables and you could have each wavetable a different size (i think you had the accumulated phase be a float between 0 and 1 and that was scaled to the wavetable size, right?) but then that might mean you have to do better interpolation than linear, if you want it clean.
Similar to above—for native computer code, there’s little point in variable table sizes, mainly a thought exercise. I think somewhere in the articles I also noted that if you really needed to save table space (say in ROM, in a system with limited RAM to expand), it made sense to reduce/track the table sizes only up to a point. I think I gave an audio example of one that tracked octaves with halved table lengths, but up to a minimum of 64 samples. Again, this was mostly a thought exercise, exploring the edge of overt aliasing.

Hope it’s apparent I was just going off on a thought tangent there, some things I think are good to think about for people getting started. Would have been much shorter if just replying to you ;-)

Robert
Nigel Redmon
2018-08-06 23:14:50 UTC
Arg, no more lengthy replies while needing to catch a plane. Of course, I didn’t mean to say Alesis synths (and most others) were drop sample—I meant linear interpolation. The point was that stuff that seems to be substandard can be fine under musical circumstances...

Sent from my iPhone
Ethan Duni
2018-08-06 22:37:35 UTC
Post by robert bristow-johnson
i, personally, would rather see a consistent method used throughout the
MIDI keyboard range

If you squint at it hard enough, you can maybe convince yourself that the
naive sawtooth generator is just a memory optimization for low-frequency
wavetable entries. I mean, it does a perfect job at DC right? :]



Phil Burk
2018-08-07 04:59:54 UTC
Permalink
On Sun, Aug 5, 2018 at 4:27 PM, robert bristow-johnson <
***@audioimagination.com> wrote:

i, personally, would rather see a consistent method used throughout the
MIDI keyboard range; high notes or low. it's hard to gracefully transition
from one method to a totally different method while the note sweeps. like
what if portamento is turned on? the only way to clicklessly jump from
wavetable to a "naive" sawtooth would be to crossfade. but crossfading to
a wavetable richer in harmonics is already built in.
Yes. I crossfade between two adjacent wavetables. It is just at the bottom
that I switch to the "naive sawtooth". I want to be able to sweep the
frequency through zero to negative frequency. So I need a signal near zero.
But as I get closer to zero I need an infinite number of octaves. So the
region near zero has to be handled differently anyway.
and what if the "classic" waveform wasn't a saw but something else? more
general?
I only use the MultiTable for the Sawtooth. Then I generate Square and
Pulse from two Sawteeth.
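
For the archive, the two-sawtooth trick is roughly the following sketch in C;
saw() is a placeholder for whatever bandlimited sawtooth read is in use, not
JSyn's actual API:

float saw(double phase);   /* placeholder: bandlimited saw, phase in [0,1) */

/* pulse as the difference of two bandlimited saws (sketch).
   width = 0.5 gives a square. both saws come from the same tables,
   so their residual aliasing cancels identically, and the difference
   of two zero-mean saws carries no DC for any width. */
float pulse_from_saws(double phase, double width)
{
    double p2 = phase + width;
    if (p2 >= 1.0) p2 -= 1.0;    /* wrap the shifted phase */
    return saw(phase) - saw(p2);
}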
Also note that for the octave between Nyquist and Nyquist/2 I use a
table with a pure sine wave. If I added a harmonic in that range then it
would be above the Nyquist.

Phil Burk
robert bristow-johnson
2018-08-07 05:53:02 UTC
Permalink
---------------------------- Original Message ----------------------------

Subject: Re: [music-dsp] Antialiased OSC

From: "Phil Burk" <***@mobileer.com>

Date: Tue, August 7, 2018 12:59 am

To: "robert bristow-johnson" <***@audioimagination.com>

"A discussion list for music-related DSP" <music-***@music.columbia.edu>

--------------------------------------------------------------------------
Post by Phil Burk
On Sun, Aug 5, 2018 at 4:27 PM, robert bristow-johnson <
i, personally, would rather see a consistent method used throughout the
MIDI keyboard range; high notes or low. it's hard to gracefully transition
from one method to a totally different method while the note sweeps. like
what if portamento is turned on? the only way to clicklessly jump from
wavetable to a "naive" sawtooth would be to crossfade. but crossfading to
a wavetable richer in harmonics is already built in.
Yes. I crossfade between two adjacent wavetables. It is just at the bottom
that I switch to the "naive sawtooth". I want to be able to sweep the
frequency through zero to negative frequency.
okay, i get it.  if DC wasn't the bottom, but some octave on the keyboard, it would be a saw with sample values very close to the "naive sawtooth" ramp but could still have some limit to harmonics.
Post by Phil Burk
So I need a signal
near zero.
Post by Phil Burk
But as I get closer to zero I need an infinite number of octaves.
yup.
Post by Phil Burk
So the region near zero has to be handled differently anyway.
and what if the "classic" waveform wasn't a saw but something else? more
general?
I only use the MultiTable for the Sawtooth. Then I generate Square and
Pulse from two Sawteeth.
yup. that works.  detune them slightly and it sounds like a monster analog synth.
Post by Phil Burk
Also note that for the octave between Nyquist and Nyquist/2 that I use a
table with a pure sine wave. If I added a harmonic in that range then it
would be above the Nyquist.
that i would expect.  pretty high octave.
the octave below that would have a sine and its octave up.  the octave below that would have the sine (at the fundamental) and three harmonics above it...
 
--



r b-j                         ***@audioimagination.com



"Imagination is more important than knowledge."

Sampo Syreeni
2018-11-01 01:35:17 UTC
Permalink
Post by Phil Burk
I crossfade between two adjacent wavetables.
Yes. Now the question is, how to fade between them, optimally.

I once again don't have any math to back this up, but intuition says the
mixing function ought to be something like a sinc function or a raised
cosine, at the lower rate. Because of the inherent bandlimit. And then
there's the ability of such linear phase thingies to be turned into
one-off interpolation thingies.

Doing it at the lower rate, for the lower wavetable, would seem to be
the easiest, while holding to band limitation.
--
Sampo Syreeni, aka decoy - ***@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
robert bristow-johnson
2018-11-01 19:17:49 UTC
Permalink
---------------------------- Original Message ----------------------------

Subject: Re: [music-dsp] Antialiased OSC

From: "Sampo Syreeni" <***@iki.fi>

Date: Wed, October 31, 2018 9:35 pm

To: ***@mobileer.com

"A discussion list for music-related DSP" <music-***@music.columbia.edu>

Cc: "robert bristow-johnson" <***@audioimagination.com>

--------------------------------------------------------------------------
Post by Sampo Syreeni
Post by Phil Burk
I crossfade between two adjacent wavetables.
Yes. Now the question is, how to fade between them, optimally.
I once again don't have any math to back this up, but intuition says the
mixing function ought to be something like a sinc function or a raised
cosine, at the lower rate. Because of the inherent bandlimit. And then
there's the ability of such linear phase thingies to be turned into
one-off interpolation thingies.
Doing it at the lower rate, for the lower wavetable, would seem to be
the easiest, while holding to band limitation.
interpolating between samples of a wavetable and crossfading between wavetables are different issues.
if this wavetable synthesis is for the purpose of synthesizing a bandlimited saw, square, triangle, PWM, sync saw, sync square, then
your adjacent wavetables going up and down the keyboard should be identical except one will have more of the top harmonics set to zero.
i think a linear crossfade, mixing only the two adjacent wavetables, is the correct way to do it.
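
In code that crossfade is about one line per sample. A sketch in C, assuming
some linearly interpolated table read (here called table_read_linear(), a
placeholder name) and a mix value derived from where the pitch sits between
the two tables' split points:

float table_read_linear(const float *table, int N, double phase);

/* linear crossfade between the two wavetables bracketing the current
   pitch (sketch). lo has more harmonics; hi is identical except the
   top harmonics are zeroed. mix goes 0 -> 1 as the pitch rises. */
float crossfaded_read(const float *lo, const float *hi, int N,
                      double phase, double mix)
{
    float a = table_read_linear(lo, N, phase);
    float b = table_read_linear(hi, N, phase);
    return (float)((1.0 - mix) * a + mix * b);
}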

--



r b-j                         ***@audioimagination.com



"Imagination is more important than knowledge."

Ethan Duni
2018-11-01 21:22:15 UTC
Permalink
Well you definitely want a monotonic, equal-amplitude crossfade, and
probably also time symmetry. So I think sinc is right out.

In terms of finer design considerations it depends on the time scale. For
longer crossfades (>100ms), steady-state considerations apply, and you can
design for frequency domain characteristics. I.e., raised cosine, half of
your favorite analysis window, etc.

But for shorter crossfades (particularly 20ms and below), time domain
considerations dominate and you want to minimize the max slope of the
crossfade curve. So a linear crossfade is indicated here.

Of course linear crossfade is also the cheapest option, so you really need
a reason *not* to use it.
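
A sketch in C of the two laws being contrasted; the names are made up:

#include <math.h>

/* equal-amplitude crossfade gain pair at position x in [0,1] (sketch).
   linear: minimal max slope, the short-fade choice.
   raised cosine: smoother endpoints, a typical long-fade choice. */
void crossfade_gains(double x, int use_raised_cosine,
                     double *gain_out, double *gain_in)
{
    double g = use_raised_cosine ? 0.5 - 0.5 * cos(M_PI * x) : x;
    *gain_in  = g;          /* incoming signal's gain           */
    *gain_out = 1.0 - g;    /* gains sum to 1 (equal-amplitude) */
}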

Ethan (D)

On Thu, Nov 1, 2018 at 12:18 PM robert bristow-johnson <
Post by robert bristow-johnson
---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] Antialiased OSC
Date: Wed, October 31, 2018 9:35 pm
--------------------------------------------------------------------------
Post by Sampo Syreeni
Post by Phil Burk
I crossfade between two adjacent wavetables.
Yes. Now the question is, how to fade between them, optimally.
I once again don't have any math to back this up, but intuition says the
mixing function ought to be something like a sinc function or a raised
cosine, at the lower rate. Because of the inherent bandlimit. And then
there's the ability of such linear phase thingies to be turned into
one-off interpolation thingies.
Doing it at the lower rate, for the lower wavetable, would seem to be
the easiest, while holding to band limitation.
interpolating between samples of a wavetable and crossfading between
wavetables are different issues.
if this wavetable synthesis is for the purpose of synthesizing a
bandlimited saw, square, triangle, PWM, sync saw, sync square, then your
adjacent wavetables going up and down the keyboard should be identical
except one will have more of the top harmonics set to zero.
i think a linear crossfade, mixing only the two adjacent wavetables, is
the correct way to do it.
--
"Imagination is more important than knowledge."
_______________________________________________
dupswapdrop: music-dsp mailing list
https://lists.columbia.edu/mailman/listinfo/music-dsp
Phil Burk
2018-08-03 23:28:05 UTC
Permalink
Hello Kevin,

There are some antialiased oscillators in JSyn (Java) that might interest
you.

Harmonic table approach
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorBL.java

Square generated from Sawtooth
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SquareOscillatorBL.java

Differentiated Parabolic Waveform - Simpler, faster and almost as clean.
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorDPW.java

Phil Burk
Post by Kevin Chi
Hi,
Is there such a thing as today's standard for softSynth antialiased
oscillators?
I was looking up PolyBLEP oscillators, and was wondering how it would relate
to a 1-2 waveTables per octave based oscillator or maybe to some other
algos.
thanks for any ideas and recommendations in advance,
Kevin
_______________________________________________
dupswapdrop: music-dsp mailing list
https://lists.columbia.edu/mailman/listinfo/music-dsp
Ross Bencina
2018-08-04 06:12:24 UTC
Permalink
Hi Kevin,

Wavetables are for synthesizing ANY band-limited *periodic* signal. On
the other hand, the BLEP family of methods are for synthesizing
band-limited *discontinuities* (first order, and/or higher order
discontinuities).

It is true that BLEP can be used to synthesize SOME bandlimited periodic
signals (typically those involving a few low-order discontinuities per
cycle such as square, saw and triangle waveforms). In this case, BLEP can
be thought of as adding "corrective grains" that cancel out the aliasing
present in a naive (aliased) waveform that has sample-aligned
discontinuities. The various __BL__ methods tend to vary in how they
synthesize the corrective grains.
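
For reference, one popular corrective grain is the two-sample polynomial
approximation of the bandlimited-step residual. A sketch in C of that variant,
applied to a naive saw; this is one common form, not necessarily the exact
polynomial of any particular paper:

/* 2-point polyBLEP residual (sketch). t is phase in [0,1) with the
   discontinuity at t = 0 (and its wrap at t = 1); dt is the phase
   increment per sample. */
double poly_blep(double t, double dt)
{
    if (t < dt) {                 /* first sample after the edge  */
        t /= dt;
        return t + t - t * t - 1.0;
    }
    if (t > 1.0 - dt) {           /* last sample before the edge  */
        t = (t - 1.0) / dt;
        return t * t + t + t + 1.0;
    }
    return 0.0;                   /* far from the edge: no change */
}

/* naive saw corrected at its one downward step per cycle (sketch) */
double saw_polyblep(double t, double dt)
{
    return (2.0 * t - 1.0) - poly_blep(t, dt);
}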

If all you need to do is synthesize bandlimited periodic signals, I
don't see many benefits to BLEP methods over wavetable synthesis.

Where BLEP comes into its own (in theory at least) is when the signal
contains discontinuities that are not synchronous with the fundamental
period. The obvious application is hard-sync, where discontinuities vary
relative to the phase of the primary waveform.

The original BLEP method used wavetables for the corrective grains, so
has no obvious performance benefit over periodic wavetable synthesis.
Other derivative methods (maybe polyBLEP?) don't use wavetables for the
corrective grains, so they might potentially have benefits in settings
where there is limited RAM or where the cost of memory access and/or
cache pollution is large (e.g. modern memory hierarchies) -- but you'd
need to measure!

You mention frequency modulation. A couple of thoughts on that:

(1) With wavetable switching, frequency modulation will cause high
frequency harmonics to fade in and out as the wavetables are crossfaded
-- a kind of amplitude modulation on the high-order harmonics. The
strength of the effect will depend on the amount of high-frequency
content in the waveform, and the number of wavetables (per octave, say):
Fewer wavetables per octave will cause lower-frequency harmonics to be
affected; more wavetables per octave will lessen the effect on low-order
harmonics, but will increase the rate of amplitude modulation.
To some extent you can push this AM effect to higher frequencies by
allowing some aliasing (say above 18kHz). You could eliminate the AM
effect entirely with 2x oversampling.

(2) With BLEP-type methods, full alias suppression is dependent on
generating corrective bandlimited pulses for all non-zero higher-order
derivatives of the signal. Unless your original signal is a
square/rectangle waveform, any frequency modulation will introduce
additional derivative terms (product rule) that need to be compensated.
For sufficiently low frequency, low amplitude modulation you may be able
to ignore these terms, but beyond some threshold they will become
significant and would need to be dealt with. I don't recall how PolyBLEP
deals with higher-order corrective terms.

In any case, my main point here is that BLEP methods don't magically
support artifact-free frequency modulation (except maybe for square waves).

In the end I don't think there's one single standard, because there are
mutually exclusive trade-offs to be made. The design space includes:

Features:

- Supported frequency modulation? (just pitch bend? low frequency LFOs?
audio-rate modulation?).

- Support for hard sync?

- Support for arbitrary waveforms?

Audio quality:

- Allowable aliasing specification (e.g. 120 dB down over the whole audio
spectrum, or 70 dB down below 10 kHz, etc.)

- High-frequency harmonic modulation margin under FM?

Compute performance:

- RAM usage

- CPU usage per voice

And of course:

- Development time/cost (initial and cost of adding features)


Today, desktop CPUs are fast enough to support the most
difficult-to-achieve synthesis capabilities with no measurable audio
artifacts. There are plugins that aim for that, and use a whole CPU core
to synthesize a single voice. There is a market for that. But if your
goal is a 128-voice polysynth that uses a maximum of 2% CPU on a
smartphone then you may not want to aim for say, completely-alias-free
hard-sync of audio-rate frequency modulated arbitrary waveforms.

Cheers,

Ross.
Post by Kevin Chi
Hi,
Is there such a thing as today's standard for softSynth antialiased
oscillators?
I was looking up PolyBLEP oscillators, and was wondering how it would relate
to a 1-2 waveTables per octave based oscillator or maybe to some other
algos.
thanks for any ideas and recommendations in advance,
Kevin
_______________________________________________
dupswapdrop: music-dsp mailing list
https://lists.columbia.edu/mailman/listinfo/music-dsp
robert bristow-johnson
2018-08-06 05:14:39 UTC
Permalink
---------------------------- Original Message ----------------------------

Subject: Re: [music-dsp] Antialiased OSC

From: "Ross Bencina" <rossb-***@audiomulch.com>

Date: Sat, August 4, 2018 2:12 am

To: "A discussion list for music-related DSP" <music-***@music.columbia.edu>

--------------------------------------------------------------------------
Post by Ross Bencina
Hi Kevin,
Wavetables are for synthesizing ANY band-limited *periodic* signal.
...
Post by Ross Bencina
If all you need to do is synthesize bandlimited periodic signals, I
don't see many benefits to BLEP methods over wavetable synthesis.
i might suggest prepending "quasi-" to "periodic".  notes don't have to be perfectly periodic, just *mostly* periodic.  one cycle in this quasi-periodic waveform should look just like its
adjacent cycles on the left or right.  but it wouldn't necessarily look like a cycle 1 second away.
Post by Ross Bencina
(1) With wavetable switching, frequency modulation will cause high
frequency harmonics to fade in and out as the wavetables are crossfaded
-- a kind of amplitude modulation on the high-order harmonics. The
strength of the effect will depend on the amount of high-frequency
Fewer wavetables per octave will cause lower-frequency harmonics to be
affected; more wavetables per octave will lessen the effect on low-order
harmonics, but will increase the rate of amplitude modulation.
To some extent you can push this AM effect to higher frequencies by
allowing some aliasing (say above 18kHz). You could eliminate the AM
effect entirely with 2x oversampling.
two things:  1.  the AM effect should *not* affect the lower harmonics that are not potential aliases.  as you cross-fade, those harmonics are identical in amplitude and phase in the beginning and ending wavetables.  it's only the
highest harmonics that start to fade out (before they alias) as the pitch increases.
2. it's possible to show that if you have a sample rate of 48 kHz (so 24 kHz is the "folding frequency") and two wavetables per octave you can make all this nasty harmonic aliasing (and variation in
amplitude) happen above 19.88 kHz.
at the bottom of the half-octave split, the fundamental is f0.  the Nth harmonic is right at 19.88 kHz, so N = (19.88 kHz)/f0, and there are no other harmonics above it.  at the top of the split the fundamental is 2^(1/2)*f0 and the Nth harmonic is
at 2^(1/2)*(19.88 kHz) = 28.11 kHz, if there were no fold-over.  but since there is, this top harmonic folds over to 48 kHz - 28.11 kHz = 19.89 kHz.  admittedly that's a little tight for fading it out and fading in the wavetable that has its fundamental at 2^(1/2)*f0 and top
harmonic at 19.88 kHz, but you can back off a little, do all of this above 19 kHz, and put a brickwall LPF at 19 kHz.  i know i (at age 62) won't be missing any harmonics above 19 kHz.  below 19 kHz, every harmonic is a true harmonic (unaliased) and unchanging in
amplitude.
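
The same arithmetic as a tiny C sketch, for trying other split spacings and
sample rates (the function name is made up):

#include <math.h>

/* where the top kept harmonic lands at the top of a split (sketch).
   f_top: highest kept harmonic at the bottom of the split (Hz),
   spacing: split width in octaves, fs: sample rate (Hz). */
double worst_case_alias(double f_top, double spacing, double fs)
{
    double f = f_top * pow(2.0, spacing);   /* harmonic at top of split */
    return (f > 0.5 * fs) ? fs - f : f;     /* fold if above Nyquist    */
}

/* worst_case_alias(19880.0, 0.5, 48000.0) comes out near 19.9 kHz,
   matching the 48 kHz, two-tables-per-octave example above. */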
another thing you can do is make your splits be smaller than 6 semitone spacing.
and, of course, if the sample rate is 96 kHz, no one will be hearing any aliasing nor loss of harmonics below even 24 kHz.  but at 96 kHz, maybe you can get away with "naive"
sawtooths and square and maybe even naive hard-sync.  maybe not.
 
--


r b-j                         ***@audioimagination.com



"Imagination is more important than knowledge."

Nigel Redmon
2018-08-04 16:57:57 UTC
Permalink
Robert mentioned I have some code (and more importantly, I think, discussion):

http://www.earlevel.com/main/category/digital-audio/oscillators/wavetable-oscillators/?order=ASC
Hi,
Is there such a thing as today's standard for softSynth antialiased oscillators?
I was looking up PolyBLEP oscillators, and was wondering how it would relate
to a 1-2 waveTables per octave based oscillator or maybe to some other algos.
thanks for any ideas and recommendations in advance,
Kevin
Theo Verelst
2018-08-22 15:00:29 UTC
Permalink
Hi,
Is there such a thing as today's standard for softSynth antialiased oscillators?
It's easier to know that for Open Source products, which use a variety of methods as
far as I checked, and nothing out there I've heard (probably some stuff from the best known
packages) is "standard worthy" in terms of deep understanding of the underlying
theory, or even made with quantitative error bounds in mind. Parts of some hardware based
oscillator designs probably excepted.

There's a lot of ways to look at wave tables and their use, for instance the way I used
one quite some years ago in a, for the time, advanced enough Open Source hardware
analog synthesizer simulation, mostly dealing with signal aliasing/reconstruction errors
(don't confuse the two) and the possibility to store accurate waveforms that are hard
to compute in real time.

The dealings with the harmonic distortion from interpolating in the wave table, and its
subsequent effects on the rest of the signal chain, are often done in such a way as to
take a few parts of the relevant theory while ignoring the others, leading to a rather
dead and bland sound in many commercial products, simply because preparing the samples
for DA conversion is hard to do, and even relatively simple interpolations cost work,
and it isn't easy to make a scheme based on WTs that overall gives guaranteed good
results, including pitch bends, fast and slow modulations and some sort of consistent
sound feel.

It's possible to make a real design, which tries to eliminate serious distortion, and
works for a number of musical applications with a lot better sound, in terms of
high fidelity, but thus far that's above the reach of the people in this group,
to my knowledge.


T.V.
Vladimir Pantelic
2018-08-22 17:02:26 UTC
Permalink
Post by Theo Verelst
There's a lot of ways to look at wave tables and their use, for instance the way
I used one quite some years ago in a for the time advanced enough Open Source
hardware analog synthesizer simulation
can you name it?
Theo Verelst
2018-09-01 14:30:31 UTC
Permalink
Content-wise, you also need to consider what the meaning is of
the terminology "anti aliased"; technically, it means you prevent
the double interpretation of wave partials, which is usually associated
with AD or DA conversion: high frequency components that are
put into an AD converter will be "aliased" because they will mirror around
the Nyquist frequency and become indistinguishable from a similar frequency in the
input signal.

A more common thing to think about in software waveform synthesis, apart
from this principle but then in the "virtual" sampling of a waveform, is
to consider the error of an actual DAC (digital to analog converter, like your
sound card has), as compared with a "perfect reconstructor", which would take
your properly bandwidth limited signal (to prevent aliasing) and (given a very
long latency) turn it into a perfect output signal from your sound card.

The DAC in your soundcard will not do this job perfectly, even if you're
perfectly anti-aliasing or bandwidth limiting your digital signal. That's because
the sampling reconstruction theorem calls for a very long filter, while
an actual DAC has a very short reconstruction filter.

One of the effects of this limitation is probably the most important to consider
for musical instruments producing sound which will be amplified into the higher Decibel
domains: mid frequency blare. Especially in highly resonant spaces, like those with
un-damped parallel reflective walls, certain sound wave patterns tend to amplify through
reverberation, causing a lot of clutter in the sensitive range of human hearing, the
middle frequencies (let's say 1000 through 4000 Hz). This "blare" becomes louder because
of various digital processing and DAC reconstruction ensemble effects, and preferably
should be controlled.

So especially for serious "live" music reproduction, measures ought to be in place to
control the amount (and kind) of blare your software instrument produces, probably at a
higher priority than the exact type and amount of "anti-aliasing" you provide.

Theo V.
Kevin Chi
2018-08-03 22:44:28 UTC
Permalink
Thank you for the quick ideas, the code looks nice.

Currently I am not designing a wavetable OSC, I am just trying to do
some basic VA waveforms (saw, tri, square), so naively I thought
I'd just set up the 15-20 tables per waveform from Fourier series whenever I
start the app/plugin and that could do the job. Then interpolate
between the samples of the closest wavetables. But maybe I am too naive? :)
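
Something like the following sketch in C; the table length, the per-octave
layout and the saw spectrum are illustrative choices for this example, not
recommendations:

#include <math.h>

#define TABLE_LEN 2048   /* illustrative size */

/* build one bandlimited saw table per octave (sketch). table t serves
   pitches up to f0 * 2^(t+1) and keeps only the harmonics that stay
   below Nyquist at that top pitch. */
void build_octave_saw_tables(float tables[][TABLE_LEN], int num_tables,
                             double fs, double f0)
{
    for (int t = 0; t < num_tables; t++) {
        double f_top = f0 * pow(2.0, t + 1);    /* top pitch served    */
        int    max_h = (int)(0.5 * fs / f_top); /* harmonics kept      */
        for (int n = 0; n < TABLE_LEN; n++) {
            double x = 0.0;
            for (int k = 1; k <= max_h; k++)    /* saw: 1/k amplitudes */
                x += sin(2.0 * M_PI * k * n / TABLE_LEN) / k;
            tables[t][n] = (float)(x * (2.0 / M_PI));
        }
    }
}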

With wavetables I was also wondering how one can act when there is a
pitch modulation of a running note that will go
up/down an octave... as I understand it, this would mean that if I don't
switch wavetables at a certain pitch mod, then it will
introduce more and more aliasing. But checking this at every sample
sounds like overhead that maybe shouldn't be there.

--
Kevin @ Fine Cut Bodies
Mailto: ***@finecutbodies.com
Web: http://finecutbodies.com
LinkedIn: http://lnkd.in/XZiSjF
lemme know if this doesn't work: https://www.dropbox.com/s/cybcs7tgzgplnwc/wavetable_oscillator.c
remember that this is *synthesis* code.  it does not extract wavetables from a sampled sound (i wrote a paper a quarter century ago on how to do that, but it's not code).
nor does it define bandlimited square, saw, PWM, hard-sync, whatever.  that's a sorta difficult problem, but one that someone has for sure solved and we can discuss here how to do that (perhaps in MATLAB).  extracting wavetables from sampled notes requires pitch detection/tracking and
interpolation.
robert bristow-johnson
2018-08-03 22:50:53 UTC
Permalink
---------------------------- Original Message ----------------------------

Subject: Re: [music-dsp] Antialiased OSC

From: "Kevin Chi" <***@finecutbodies.com>

Date: Sat, August 4, 2018 2:44 am

To: music-***@music.columbia.edu

--------------------------------------------------------------------------
Post by Kevin Chi
Thank you for the quick ideas, the code looks nice.
Currently I am not designing a wavetable OSC, I am just trying to do
some basic VA waveforms (saw, tri, square), so naively I thought
I'd just set up the 15-20 tables per waveform from Fourier series whenever I
start the app/plugin and that could do the job. Then interpolate
between the samples of the closest wavetables. But maybe I am too naive? :)
With wavetables I was also wondering how one can act when there is a
pitch modulation of a running note that will go
up/down an octave... as I understand it, this would mean that if I don't
switch wavetables at a certain pitch mod, then it will
introduce more and more aliasing. But checking this at every sample
sounds like overhead that maybe shouldn't be there.
 
as the pitch modulates, let's say the pitch is increasing, you *crossfade* from a wavetable with more harmonics to a wavetable that is identical except with fewer harmonics.
as the pitch is decreasing you crossfade from a wavetable with fewer harmonics to one with more
harmonics.

--



r b-j                         ***@audioimagination.com



"Imagination is more important than knowledge."

robert bristow-johnson
2018-08-04 22:17:59 UTC
Permalink
I am not sure what a "pure sawtooth phasor" is.  Do you mean a "naive sawtooth" a.k.a. a ramp function?
The technique that I have suggested is, say, 4096 samples for all active wavetables so that alignment and crossfading are simpler.  In a software synthetic that runs on a modern computer, the waste of memory does not seem to be salient.  4096 × 4 × 64 = 1 meg.  That's 64 wavetables for some instrument.
Of course the wavetables for higher pitches will have fewer harmonics.

--r b-j                     ***@audioimagination.com
"Imagination is more important than knowledge."




-------- Original message --------
From: Phil Burk <***@mobileer.com>
Date: 8/4/2018 1:39 PM (GMT-08:00)
To: A discussion list for music-related DSP <music-***@music.columbia.edu>
Subject: Re: [music-dsp] Antialiased OSC

On Sat, Aug 4, 2018 at 10:53 AM, Nigel Redmon <***@earlevel.com> wrote:
With a full-bandwidth saw, though, the brightness is constant. That takes more like 500 harmonics at 40 Hz, 1000 at 20 Hz. So, as Robert says, 2048 or 4096 are good choices (for both noise and harmonics).
As I change frequencies above 86 Hz, I interpolate between wavetables with 1024 samples. For lower frequencies I interpolate between a bright wavetable and a pure sawtooth phasor that is not band limited. That way I can use the same oscillator as an LFO.
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/engine/MultiTable.java#L167

Phil Burk
Ross Bencina
2018-08-05 04:17:25 UTC
Permalink
Hi Robert,
Post by robert bristow-johnson
In a software
synthetic that runs on a modern computer, the waste of memory does not
seem to be salient.  4096 × 4 × 64 = 1 meg.  Thats 64 wavetables for
some instrument.
The salient metric is amortized number of L1/L2/L3 cache misses per
sample lookup.

Ross.
robert bristow-johnson
2018-08-05 05:19:25 UTC
Permalink
---------------------------- Original Message ----------------------------

Subject: Re: [music-dsp] Antialiased OSC

From: "Ross Bencina" <rossb-***@audiomulch.com>

Date: Sun, August 5, 2018 12:17 am

To: music-***@music.columbia.edu

--------------------------------------------------------------------------
Post by Ross Bencina
Hi Robert,
Post by robert bristow-johnson
In a software
synthetic that runs on a modern computer, the waste of memory does not
seem to be salient.  4096 × 4 × 64 = 1 meg.  Thats 64 wavetables for
some instrument.
i meant to say "In a software synth" and the android spell-fascist insisted on "synthetic".
Post by Ross Bencina
The salient metric is amortized number of L1/L2/L3 cache misses per
sample lookup.
that's a good point.  it's sorta one reason that the code i posted in dropbox processes samples in blocks.  but since the wavetables are so large (so that we can get away with linear interpolation between samples), the pointer is striding through the wavetable in
big steps.  and then there is the morphing with other wavetables that are stored somewhere else.
caching is complicated.  besides morphing/mixing with other wavetables in up to, say, 3 dimensions of control, those other wavetables can be anywhere.  i think it would be pretty
hard to avoid L1 cache misses (each wavetable is 16K).
i suppose the rows and columns of the wavetables can be switched so that samples having the same index number in each wavetable are stored interleaved and adjacent to each other; then mixing between the wavetables (for a single output
sample) would be less of a problem.  but the pointer will still stride from the beginning to the end of the wavetable in just a few samples.
i hadn't thought much about cache misses before.  seems to me the only thing that can maybe help is to reorder ("reshape" in MATLAB) the array
of wavetables, grouping together samples of the same sample index in the waveform.  in 3-dimensional mixing, the code that i posted would mix samples from 8 different wavetables out of a larger constellation.  and, in the simple-minded "all wavetables the same size" approach of mine
(i think Nigel avoids this problem in his code), we know that in one output sample computation, the index is the same for all of the wavetables.  i just dunno what else can be done to reduce L1 cache misses.  looks like an L3 cache miss is not a
problem.
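
As a sketch of that reordering in C (names made up): sample n of every
wavetable sits contiguous, so mixing several tables at one phase index touches
one neighborhood of memory instead of several distant ones:

#include <stddef.h>

/* interleaved ("reshaped") wavetable storage (sketch).
   flat[n * num_tables + t] replaces tables[t][n]: the samples of all
   tables at index n are adjacent in memory. */
static inline float interleaved_read(const float *flat, int num_tables,
                                     int n, int t)
{
    return flat[(size_t)n * num_tables + t];
}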
 
 

--



r b-j                         ***@audioimagination.com



"Imagination is more important than knowledge."

Scott Gravenhorst
2018-08-07 00:06:02 UTC
Permalink
Post by Nigel Redmon
Arg, no more lengthy replies while needing to catch a plane. Of
course, I didn't mean to say Alesis synths (and most others) were
drop sample -- I meant linear interpolation. The point was that stuff
that seems to be substandard can be fine under musical
circumstances...
This is an important point. I see so often developer approaches that start
at the most complex and most computationally burdensome when simpler
approaches work well enough to pass the "good enough" test. I start at the
low end and if I can hear stuff I don't like, then I'll try more "advanced"
techniques and that has served me very well.

You may now start heaving rotten eggs and expired vegetables at me :)

-- ScottG
________________________________________________________________________
-- Scott Gravenhorst
-- http://scott.joviansynth.com/
-- When the going gets tough, the tough use the command line.
-- Matt 21:22
Andrew Simper
2018-08-07 01:17:09 UTC
Permalink
I definitely agree here, start with the easy approach, then put in more
effort when it's needed - but keep in mind you won't be able to get decent
feedback from non-dsp people until the final quality version is done.

If the code is not a key part of your product then you can even take
another step back, if you can find someone else's code that does what you
want then just use that! If that code has detailed information on how it
was derived then if you do need to make a change in the future you can put
in the work later on to understand how it was done and make changes as
needed.

I really appreciate RBJ's Audio EQ Cookbook for this sort of approach, and
hopefully I have helped people with the technical papers I've done too for
the non-LTI (modulation) case of linear 2-pole resonant filters / EQ.

Cheers,

Andy
Post by Scott Gravenhorst
Post by Nigel Redmon
Arg, no more lengthy replies while needing to catch a plane. Of
course, I didn't mean to say Alesis synths (and most others) were
drop sample -- I meant linear interpolation. The point was that stuff
that seems to be substandard can be fine under musical
circumstances...
This is an important point. I see so often developer approaches that start
at the most complex and most computationally burdensome when simpler
approaches work well enough to pass the "good enough" test. I start at the
low end and if I can hear stuff I don't like, then I'll try more "advanced"
techniques and that has served me very well.
You may now start heaving rotten eggs and expired vegetables at me :)
-- ScottG
________________________________________________________________________
-- Scott Gravenhorst
-- http://scott.joviansynth.com/
-- When the going gets tough, the tough use the command line.
-- Matt 21:22
_______________________________________________
dupswapdrop: music-dsp mailing list
https://lists.columbia.edu/mailman/listinfo/music-dsp
robert bristow-johnson
2018-08-07 01:54:35 UTC
Permalink
---------------------------- Original Message ----------------------------

Subject: Re: [music-dsp] Antialiased OSC

From: "Scott Gravenhorst" <***@gte.net>

Date: Mon, August 6, 2018 8:06 pm

To: music-***@music.columbia.edu

--------------------------------------------------------------------------
Post by Nigel Redmon
Arg, no more lengthy replies while needing to catch a plane. Of
course, I didn't mean to say Alesis synths (and most others) were
drop sample -- I meant linear interpolation. The point was that stuff
that seems to be substandard can be fine under musical
circumstances...
This is an important point. I see so often developer approaches that start
at the most complex and most computationally burdensome when simpler
approaches work well enough to pass the "good enough" test. I start at the
low end and if I can hear stuff I don't like, then I'll try more "advanced"
techniques and that has served me very well.
well, we might have different definitions about what "complex and computationally burdensome" is.  i don't always measure it as whatever takes the least MIPS.  sometimes it's a computational burden to toss in a conditional execution instruction.  sometimes it's less of a
computational burden to just extend the model to use more resources, like more wavetables just after NoteOn to cover the attack.  i just think that 4K-word wavetables can cover 10+ octaves (but there's a lotta redundancy in those wavetables for the high pitches), so the easiest
way to do it simply is to do it the same old way, but throw enough wavetable length at it.
again, this is a softsynth running on a platform with ca. a gig of memory.  the cost is about 16K per wavetable.  maybe an instrument needs 64 wavetables laid out in some array to cover
modulation in all of the dimensions.  that's a meg for one instrument.
i'll admit it's an aesthetic issue.  even though i do it too, i dislike if() statements in the real-time processing code.  it's just that this is how i would extend the range of an instrument, rather than grafting
this wavetable biz onto the "naive" method (which might produce some aliases at a low level).  how do you graft them together clicklessly or glitchlessly?  it just seems to me that doing it the same consistent way makes it the least complex and computationally
burdensome.
--


r b-j                         ***@audioimagination.com



"Imagination is more important than knowledge."

Kevin Chi
2018-08-07 22:02:46 UTC
Permalink
I just want to thank you guys for the amount of experience and knowledge
you are sharing here! This list is a gem!

I started to replace my polyBLEP oscillators with waveTables to see how
they compare!


While experimenting with PolyBLEP I just ran into something I
don't get, and you will probably know the answer to this.

I read in a couple of places that if you use a leaky integrator on a Square
then you can get a Triangle. But as a leaky integrator
is a first order lowpass filter, you won't get a Triangle waveform, but
this:

https://www.dropbox.com/s/1xq321xqcb7ir3a/PolyBLEPTri.png?dl=0

Am I doing something wrong or misunderstanding the concept, and what is
the best way to make a triangle with PolyBLEPs?


thanks again for the great discussion on wavetables!

--
Kevin @ Fine Cut Bodies
Andrew Simper
2018-08-23 01:35:17 UTC
Permalink
To bandlimit a discontinuity in the n-th derivative of any function you add
a corrective grain formed from a band-limited step approximation integrated n
times. For saw and square, which have C0 discontinuities, you add in
band-limited corrective step functions directly. For a non-synced triangle,
where you only have C1 discontinuities, you add in band-limited corrective
ramp functions (integrated steps). A corrective function is just the
band-limited one with the trivial one subtracted, so you can generate the
trivial waveform and then add the correction to get the band-limited one.
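
A sketch of that in C for the triangle case: a two-sample polynomial BLAMP
residual (the integral of the common two-sample polyBLEP) applied at the
triangle's two corners. This is one simple variant under stated assumptions (a
+/-1 triangle over a [0,1) phase), not the only way to do it:

/* 2-point polyBLAMP residual: the integral of the 2-point polyBLEP
   (sketch). t is phase in [0,1) with the corner at t = 0 (and its
   wrap at t = 1); dt is the phase increment per sample. */
double poly_blamp(double t, double dt)
{
    if (t < dt) {                    /* first sample after the corner */
        t = t / dt - 1.0;
        return -t * t * t / 3.0;
    }
    if (t > 1.0 - dt) {              /* last sample before the corner */
        t = (t - 1.0) / dt + 1.0;
        return t * t * t / 3.0;
    }
    return 0.0;
}

/* naive +/-1 triangle with its two corners (t = 0 and t = 0.5)
   rounded (sketch). the 4*dt factor scales the unit residual to this
   triangle's slope change: the slope swings between +4 and -4 per
   unit of phase at each corner. */
double tri_polyblamp(double t, double dt)
{
    double naive = (t < 0.5) ? 4.0 * t - 1.0 : 3.0 - 4.0 * t;
    double t2 = t + 0.5;
    if (t2 >= 1.0) t2 -= 1.0;        /* phase of the other corner */
    return naive + 4.0 * dt * (poly_blamp(t, dt) - poly_blamp(t2, dt));
}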

Andy
Post by Kevin Chi
I just want to thank you guys for the amount of experience and knowledge
you are sharing here! This list is a gem!
I started to replace my polyBLEP oscillators with waveTables to see how
they compare!
While experimenting with PolyBLEP I just ran into something I
don't get, and you will probably know the answer to this.
I read in a couple of places that if you use a leaky integrator on a Square
then you can get a Triangle. But as a leaky integrator
is a first order lowpass filter, you won't get a Triangle waveform, but
https://www.dropbox.com/s/1xq321xqcb7ir3a/PolyBLEPTri.png?dl=0
Am I doing something wrong or misunderstanding the concept, and what is
the best way to make a triangle with PolyBLEPs?
thanks again for the great discussion on wavetables!
--
_______________________________________________
dupswapdrop: music-dsp mailing list
https://lists.columbia.edu/mailman/listinfo/music-dsp