Discussion:
[time-nuts] Question about frequency counter testing
Oleg Skydan
2018-04-25 19:01:48 UTC
Permalink
Dear Ladies and Gentlemen,

Let me tell a little story so you will be able to better understand what my
question is and what I am doing.

I need to check frequencies in the several-GHz range from time to time. I do
not need high absolute precision (that is a reference oscillator problem,
not a counter one), but I do need a fast, high resolution instrument (at
least 10 digits in one second). I have only a very old, slow unit, so I
constructed a frequency counter (yes, yet another frequency counter project
:-). It is a bit unusual - I decided not to use interpolators, to simplify
the hardware as much as possible, and to provide the necessary resolution by
very fast timestamping and heavy math processing. In the current
configuration I should get 11+ digits in one second for input frequencies
above 5 MHz.

But this is a theoretical number and it does not account for some factors.
Now I have an ugly prototype build with insanely simple hardware running the
counter core, and I need to check how well it performs.

I have already done some checks and even found and fixed some FW bugs :).
Now it works pretty well, and I have enjoyed watching how one OCXO drifts
against the other in the mHz range. I would like to check how many
significant digits I am getting in reality.

The test setup now comprises two 5 MHz OCXOs (they are very old units and
far from perfect oscillators - the 1 s and 10 s stability is claimed to be
1e-10, but they are the best I have now). I measure the frequency of the
first OCXO using the second one as the counter reference. The frequency
counter processes data in real time and sends continuous one-second
frequency stamps to the PC. Here are the experiment results - plots from
TimeLab: the frequency difference (the oscillators have been on for more
than 36 hours now, but still drift against each other) and ADEV plots. There
are three measurements and six traces - two for each measurement: one for
simple reciprocal frequency counting (with the letter R in the title) and
one with the math processing (LR in the title). As far as I understand I am
getting 10+ significant digits of frequency in one second, and it is
questionable whether I am seeing counter noise or oscillator noise.

I also calculated the usual standard deviation of the measurement results
(and tried to remove the drift before the calculations); I got an STD in the
3e-4..4e-4 Hz (or 6e-11..8e-11) range in many experiments.

Now the questions:
1. Are there any testing methods that will allow me to determine whether I
am seeing oscillator noise or the counter is not performing in accordance
with the theory (11+ digits)? I know this can be done with better OCXOs, but
currently I cannot get better ones.
2. Is my interpretation of the ADEV value at tau = 1 s (that I have 10+
significant digits) correct?

As far as I understand the situation, I need better OCXOs to check whether
the HW/SW really can do an 11+ significant digit frequency measurement in
one second.

Your comments are greatly appreciated!

P.S. If I feed the counter reference to its input I get 13 absolutely stable
and correct digits and can get more, but this test method is not very useful
for the counter architecture used.

Thanks!
Oleg
73 de UR3IQO
Bob kb8tq
2018-04-25 22:28:40 UTC
Permalink
Hi

Unfortunately there is no “quick and dirty” way to come up with an accurate “number of digits” for a
math intensive counter. There are a *lot* of examples of various counter architectures that have specific
weak points in what they do. One sort of signal works one way, another signal works very differently.

All that said, the data you show suggests you are in the 10 digits per second range.

Bob
Azelio Boriani
2018-04-26 07:06:00 UTC
Permalink
Very fast time-stamping like a stable 5GHz counter? The resolution of
a 200ps (one shot) interpolator can be replaced by a 5GHz
time-stamping counter.
Oleg Skydan
2018-04-26 16:32:38 UTC
Permalink
From: "Azelio Boriani" <***@gmail.com>
Sent: Thursday, April 26, 2018 10:06 AM
Post by Azelio Boriani
Very fast time-stamping like a stable 5GHz counter?
No, it is not a 5 GHz counter. It does the trick I first saw in the CNT91
counters. The hardware is capable of capturing up to 10 million
timestamps per second and calculating the LR (linear regression) "on the fly".

The plots I showed were made with approx. 5*10^6 timestamps
per second, so theoretically I should get approx. 4ps equivalent
resolution (or 11+ significant digits in one second).
Post by Azelio Boriani
The resolution of
a 200ps (one shot) interpolator can be replaced by a 5GHz
time-stamping counter.
I am not interested in measuring the timing of single events,
and I did not try to make a full-featured timer/counter/analyzer.
It is just a high resolution RF frequency counter with very
simple, all-digital hardware.

Oleg
Bob kb8tq
2018-04-26 12:45:05 UTC
Permalink
Hi

Even with a fast counter, there are going to be questions about clock jitter and just
how well that last digit performs in the logic. It’s never easy to squeeze the very last
bit of performance out …..

Bob
Azelio Boriani
2018-04-26 21:16:21 UTC
Permalink
If your hardware is capable of capturing up to 10 million
timestamps per second and calculating the LR "on the fly", it is not such
simple hardware, unless you consider a 5-megagate Spartan-3 simple hardware
(and maybe more is needed). Moreover, if your clock is, say, at
most 300 MHz in an FPGA, your timestamps will have a one-shot
resolution of a few nanoseconds. Where have you found a detailed
description of the CNT91 counting method? The only detailed
description I have found is the CNT90 (not 91) service manual, and it
uses interpolators (page 4-13 of the service manual).
Oleg Skydan
2018-04-27 18:21:59 UTC
Permalink
From: "Azelio Boriani" <***@gmail.com>
Sent: Friday, April 27, 2018 12:16 AM
Post by Azelio Boriani
If your hardware is capable of capturing up to 10 millions of
timestamps per second and calculating LR "on the fly", it is not a so
simple hardware, unless you consider simple hardware a 5megagates
Spartan3 (maybe more is needed). Moreover: if your clock is, say, at
most in an FPGA, 300MHz, your timestamps will have a one-shot
resolution of few nanoseconds.
There is no FPGA. If I used an FPGA I should probably be able to get more
resolution for one-shot measurements, because there are some simple methods
of interpolating the signal inside an FPGA (they do not require additional
hardware). They can easily increase resolution by 2..16 times, so even with
a 200 MHz internal FPGA clock it is possible to reach 1 ns one-shot
resolution or even better.

I will show details when the project is at the finishing stage.
Post by Azelio Boriani
Where have you found a detailed
description of the CNT91 counting method? The only detailed
description I have found is the CNT90 (not 91) service manual and it
uses interpolators (page 4-13 of the service manual).
Sorry, I meant the CNT-90, but I bet the CNT90/CNT91 use the same technique.
You can use an interpolator along with the math processing. This will result
in better resolution, and/or you can use less memory and less processing
power. I chose not to use one to simplify the hardware.

Magnus Danielson
2018-04-29 17:01:48 UTC
Permalink
The CNT91 is really a CNT90 with some detailed improvements to reduce
time errors to conform with 50 ps rather than 100 ps resolution.

In the CNT90 the comparators were in the same IC, which caused
ground-bounce coupling between channels; separating them was among
the things that went in. Also, improved grounding of the front plate, as
I recall it.

The core clock is 100 MHz, giving 10 ns steps for the coarse counter; the
interpolators then have 10 bits, so while not the full range is being
used, you get some 10-20 ps of actual resolution. Pendulum engineers
consider the RMS performance as they measure the beat frequency sweep
over phase states.

Cheers,
Magnus
Hal Murray
2018-04-26 19:28:05 UTC
Permalink
Post by Oleg Skydan
The plots I showed were made with approx. 5*10^6 timestamps per second, so
theoretically I should get approx. 4ps equivalent resolution (or 11+
significant digits in one second).
Is there a term for what I think you are doing?

If I understand (big if), you are doing the digital version of magic
down-conversion with an A/D. I can't even think of the name for that.

If I have a bunch of digital samples and count the transitions I can compute
a frequency. But I would get the same results if the input frequency was X
plus the sampling frequency. Or 2X. ... The digital stream is the beat
between the input and the sampling frequency.

That technique depends on having a low jitter clock. There should be some
good math in there, but I don't see it.

A related trick is getting the time from something that ticks slowly, like
the RTC/CMOS clocks on PCs. They only tick once per second, but you can get
the time with (much) higher resolution if you poll until it ticks.

Don't forget about metastability.
--
These are my opinions. I hate spam.



Oleg Skydan
2018-04-26 21:28:20 UTC
Permalink
From: "Hal Murray" <***@megapathdsl.net>
Sent: Thursday, April 26, 2018 10:28 PM
Post by Hal Murray
Is there a term for what I think you are doing?
I saw different terms like "omega counter" or multiple time-stamp
average counter, probably there are others too.
Post by Hal Murray
If I understand (big if), you are doing the digital version of magic
down-conversion with an A/D. I can't even think of the name for that.
No, it is much simpler. The hardware saves a time-stamp to memory at
each rise (event) of the input signal (let's assume a digital logic
input signal for simplicity). So after some time we have many pairs of
{event number, time-stamp}. We can plot those pairs with the event number on
the X-axis and time on the Y-axis; now, if we fit a line to that dataset,
the inverse slope of the line will correspond to the estimated frequency.

The line is fitted using linear regression.

This technique improves the frequency uncertainty as

2*sqrt(3)*t_resolution / (MeasurementTime * sqrt(NumberOfEvents - 2))

So if I have 2.5 ns HW time resolution and collect 5e6 events,
processing should result in 3.9 ps equivalent resolution.

Of course, this is for the ideal case. The first real-life problem is
signal drift, for example.
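
A minimal Python sketch of that estimate, for anyone who wants to play with
it (the input frequency, jitter level and capture length below are made-up
illustration values, not the actual hardware's):

import numpy as np

f_true = 5_000_123.0    # hypothetical input frequency, Hz (not grid-related)
t_res  = 2.5e-9         # hypothetical hardware time-stamp resolution, s
gate   = 0.1            # capture length, s (short, just to keep the demo fast)
rng    = np.random.default_rng(0)

# One time-stamp per rising edge, quantized to the t_res grid; a little
# Gaussian noise stands in for jitter so the quantization error averages out.
n = np.arange(int(f_true * gate))
t_stamp = np.round((n / f_true + rng.normal(0.0, 1e-9, n.size)) / t_res) * t_res

# Least-squares line through (event number, time-stamp): the slope is the
# estimated period, and its inverse is the estimated frequency.
period_est = np.polyfit(n, t_stamp, 1)[0]
f_est = 1.0 / period_est
print(f"estimated: {f_est:.4f} Hz   error: {f_est - f_true:+.4f} Hz")

The small noise term matters: with a perfectly clean input that is an exact
sub-multiple of the grid, the quantization error stops averaging out, which
is exactly the failure mode discussed further down the thread.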

Hope I was able to explain what I am doing.

BTW, I have fixed a little bug in the firmware and now the ADEV looks a bit
better. Probably I should look for better OCXOs. Interesting thing - the
counter processed 300 GB of time-stamp data during that 8+ hour run :).

All the best!
Oleg
Bob kb8tq
2018-04-27 00:22:22 UTC
Permalink
Hi

The degree to which your samples converge to a specific value while being averaged
is dependent on a bunch of things. The noise processes on the clock and the measured
signal are pretty hard to avoid. It is *very* easy to overestimate how fast things converge.

Bob
Bob kb8tq
2018-04-27 17:22:50 UTC
Permalink
Hi

So what’s going on here?

With any of a number of modern (and not so modern) FPGAs you can run a clock in the 400 MHz region.
Clocking with a single edge gives you a 2.5 ns resolution. On some parts, you are not limited to a single
edge. You can clock with both the rising and falling edge of the clock. That gets you to 1.25 ns. For the
brave, there is the ability to phase shift the clock and do the trick yet one more time. That can get you
to 0.625 ns. You may indeed need to drive more than one input to get that done.

As you get more and more fancy, the chip timing gets further into your data. A very simple analogy is
the non-uniform step size you see on an ADC. Effectively you have a number that has a +/- ?.?? sort
of tolerance on it. As before, that may not be what you expect in a frequency counter. It still does not mean
that the data is trash. You just have a source of error to contend with.

You could also feed the data down a “wave union” style delay chain. That would get you into the 100ps
range with further linearity issues to contend with. There are also calibration issues as well as temperature
and voltage dependencies. Even the timing in the multi phase clock approach will have some voltage
and temperature dependency.

Since it’s an FPGA, coming up with a lot of resources is not all that crazy expensive. You aren’t buying
gate chips and laying out a PCB. A few thousand logic blocks is tiny by modern standards. Your counter
or delay line idea might fit in < 100 logic blocks. There’s lots of room for pipelines and I/O this and that.
The practical limit is how much you want to put into the “pipe” that gets the data out of the FPGA.

In the end, you are still stuck with the fact that many of the various TDC chips have higher resolution / lower cost.
You also have a pretty big gap between raw chip price and what a fully developed instrument will run.
That’s true regardless of what you base it on and how you do the design.

Bob
Hal Murray
2018-04-27 09:30:01 UTC
Permalink
Post by Oleg Skydan
No, it is much simpler. The hardware saves time-stamps to the memory at each
(event) rise of the input signal (let's consider we have digital logic input
signal for simplicity). So after some time we have many pairs of {event
number, time-stamp}. We can plot those pairs with event number on X-axis and
time on Y-axis, now if we fit the line on that dataset the inverse slope of
the line will correspond to the estimated frequency.
I like it. Thanks.

If you flip the X and Y axes, then you don't have to invert the slope.

That might be an interesting way to analyze TICC data. It would work
better/faster if you used a custom divider to trigger the TICC as fast as it
can print rather than using the typical PPS.

------

Another way to look at things is that you have a fast 1 bit A/D.

If you need results in a second, FFTing that might fit into memory. (Or you
could rent a big-memory cloud server. A quick sample found 128GB for
$1/hour.) That's with 1 second of data. I don't know how long it would take
to process.

What's the clock frequency? Handwave. At 1 GHz, 1 second of samples fits
into a 4 byte integer even if all the energy ends up in one bin. 4 bytes, *2
for complex, *2 for input and output is 16 GB.
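
Spelling that arithmetic out (the 1 GHz rate is the handwaved assumption
above, not a measured figure):

samples = 1_000_000_000                  # 1 GHz sampling clock * 1 s of data
bytes_per_point = 4 * 2                  # 4-byte values, *2 for complex
total = samples * bytes_per_point * 2    # *2 for input and output buffers
print(total / 1e9, "GB")                 # -> 16.0 GB
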
--
These are my opinions. I hate spam.



Bob kb8tq
2018-04-27 13:38:42 UTC
Permalink
Hi

Consider a case where the clocks and signals are all clean and stable:

Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
MHz and the other is 400 MHz ). The amount of information in your
data stream collapses. Over a 1 second period, you get a bit better than
9 digits per second. Put another way, the data set is the same regardless
of where you are in the 2.5 ppb “space”.

Bob
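
A toy numerical illustration of that collapse (assumed numbers: a 2.5 ns
time-stamp grid from a 400 MHz clock, an ideally clean signal, and the
least-squares estimate sketched earlier in the thread):

import numpy as np

t_res, gate = 2.5e-9, 0.1   # 2.5 ns grid, 0.1 s of time-stamps

def lr_freq(f_in):
    # Least-squares frequency estimate from ideally clean, quantized time-stamps.
    n = np.arange(int(f_in * gate))
    t = np.round((n / f_in) / t_res) * t_res
    return 1.0 / np.polyfit(n, t, 1)[0]

# 10 MHz is exactly 1/40 of the 400 MHz grid. Inputs offset by 1e-10 or 1e-9
# never move any edge across a grid boundary during this capture, so all three
# runs produce identical time-stamps and therefore identical estimates.
for f_in in (10e6, 10e6 * (1 + 1e-10), 10e6 * (1 + 1e-9)):
    print(f"true {f_in:.4f} Hz -> estimated {lr_freq(f_in):.4f} Hz")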
Azelio Boriani
2018-04-27 14:32:28 UTC
Permalink
Yes, this is the problem when trying to enhance the resolution from a
low one-shot resolution. Averaging 2.5 ns resolution samples can give
new information only if the clocks move with respect to each other and
"cross the boundary" of the 2.5 ns sampling interval. You can measure your
clocks down to whatever averaged ps resolution you want only if they move,
one with respect to the other, by more than your one-shot base resolution.
Azelio Boriani
2018-04-27 14:39:59 UTC
Permalink
You can measure your clocks down to the averaged ps resolution you
want only if they move, one with respect to the other, by more than your
one-shot base resolution - in a reasonable time, that is. It comes down to
how many transitions across your 2.5 ns sampling interval you get in 1
second, which determines whether you have an n-digit/second counter.

Tom Van Baak
2018-04-27 15:58:37 UTC
Permalink
Azelio, the problem with that approach is that the more stable and accurate your DUT & REF sources, the less likely there will be transitions, even during millions of samples over one second.

A solution is to dither the clock, which is something many old hp frequency counters did. In other words, you deliberately introduce well-designed noise so that you cross clock edge transitions as *much* as possible. It seems counter-intuitive that adding noise can vastly improve your measurement, but in the case of oversampling counters like this, it does.

/tvb
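
Carrying on with the same toy model (made-up numbers, not any particular
counter): adding roughly one grid step of Gaussian dither before quantization
lets the regression resolve an offset that the clean, integer-related case
cannot see at all.

import numpy as np

t_res, gate = 2.5e-9, 0.1
rng = np.random.default_rng(1)

def lr_freq(f_in, dither_rms):
    # LR frequency estimate; Gaussian dither is added before quantization.
    n = np.arange(int(f_in * gate))
    t = n / f_in
    if dither_rms > 0:
        t = t + rng.normal(0.0, dither_rms, n.size)
    t = np.round(t / t_res) * t_res
    return 1.0 / np.polyfit(n, t, 1)[0]

f_true = 10e6 * (1 + 1e-9)       # sits inside the "dead band" around 10 MHz
for dither in (0.0, 2.5e-9):     # no dither vs. about one grid step RMS of dither
    err = (lr_freq(f_true, dither) - f_true) / f_true
    print(f"dither {dither * 1e9:.1f} ns RMS -> fractional error {err:+.1e}")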

Bob kb8tq
2018-04-27 16:08:11 UTC
Permalink
Hi

Since you are using averaging to get more bits (much like a CIC), the idea that you
need noise to make it happen is actually pretty common. There are app notes
coming at it from various directions on ADCs and SDR going *way* back (like to
when I was in school …. yikes ….).

What is a bit odd is wrapping your head around it being needed in this case.
It *is* sort of a 1 bit A/D. There is a very tenuous connection.

Bottom line is still that you are doing signal processing. Since that is what is going
on, you very much need to get into the grubby details to work out what the limitations
(and benefits) will be. It doesn’t *look* like a radio, but it has a lot of SDR-like issues.

Bob
Oleg Skydan
2018-04-27 18:47:03 UTC
Permalink
Hi

From: "Bob kb8tq" <***@n1k.org>
Sent: Friday, April 27, 2018 4:38 PM
Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
MHz and the other is 400 MHz ). The amount of information in your
data stream collapses. Over a 1 second period, you get a bit better than
9 digits per second. Put another way, the data set is the same regardless
of where you are in the 2.5 ppb “space”.
Thanks a lot for pointing me to this problem! It looks like that was the
reason I lost a digit. The frequency in my experiment appears to be close to
an exact subharmonic of the PLL-multiplied reference. It was not less than
2.5 ppb off frequency (the difference was approx. 0.3 ppm), but it was still
close enough to degrade the resolution.

Fortunately it can be fixed in firmware using various methods, and I have
made the necessary changes. Here are the Allan deviation and frequency drift
plots. The first one is with the old firmware, the second one with the updated
firmware that accounts for the loss of information you mention.

The frequency difference plot also shows that the measurement "noise" is now much
lower. It looks like I have got 11 significant digits now, and my old OCXOs
are almost 10 times better than the manufacturer claims.

Thanks!
Oleg UR3IQO
Bob kb8tq
2018-04-27 19:46:50 UTC
Permalink
Hi

As you have noticed already, it is amazingly easy to get data plots with more than the
real number and less than the real number of digits. Only careful analysis of the underlying
hardware and firmware will lead to an accurate estimate of resolution.

This is by no means unique to what you are doing. Commercial counters
very often fall into this trap. If you hook up an SR-620 to its internal standard,
you will see a *lot* of very perfect-looking digits …. they aren’t real. The HP 5313x
counters have issues with integer-related inputs / reference. This isn’t easy.

Bob
Post by Oleg Skydan
Hi
Sent: Friday, April 27, 2018 4:38 PM
Post by Bob kb8tq
Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
MHz and the other is 400 MHz ). The amount of information in your
data stream collapses. Over a 1 second period, you get a bit better than
9 digits per second. Put another way, the data set is the same regardless
of where you are in the 2.5 ppb “space”.
Thanks a lot for pointing me to this problem! It looks like that was the reason I lost a digit. The frequency in my experiment appear to be close to the exact subharmonic of the PLL multiplied reference. It was not less than 2.5ppb off frequency (the difference was approx 0.3ppm), but it still was close enough to degrade the resolution.
Fortunately it can be fixed in firmware using various methods and I have made the necessary changes. Here are Allan deviation and frequency drift plots. The first one with the old firmware, the second one with the updated firmware that count for the lost of information you mention.
The frequency difference plot also shows the measurement "noise" now is much lower. It looks like I have got 11 significant digits now and my old OCXOs are better than manufacturer claims by almost 10 times.
Thanks!
Oleg UR3IQO
<1137.png><1138.png>
Bob kb8tq
2018-05-10 16:23:21 UTC
Permalink
Hi

More or less:

ADEV takes the *difference* between phase samples and then does a standard
deviation on them. RMS of the phase samples makes a lot of sense and it was
used back in the late 50’s / early 60’s. The gotcha turns out to be that it is an
ill-behaved measure. The more data you take, the bigger the number you get
( = it does not converge). That problem is what led NBS to dig into a better
measure. The result was ADEV.

The point about averaging vs decimation relates to what you do to the data *before*
you ever compute the ADEV. If you have 0.1 second samples, you have to do something
to get to a tau of 1 second or 10 seconds or … The process you use to get the data
to the proper interval turns out to matter quite a bit.

Bob
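To make the averaging-versus-decimation point concrete, here is a hedged toy demonstration in C (my own sketch, not anyone's analysis code; the white-PM phase noise level is assumed). Phase samples arrive every 0.1 s; the proper ADEV at 1 s uses decimated phase samples 1 s apart, while computing it from pre-averaged samples biases the result low:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define TAU0 0.1      /* spacing of the raw phase samples, s */
#define M    10       /* tau = M * TAU0 = 1 s */
#define NPTS 100000

/* Non-overlapping ADEV at tau = m*tau0 from phase samples x[] spaced tau0 apart. */
static double adev(const double *x, long n, long m, double tau0)
{
    double tau = m * tau0, sum = 0.0;
    long i, cnt = 0;
    for (i = 0; i + 2 * m < n; i += m) {
        double d = x[i + 2 * m] - 2.0 * x[i + m] + x[i];  /* second difference of phase */
        sum += d * d;
        cnt++;
    }
    return sqrt(sum / (2.0 * cnt * tau * tau));
}

int main(void)
{
    static double x[NPTS], xavg[NPTS / M];
    long i, j;

    for (i = 0; i < NPTS; i++)                 /* fake white phase noise, ~1 ns */
        x[i] = 1e-9 * (2.0 * rand() / RAND_MAX - 1.0);

    for (i = 0; i < NPTS / M; i++) {           /* pre-average ten 0.1 s samples into one 1 s sample */
        double s = 0.0;
        for (j = 0; j < M; j++)
            s += x[i * M + j];
        xavg[i] = s / M;
    }

    printf("ADEV(1 s), decimated phase : %.3e\n", adev(x, NPTS, M, TAU0));
    printf("ADEV(1 s), pre-averaged    : %.3e\n", adev(xavg, NPTS / M, 1, M * TAU0));
    return 0;
}

The second number comes out well below the first, which is exactly the "too good" curve described below.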
I'm a bit fuzzy, then, on the definition of ADEV. I was under the
impression that one measured a series of
"phase samples" at the desired spacing, then took the RMS value of that
series, not just a single sample,
as the ADEV value.
Can anybody say which it is? The RMS approach seems to make better sense
as it provides some measure
of defense against taking a sample that happens to be an outlier, yet
avoids the flaw of tending to average
the reported ADEV towards zero.
Dana (K8YUM)
Hi
If you collect data over the entire second and average that down for a
single point, then no, your ADEV will not be correct.
There are a number of papers on this. What ADEV wants to see is a single
phase “sample” at one second spacing. This is
also at the root of how you get 10 second ADEV. You don’t average the ten
1 second data points. You throw nine data points
away and use one of them ( = you decimate the data ).
What happens if you ignore this? Your curve looks “too good”. The resultant
curve is *below* the real curve when plotted.
A quick way to demonstrate this is to do ADEV with averaged vs decimated
data ….
Bob
Hi
I have got a pair of not so bad OCXOs (Morion GK85). I did some
measurements, and the results may be interesting to others (sorry if not), so I
decided to post them.
I ran a set of 5-minute-long counter runs (the two OCXOs were measured
against each other); each point is a 1 s gate frequency measurement, with a
different number of timestamps used in the LR calculation (from 10 to 5e6).
The counter provides continuous counting. As you can see, I reach the HW
limitations at 5..6e-12 ADEV (1 s tau) with only 1e5 timestamps. The results
look reasonable; the theory predicts 27 ps equivalent resolution with 1e5
timestamps, and the sqrt(N) law is clearly seen on the plots. I do not
know what the limiting factor is, whether it is the OCXOs or some counter HW.
I know there are HW problems; some of them were identified during this
experiment. They were expected, since the HW is still just an ugly
construction made from boards left in the "radio junk box" from
other projects/experiments. I am going to move to a well-designed PCB
with some improvements in the HW (and a more or less "normal" analog front end
with a good comparator, ADCMP604 or something similar, for the "low
frequency" input). But I want to finish my initial tests first; it should help
with the HW design.
Now I have some questions. As you know, I am experimenting with a
counter that uses LR calculations to improve its resolution. The LR data
for each measurement is collected during the gate time only, and the
measurements are continuous. Will the ADEV be calculated correctly from
such measurements? I understand that any averaging over a time window
larger than the single measurement time will spoil the ADEV plot. I also
understand that using LR can result in an incorrect frequency estimate for a
signal with large drift (this should not be a problem for the discussed
measurements, at least for the numbers we are talking about).
Do the ADEV plots I got look reasonable for the "mid range"
OCXOs used (see the second plot for the long run test)?
BTW, I see I can interface a GPS module to my counter without additional
HW (except the module itself; do not worry, it will not be another DIY
GPSDO, probably :-) ). I will try to do it. The initial idea is not to try to
lock the reference OCXO to GPS; instead I will just measure GPS against the REF
and make corrections using pure math in SW. I see some advantages with
such a design - no high-resolution DAC, no reference for the DAC, no loop, no
additional hardware at all - only the GPS module and software :) (it is in
the spirit of this project)... Of course I will not have a reference signal
that can be used outside the counter; I think I can live with that. It is worth
doing some experiments.
Best!
Oleg UR3IQO
<Снимок экрана (1148).png><Снимок экрана (1150).png><Снимок экрана (1149).png>
Magnus Danielson
2018-05-12 21:44:13 UTC
Permalink
Hi
Hi!
There is still the problem that the first point on the graph is different depending
on the technique.
The leftmost tau values are skipped and they "stay" inside the counter. If I set up the counter to generate, let's say, 1 s stamps (ADEV starts at 1 s), it will internally generate 1/8 s averaged measurements but export combined data for the 1 s stamps. The result will be, strictly speaking, different, but the difference should be insignificant.
Except there are a *lot* of papers where they demonstrate that the difference may be *very* significant. I would
suggest that the “is significant” group is actually larger than the “is not” group.
There is no reason to treat it light-handedly as being about the same, as they
become different measures, where there is a measurement bias. Depending
on what you do, there might be a bias function to compensate the bias
with... or not. Even when there is, most people forget to apply it.

Stay clear of it and do it properly.

Averaging prior to ADEV does nothing really useful unless it is
well-founded, and then we call it MDEV and PDEV, and then you have to be
careful about the details to do it properly. Otherwise you just waste your
time getting "improved numbers" which do not actually help you to give
proper measures.
The other side of all this is that ADEV is really not a very good way to test a counter.
Counter testing was not the main reason to dig into statistics details these last days. Initially I used ADEV when I tried to test the idea of making a counter with very simple HW and good resolution (BTW, it appeared later that it was not really ADEV :). Then I saw it worked, so I decided to make a "normal" useful counter (I liked the HW/SW concept). The HW has enough power to compute various statistics onboard in real time, and while it is not a requisite feature of the project now, I think it will be good if the counter is able to do it (or at least export data suitable for doing it in post-processing). The rest of the story you know :)
Again, ADEV is tricky and sensitive to various odd things. This whole debate about it being sensitive goes
back to the original papers in the late 1960’s and 1970’s. At every paper I attended the issue of averaging
and bandwidth came up in the questions after the paper. The conversation has been going on for a *long*
time.
If you go back to Dr. David Allan's Feb 1966 paper, you clearly see how
white and flicker phase modulation noise depend on the bandwidth, which is
then assumed to be a brick-wall filter. Your ability to reflect the
amplitude of those noises properly thus depends on the bandwidth.

Any filtering reduces the bandwidth, and hence artificially reduces the
ADEV value for the same amount of actual noise, so it no longer
represents the underlying noise properly. However, if you use this for
improving your frequency measurements, it's fine and the processed ADEV
will represent the counter's performance with that filter. Thus, the aim
will govern whether you should or should not do the pre-filtering.
If you are trying specifically just to measure ADEV, then there are a lot of ways to do that by it’s self.
Yes, but if it can be done with only some additional code - why not have such an ability? Even if it has some known limitations it is still a useful addition. Of course it should be done as well as it can be within the HW limitations. Also, it was/is a good educational exercise.
It’s only useful if it is accurate. Since you can “do code” that gives you results that are better than reality,
simply coming up with a number is not the full answer. To be useful as ADEV, it needs to be correct.
Exactly.
Now it is a period of tests/experiments to see the features/limitations of the used technology (of course, only those experiments that can be done with the current "ugly style" HW). I have already got a lot of useful information; it should help me in the following HW/FW development. The next steps are the analog front end and GPS frequency correction (I should get the GPS module next week). I have already tested the 6 GHz prescaler and am now waiting for some parts to finish it. I hope this project will have a "happy end" :).
I’m sure it will come out to be a very cool counter. My *only* concern here is creating inaccurate results
by stretching too far with what you are trying to do. Keep it to the stuff that is accurate.
Bob and I are picky, and for a reason. When we want our ADEV plots, we
want them done properly, or else we could improve the specs of the
oscillators by changing how much fancy post-processing we do on the
counter data. Yes, we see this in professional conferences too.

Mumble... BAD SCIENCE!

Metrologically correct measures take care of the details.
Measurements need to be repeatable and of correct value.

Cheers,
Magnus
Magnus Danielson
2018-05-11 18:23:35 UTC
Permalink
Oleg,
Hi
--------------------------------------------------
The most accurate answer is always “that depends”. The simple answer
is no.
I spent yesterday evening and quite a bit of the night :)
reading many interesting papers and several related discussions in the
time-nuts archive (the Magnus Danielson posts in "Modified Allan
Deviation and counter averaging" and "Omega counters and Parabolic
Variance (PVAR)" topics were very informative and helpful, thanks!).
You are welcome. Good that people have use for them.
It looks like a trick to combine averaging with the possibility of
correct ADEV calculation in the post-processing exists. There is a nice
presentation made by prof. Rubiola [1], with a suitable solution on
page 54 (at least that is how I understood it; maybe I am wrong). I can switch to
usual averaging (Lambda/Delta counter) instead of the LR calculation (Omega
counter); the losses should be very small in my case. With such averaging
the MDEV can be correctly computed.
In fact, you can do an Omega-style counter that you can use for PDEV; you just
need to use the right approach to be able to decimate the data. Oh,
there's a draft paper on that:

https://arxiv.org/abs/1604.01004

Need to update that one.
If ADEV is needed, the averaging
interval can be reduced and several measurements (more than eight) can
be combined into one point (creating a new weighting function which
resembles the usual Pi one, as shown in [1] p.54); it should then be
possible to calculate the usual ADEV using such data. As far as I
understand, the filter which is formed by the resulting weighting
function will have a wider bandwidth, so the impact on the ADEV will be
smaller and it can be computed correctly. Am I missing something?
Well, you can reduce averaging interval to 1 and then you compute the
ADEV, but it does not behave as the MDEV any longer.

What you can do is that you can calculate MDEV or PDEV, and then apply
the suitable bias function to convert the level to that of ADEV.
I have made the necessary changes in the code; now the firmware computes the
Delta averaging, and it also computes combined Delta-averaged measurements
(resulting in a trapezoidal weighting function); both numbers are computed
with continuous stamping and optimal overlapping. Everything is done in
real time. I did some tests. The results are very similar to the ones
made with LR counting.
Yes, they give relatively close values of deviation, where PDEV goes
somewhat lower, indicating that there is a slight advantage of the LR/LS
frequency estimation measure over that of the Delta counter, as given by
its MDEV.

Cheers,
Magnus
Magnus Danielson
2018-05-11 20:15:07 UTC
Permalink
Hi,
Hi
If you do the weighted average as indicated in the paper *and* compare it to a “single sample” computation,
the results are different for that time interval. To me that’s a problem. To the authors, the fact that the rest of
the curve is the same is proof that it works. I certainly agree that once you get to longer tau, the process
has no detrimental impact. There is still the problem that the first point on the graph is different depending
on the technique.
Check what I did in my paper. I made sure to check that my estimator of
phase and frequency is bias-free; that is, when exposed to stable phase
or stable frequency, the phase and frequency estimates come out unbiased,
and go to 0 as you switch them, as a good estimator should do.
The other side of all this is that ADEV is really not a very good way to test a counter. It has its quirks and its
issues. They are impacted by what is in a counter, but that’s a side effect. If one is after a general test of
counter hardware, one probably should look at other approaches.
Well, you can tell a few things from the ADEV, to give you a hint about
what you can expect from that counter when you do ADEV... and measure of
frequency. The 1/tau limit is that of the counter. It's... a complex
issue of single-shot resolution and noise, but a hint.
If you are trying specifically just to measure ADEV, then there are a lot of ways to do that by itself. It’s not
clear that re-inventing the hardware is required to do this. Going with an “average down” approach ultimately
*will* have problems for certain signals and noise profiles.
The filtering needs to be understood and handled correctly, for sure,
and it's not doing anything good for lower true ADEV measures. Filtering
helps for improving the frequency reading, as the measured deviation
shifts from ADEV to MDEV or PDEV, but let's not confuse that with
improving the ADEV; it's a completely different thing. Improving the
ADEV takes single-shot resolution, stable hardware and a stable reference
source.

Cheers,
Magnus
Magnus Danielson
2018-05-12 18:52:16 UTC
Permalink
Hi Oleg,
Hi!
There is still the problem that the first point on the graph is
different depending
on the technique.
The leftmost tau values are skipped and they "stay" inside the counter.
If I setup counter to generate lets say 1s stamps (ADEV starts at 1s) it
will generate internally 1/8sec averaged measurements, but export
combined data for 1s stamps. The result will be strictly speaking
different, but the difference should be insignificant.
What is your motivation for doing this?

I'm not saying you are necessarily incorrect, but it would be
interesting to hear your motivation.

Cheers,
Magnus
Magnus Danielson
2018-05-13 13:36:42 UTC
Permalink
Hi Oleg,
Hi Magnus!
Post by Magnus Danielson
The leftmost tau values are skipped and they "stay" inside the counter.
If I setup counter to generate lets say 1s stamps (ADEV starts at 1s) it
will generate internally 1/8sec averaged measurements, but export
combined data for 1s stamps. The result will be strictly speaking
different, but the difference should be insignificant.
What is your motivation for doing this?
My counter can operate in the usual Pi mode - I get 2.5 ns resolution. I am
primarily interested in high frequency signals (not one-shot events), and
the HW is able to collect and process some millions of timestamps
continuously. So in Delta or Omega mode I can, in theory, improve the resolution
down to several ps (for a 1 s measurement interval). In reality the
limit will be somewhat higher.
Fair enough.
So I can compute the classical ADEV (using Pi mode) with a lot of
counter noise at low tau (it will probably not be very useful due to the
counter noise dominance in the leftmost part of the ADEV plot), or MDEV
(using Delta mode) with much lower counter noise.
Yes, it helps you to suppress noise. As you extend the measures, you
need to do it properly to maintain MDEV property.
I would like to try to use the excess data I have to increase the counter
resolution in such a manner that ADEV calculation with such preprocessing is
still possible with acceptable accuracy. After Bob's explanations and
some additional reading I was almost sure it is not possible (and that is
so in the general case), but then I saw the presentation
http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf (E. Rubiola,
High resolution time and frequency counters, updated version) and saw
the inferences on p.54. They look reasonable and it is just what I wanted
to do.
OK, when you do this you really want to filter out the first lower tau,
but as you get out of the filtered part, or rather, when the dominant
part of the ADEV processing is within the filter bandwidth, the biasing
becomes less.

I would be inclined to just continue the MDEV compliant processing
instead. If you want the matching ADEV, rescale it using the
bias-function, which can be derived out of p.51 of that presentation.
You just need to figure out the dominant noise-type of each range of
tau, something which is much simpler in MDEV since White PM and Flicker
PM separate more clearly than in the weak separation of ADEV.

Also, on this page you can see how the system bandwidth f_H affects white
PM and flicker PM for Allan, but not modified Allan.
Post by Magnus Danielson
The mistake is easy to make. Back in the day, it was a given that you
should always state the system bandwidth alongside an ADEV plot, a
practice that later got lost. Many people do not know what bandwidth
they have, or the effect it has on the plot. I've even heard
a distinguished and knowledgeable person in the field admit to doing it
incorrectly.
That makes sense.
We can view the problem in the frequency domain. We have DUT,
reference and instrument (counter) noise. In most cases we are
interested in suppressing the instrument and reference noise and leaving the
DUT noise. The reference and DUT have more or less the same nature of
noise, so it should not be possible to filter out the reference noise
without affecting the DUT noise (with the simple HW). The counter noise (in
my case) will look like white noise (at least the noise associated with
the absence of a HW interpolator). When we process timestamps with Omega
or Delta data processing we apply a filter, so the correctness of the
resulting data will depend on the DUT noise characteristics and the filter
shape. The ADEV calculation at tau > tau0 will also apply some sort of
filter during decimation; it should also be accounted for (because we
actually decimate the high rate timestamp stream, making the point data
for the following postprocessing). Am I right?
As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter add up, and we cannot distinguish them in
that measurement. There are measurement setups, such as
cross-correlation, which make multiple measurements in parallel and
can start to combat the noise separation issue.

For short taus, the systematic noise of quantization will create a 1/tau
limit in ADEV. This is in fact more complex than this simple model, but
let's just assume this for the moment, it is sufficient for the time
being and is what most assume anyway.

ADEV however does not really do decimation. It combines measurements
to form a longer observation time for the frequency estimation, and subtracts
these before squaring, to form the 2-point deviation, which we call
Allan's deviation or Allan deviation.

ADEV is designed to match how a simple counter's deviation would behave.
Here is a good illustration of how averaging affects ADEV:
http://www.leapsecond.com/pages/adev-avg/ . If we drop the leftmost part
of the ADEV affected by averaging, the resulting averaging effects on
the ADEV are minimized. They can also be minimized by an optimal
averaging strategy. The question is the optimal averaging strategy and well-defined
restrictions on when such preprocessing can be applied.
Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
This is the hard lesson to learn. You can't really cheat if you aim to
get proper ADEV.

You can use averaging, and it will cause biased values, so you might use
the part with less bias, but there are safer ways of doing that, by going
full MDEV or PDEV instead.

With biases, you have something similar to, but not being _the_ ADEV.

ITU-T has made standardization on TDEV measurements where they put
requirements on these things, and there a similar strategy was used,
putting a requirement on the number of samples per second and the bandwidth,
such that when the tau becomes high enough the bias would be tolerably
low for the range of taus that is being measured.
If it works I would like to add such a mode for compatibility with the
widely used post-processing SW (TimeLab is a good example). Of course I
can do calculations inside the counter without such limitations, but
that will be another data processing option (or options), which might not
always be suitable.
Post by Magnus Danielson
I'm not saying you are necessarilly incorrect, but it would be
interesting to hear your motivation.
The end goal is to have a counter mode in which the counter produces data
suitable for post-processing into ADEV and other similar statistics with
better resolution (or lower counter noise) than the one-shot one (Pi
counter). I understand that, if it is possible at all, the counter
resolution will be degraded compared to the usual Omega or Delta mode, and
there will also be some limitations on the DUT noise for which such processing
can be applied.
Like in the ITU-T case, sure you can use filtering, but one needs to
drop the lower tau part to approximate the ADEV.
Post by Magnus Danielson
Cross talk exists for sure, but there is a similar effect too which is
not due to cross-talk but due to how the counter is able to interpolate
certain frequencies.
I have no HW interpolator. A similar problem in the firmware was
discussed earlier and it is now fixed.
From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
Post by Magnus Danielson
Post by Magnus Danielson
If fact, you can do a Omega-style counter you can use for PDEV, you
just
need to use the right approach to be able to decimate the data. Oh,
https://arxiv.org/abs/1604.01004
Thanks for the document. It will need some time to study, and maybe I will
add features to the counter to calculate correct PDEV.
It suggests a very practical method for FPGA based counters, so that you
can make use of the high rate of samples that you have and reduce it in
HW before handing off to SW. As you want to decimate data, you do not
want to lose the Least Square property, and this is a practical method
of achieving it.
I have no FPGA either :) All processing is in the FW; I will see how it
fits the used HW architecture.
Doing it all in FPGA has many benefits, but the HW will be more
complicated and pricier with minimal benefits for my main goals.
Exactly what you mean by FW I don't quite get; for me that is FPGA code.
Post by Magnus Danielson
You do not want to mix pre-filtering and ADEV that way. We can do things
better.
Are you talking about PDEV?
MDEV and PDEV are better approaches. They continue the filtering action,
and allow for decimation of data with known filtering properties.
Post by Magnus Danielson
Here is another question - how does one correctly calculate the averaging length
in a Delta counter? I have 5e6 timestamps in one second, so the Pi and Omega
counters process 5e6 samples in total and one measurement also has 5e6
samples, but the Delta one processes 10e6 in total, with each of the
averaged measurements having 5e6 samples. The Delta counter actually uses two
times more data. What should be equal when comparing different counter
types - the number of samples in one measurement (gating time) or the
total number of samples processed?
How do you get such different event rates?
If you have 5 MHz, the rising edge gives you 5E6 events, and which type
of processing you do, Pi (none), Delta or Omega, is just different types
of post-processing on the raw phase data-set.
So, if I want to compare "apples to apples" (comparing Delta and
Omega/Pi processing), a single measurement of the Delta counter should
use half of the events (2.5e6) and the same number (2.5e6) of
measurements should be averaged, is that right? The counter in Delta
mode currently calculates results with 50% overlapping; this gives 10e6
stamps for the 1 s output data rate (the counter averages 2 seconds of
data).
Do not report overlapping like that. Report that separately from the event
rate you have. So, OK, if you have a basic rate of 2.5e6 events per
second, then overlapping processing for the Delta doubles the Delta
processing report rate.

Cheers,
Magnus
Oleg Skydan
2018-05-13 21:13:17 UTC
Permalink
Hi Magnus,
Post by Magnus Danielson
I would be inclined to just continue the MDEV compliant processing
instead. If you want the matching ADEV, rescale it using the
bias-function, which can be derived out of p.51 of that presentation.
You just need to figure out the dominant noise-type of each range of
tau, something which is much simpler in MDEV since White PM and Flicker
PM separates more clearly than the weak separation of ADEV.
As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter adds up and we cannot distinguish them in
that measurement.
Probably I did not express what I meant clearly. I understand that we
cannot separate them, but if the DUT noise has most of its power inside the
filter BW while the instrument noise is a wideband one, we can filter out part of
the instrument noise with minimal influence on the DUT noise.
Post by Magnus Danielson
There is measurement setups, such as
cross-correlation, which makes multiple measurements in parallel which
can start combat the noise separation issue.
Yes, I am aware of that technique. I even did some experiments with cross-correlation
phase noise measurements.
Post by Magnus Danielson
Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
This is the hard lesson to learn. You can't really cheat if you aim to
get proper ADEV.
You can use averaging, and it will cause biased values, so you might use
the part with less bias, but there is safer ways of doing that, by going
full MDEV or PDEV instead.
With biases, you have something similar to, but not being _the_ ADEV.
OK. It looks like the last sentence very precisely describes what I was
going to do, so we understood each other right. Summarizing the discussion,
as far as I understand, the best strategy regarding *DEV calculations is:
1. Make MDEV the primary variant. It is suitable for calculation inside
the counter as well as for exporting data for later post-processing.
2. Study how the PDEV calculation fits the used HW. If it is possible to do
in real time, a PDEV option can be added.
3. ADEV can be safely calculated only from Pi mode counter data.
It will probably not be very useful because of the low single-shot resolution,
but Pi mode and the corresponding data export can be easily added.

I think it will be more than enough for my needs, at least now.
Post by Magnus Danielson
From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
Yes. It is approx. 400MHz.
Post by Magnus Danielson
I have no FPGA also :) All processing is in the FW, I will see how it
fits the used HW architecture.
Doing it all in FPGA has many benefits, but the HW will be more
complicated and pricier with minimal benefits for my main goals.
Exactly what you mean by FW now I don't get, for me that is FPGA code.
I meant MCU code; to make things clearer I can use the term SW for it.

Thank you for the answers and explanations, they are highly appreciated!

All the best!
Oleg

Bob kb8tq
2018-05-13 22:47:10 UTC
Permalink
Hi
Post by Oleg Skydan
Hi Magnus,
Post by Magnus Danielson
I would be inclined to just continue the MDEV compliant processing
instead. If you want the matching ADEV, rescale it using the
bias-function, which can be derived out of p.51 of that presentation.
You just need to figure out the dominant noise-type of each range of
tau, something which is much simpler in MDEV since White PM and Flicker
PM separates more clearly than the weak separation of ADEV.
As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter adds up and we cannot distinguish them in
that measurement.
Probably I did not express what I meant clearly. I understand that we can not separate them, but if the DUT noise has most of the power inside the filter BW while instrument noise is wideband one, we can filter out part of instrument noise with minimal influence to the DUT one.
Post by Magnus Danielson
There is measurement setups, such as
cross-correlation, which makes multiple measurements in parallel which
can start combat the noise separation issue.
Yes, I am aware of that technique. I event did some experiments with cross correlation phase noise measurements.
Post by Magnus Danielson
Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
This is the hard lesson to learn. You can't really cheat if you aim to
get proper ADEV.
You can use averaging, and it will cause biased values, so you might use
the part with less bias, but there is safer ways of doing that, by going
full MDEV or PDEV instead.
With biases, you have something similar to, but not being _the_ ADEV.
1. Make MDEV the primary variant. It is suitable for calculation inside counter as well as for exporting data for the following post processing.
2. Study how PDEV calculation fits on the used HW. If it is possible to do in real time PDEV option can be added.
3. ADEV can be safely calculated only from the Pi mode counter data. Probably it will not be very useful because of low single shoot resolution, but Pi mode and corresponding data export can be easily added.
I think it will be more than enough for my needs, at least now.
Post by Magnus Danielson
From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
Yes. It is approx. 400MHz.
I think I would spend more time working out what happens at “about 400 MHz” X N or
“about 400 MHz / M” …….

Bob
Post by Oleg Skydan
Post by Magnus Danielson
I have no FPGA also :) All processing is in the FW, I will see how it
fits the used HW architecture.
Doing it all in FPGA has many benefits, but the HW will be more
complicated and pricier with minimal benefits for my main goals.
Exactly what you mean by FW now I don't get, for me that is FPGA code.
I meant MCU code, to make things clearer I can use the SW term for it.
Thank you for the answers and explanations, they are highly appreciated!
All the best!
Oleg
Oleg Skydan
2018-05-14 09:25:07 UTC
Permalink
Hi Bob!
Post by Bob kb8tq
Post by Oleg Skydan
I think it will be more than enough for my needs, at least now.
Post by Magnus Danielson
From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
Yes. It is approx. 400MHz.
I think I would spend more time working out what happens at “about 400 MHz” X N or
“about 400 MHz / M” …….
If such conditions are detected, I avoid the problem by changing the counter clock.
But that does not solve the effects at "about OCXO" * N or "about OCXO" / M.
It is related to the HW and I can probably control it only partially. I will try
to improve the clock and reference isolation in the "normal" HW and of course I
will thoroughly test such frequencies when that HW is ready.

All the best!
Oleg

Bob kb8tq
2018-05-14 13:13:28 UTC
Permalink
Hi
Hi Bob!
Post by Oleg Skydan
I think it will be more than enough for my needs, at least now.
Post by Magnus Danielson
From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
Yes. It is approx. 400MHz.
I think I would spend more time working out what happens at “about 400 MHz” X N or
“about 400 MHz / M” …….
If such conditions detected, I avoid problem by changing the counter clock. But it does not solve the effects at "about OCXO" * N or "about OCXO" / M. It is related to HW and I can probably control it only partially. I will try to improve clock and reference isolation in the "normal" HW and of cause I will thoroughly test such frequencies when that HW will be ready.
It’s a very common problem in this sort of counter. The “experts” have a lot of trouble with it
on their designs. One answer with simple enough hardware could be to run *two* clocks
all the time. Digitize them both and process the results from both. …. just a thought …. You
still have the issue of a frequency that is a multiple (or sub multiple) of both clocks. With
some care in clock selection you could make that a pretty rare occurrence ( thus making
it easy to identify in firmware ….).


Bob
All the best!
Oleg
Oleg Skydan
2018-05-14 17:50:21 UTC
Permalink
Hi!
Post by Bob kb8tq
Post by Oleg Skydan
If such conditions detected, I avoid problem by changing the counter
clock. But it does not solve the effects at "about OCXO" * N or "about
OCXO" / M. It is related to HW and I can probably control it only
partially. I will try to improve clock and reference isolation in the
"normal" HW and of cause I will thoroughly test such frequencies when
that HW will be ready.
It’s a very common problem in this sort of counter. The “experts” have a
lot of trouble with it
on their designs. One answer with simple enough hardware could be to run *two* clocks
all the time. Digitize them both and process the results from both.
I thought about such a solution; unfortunately it cannot be implemented
because of HW limitations. Switching the 400 MHz clock is also not an ideal
solution, because it will cause trouble for the GPS correction calculations; the
latter can be fixed in software, but it is not an elegant solution. It all
still needs some polishing...
Post by Bob kb8tq
still have the issue of a frequency that is a multiple (or sub multiple)
of both clocks.
The clocks (if we are talking about 400 MHz) have very interesting values like
397501220.703 Hz or 395001831.055 Hz, so it will really occur very rarely.
Also I am not limited to two or three values, so clock switching should
solve the problem, though not in an elegant way, because it breaks the normal
operation of the GPS frequency correction algorithm, so additional steps to fix
that will be required :-\.

BTW, after a quick check of the GPS module specs and the OCXO's, it looks like
a very simple algorithm can be used for frequency correction. The OCXO frequency
can be measured against GPS for a long enough period (some thousands of
seconds; the LR algorithm can be used here also) and we get a correction
coefficient. It can be updated at a rate of once per second (probably we do not
need to do it that fast). I do not believe it can be that simple. I feel I have
missed something :)...
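As a hedged sketch of that idea (my illustration only; the PPS jitter, error value, frequency reading and names are assumptions, not the actual firmware), the counter timestamps the GPS PPS edges with its OCXO-derived clock, a least-squares slope over a few thousand seconds gives the OCXO fractional error, and readings are scaled by the correction:

#include <stdio.h>
#include <stdlib.h>

/* pps[k] = instrument (OCXO-derived) time, in s, of the k-th GPS PPS edge.
   The least-squares slope is "instrument seconds per true second"; its
   deviation from 1 is the OCXO fractional frequency error. */
static double ocxo_fractional_error(const double *pps, long n)
{
    double sx = 0.0, sy = 0.0, sxy = 0.0, sxx = 0.0;
    long k;
    for (k = 0; k < n; k++) {
        sx  += k;
        sy  += pps[k];
        sxy += k * pps[k];
        sxx += (double)k * k;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    return slope - 1.0;                    /* e.g. +2.3e-10 if the OCXO runs fast */
}

int main(void)
{
    double pps[3000];
    long k;
    for (k = 0; k < 3000; k++)             /* simulated OCXO 2.3e-10 fast, +/-10 ns PPS jitter */
        pps[k] = k * (1.0 + 2.3e-10) + 10e-9 * (2.0 * rand() / RAND_MAX - 1.0);

    double err   = ocxo_fractional_error(pps, 3000);
    double f_raw = 9999999.9977;           /* made-up raw reading taken against the OCXO, Hz */
    printf("OCXO fractional error: %+.2e\n", err);
    printf("corrected reading    : %.4f Hz\n", f_raw * (1.0 + err));
    return 0;
}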

All the best!
Oleg

Bob kb8tq
2018-05-14 19:15:18 UTC
Permalink
Hi
Hi!
Post by Bob kb8tq
If such conditions detected, I avoid problem by changing the counter clock. But it does not solve the effects at "about OCXO" * N or "about OCXO" / M. It is related to HW and I can probably control it only partially. I will try to improve clock and reference isolation in the "normal" HW and of cause I will thoroughly test such frequencies when that HW will be ready.
It’s a very common problem in this sort of counter. The “experts” have a lot of trouble with it
on their designs. One answer with simple enough hardware could be to run *two* clocks
all the time. Digitize them both and process the results from both.
I thought about such solution, unfortunately it can not be implemented because of HW limitations. Switching 400MHz clock is also not ideal solution, cause it will make trouble to GPS correction calculations, the latter can be fixed in software, but it is not an elegant solution. It all still needs some polishing...
Post by Bob kb8tq
still have the issue of a frequency that is a multiple (or sub multiple) of both clocks.
The clocks (if we are talking about 400MHz) has very interesting values like 397501220.703Hz or 395001831.055Hz , so it will really occur very rarely. Also I am not limited by two or three values, so clock switching should solve the problem, but not in elegant way, cause it breaks normal work of GPS frequency correction algorithm, so additional steps to fix that will be required :-\.
What I’m suggesting is that if the hardware is very simple and very cheap, simply put two chips on the board.
One runs at Clock A and the other runs at Clock B. At some point in the process you move the decimated data
from B over to A and finish out all the math there ….
BTW, after quick check of the GPS module specs and OCXO's one it looks like a very simple algorithm can be used for frequency correction. OCXO frequency can be measured against GPS for a long enough period (some thousands of seconds, LR algorithm can be used here also) and we have got a correction coefficient. It can be updated at a rate of one second (probably we do not need to do it as fast). I do not believe it can be as simple. I feel I missed something :)…
That is one way it is done. A lot depends on the accuracy of the GPS PPS on your module. It is unfortunately fairly easy to find
modules that are in the 10’s of ns error on a second to second basis. Sawtooth correction can help this a bit. OCXO’s have warmup
characteristics that also can move them a bit in the first hours of use.

More or less, with a thousand second observation time you will likely get below parts in 10^-10, but maybe not to the 1x10^-11 level.

Bob
All the best!
Oleg
Oleg Skydan
2018-05-15 11:47:59 UTC
Permalink
Hi
Post by Bob kb8tq
What I’m suggesting is that if the hardware is very simple and very cheap,
simply put two chips on the board.
One runs at Clock A and the other runs at Clock B. At some point in the
process you move the decimated data
from B over to A and finish out all the math there ….
The hardware is simple and cheap because it is all digital, requires no
calibration, and the same HW is capable of driving the TFT, handling the UI,
and providing all the control functionality for the input conditioning circuits,
the GPS module, etc. It also provides a USB interface for data exchange or remote
control. So doubling it is not the way to go if I want to keep things simple
and relatively cheap.

I think I will stay with the current plans for the HW and try to handle the
troubles with GPS timing in software. I have to make an initial variant of the HW, so
I will be able to move on with the SW part towards a useful counter. Then I
will see how well it performs and will decide whether it satisfies the
requirements or I need to change something.
Post by Bob kb8tq
Post by Oleg Skydan
BTW, after quick check of the GPS module specs and OCXO's one it looks
like a very simple algorithm can be used for frequency correction. OCXO
frequency can be measured against GPS for a long enough period (some
thousands of seconds, LR algorithm can be used here also) and we have got
a correction coefficient. It can be updated at a rate of one second
(probably we do not need to do it as fast). I do not believe it can be as
simple. I feel I missed something :)…
That is one way it is done. A lot depends on the accuracy of the GPS PPS
on your module.
The module is a uBlox NEO-6M. I know the NEO-6T is better suited to my needs,
but the first one was easy to get and insanely cheap. It should be
enough to start with.
Post by Bob kb8tq
More or less, with a thousand second observation time you will likely get
below parts in 10^-10, but maybe not to the 1x10^-11 level.
1e-10 should satisfy my requirements. More sophisticated algorithm can be
developed and used later, if needed.

Thanks!
Oleg

Oleg Skydan
2018-05-18 20:51:10 UTC
Permalink
Hi!

--------------------------------------------------
Post by Oleg Skydan
Post by Magnus Danielson
From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
Yes. It is approx. 400MHz.
OK, good to have that verified. Free-running or locked to a 10 MHz
reference?
Locked to OCXO (10MHz).
OK. I saw some odd frequencies, and I agree with Bob that, if you can,
using two of those with a non-trivial relationship can get you
really good performance.
I can use two or more, but unfortunately not simultaneously. So I will
switch frequency if the problem is detected. Switching will interact with
GPS data processing, but that probably can be fixed in software (I had no
time to investigate the possible solutions and find the best one yet).

BTW, the single-shot resolution can be doubled (to 1.25 ns) with almost no
additional HW (just a delay line for a bit more than 1.25 ns and some
resistors). I am not sure if it is worth doing (it will also halve the timestamping
speed and double the timestamp memory requirements, so in the averaging modes
it will only be a sqrt(2) improvement).

All the best!
Oleg

Oleg Skydan
2018-05-27 15:52:12 UTC
Permalink
Hi!
You build two sums C and D: one is the sum of the phase-samples and the other is
the sum of the phase-samples scaled by their index n in the block. From these you can
then, using the formulas I provided, calculate the least-square phase and
frequency, and using the least-square frequency measures you can do
PDEV. The up-front processing is thus cheap, and there are methods to
combine measurement blocks into longer measurement blocks, thus
decimation, using relatively simple linear processing on the block sums
C and D, with their respective lengths. The end result is that you can
very cheaply decimate data in HW/FW and then extend the properties to
arbitrarily long observation intervals using cheap software processing, and
create unbiased least-square measurements this way. Once the linear
algebra of least-square processing has vanished in a puff of logic, it
is fairly simple processing with very little memory requirement at
hand. For multi-tau, you can reach O(N log N) type of processing rather
than O(N^2), which is pretty cool.
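A minimal sketch of that two-sum bookkeeping (illustrative C of my own; the least-squares slope formula is just the standard one re-expressed in C, D and N, and the normalization in the draft paper may differ):

#include <stdio.h>

typedef struct { double C, D; long N; } block_t;

/* Two multiply-accumulates per time-stamp, as described above. */
static void block_add(block_t *b, double x)
{
    b->C += x;            /* sum of time-stamps           */
    b->D += b->N * x;     /* sum of index-weighted stamps */
    b->N += 1;
}

/* Standard least-squares slope written only in terms of C, D and N
   (indices 0..N-1): slope = (12D - 6(N-1)C) / (N(N^2-1)).  For event
   time-stamps the slope is the average period; frequency is its inverse. */
static double block_frequency(const block_t *b)
{
    double N = (double)b->N;
    double slope = (12.0 * b->D - 6.0 * (N - 1.0) * b->C) / (N * (N * N - 1.0));
    return 1.0 / slope;
}

int main(void)
{
    block_t b = { 0.0, 0.0, 0 };
    long n;
    for (n = 0; n < 100000; n++)
        block_add(&b, n / 5.0e6);          /* ideal 5 MHz event times, for illustration */
    printf("least-squares frequency: %.3f Hz\n", block_frequency(&b));
    return 0;
}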
I had some free time today to study the document you suggested and do
some experiments in matlab - it was very useful reading and experiments,
thanks!
Thanks for the kind words!
It looks like the proposed method of decimation can be
efficiently realized on the current HW.
I had some free time yesterday and today, so I decided to test the new
algorithms on the real hardware (the HW is still an old "ugly construction"
one, but I hope I will have some time to make normal HW - I have already got
almost all components I need).

I had to modify the original decimation scheme you propose in the paper so
that it better fits my HW; the calculation precision and speed should also be
higher now. A nice side effect - I do not need to care about phase
unwrapping anymore. I can prepare a short description of the modifications
and post it here, if that is of interest.

It works like a charm!

The new algorithm (based on the C and D sums calculation and decimation) uses
much less memory (less than 256 KB for any gating time/sampling speed; the
old one (direct LR calculation) was very memory hungry - it used
4 x Sampling_Rate bytes/s, i.e. 20 MB per second of gate time at 5 MSPS). Now I
can fit all data into the internal memory and have a single-chip digital
part of the frequency counter, well, almost single chip ;) The timestamping
speed has increased and is now limited by the bus/bus matrix switch/DMA unit
at a bit more than 24 MSPS with continuous real time data processing. It
looks like that is the limit for the used chip (I expected somewhat higher
numbers). The calculation speed is also much higher now (approx. 23 ns per
timestamp, so up to 43 MSPS can be processed in real time). I plan to stay at
a 20 MSPS rate, or 10 MSPS with the doubled time resolution (1.25 ns). That will
leave plenty of CPU time for the UI/communication/GPS/statistics stuff.

I will probably throw out the power-hungry and expensive SDRAM chip or use
a much smaller one :).

I have some plans to experiment with doubling the one-shot resolution down
to 1.25 ns. I do not see much benefit from it, but it can be done with just a
piece of coax and a couple of resistors, so it is interesting to try :).

All the best!
Oleg UR3IQO


Magnus Danielson
2018-05-27 16:58:56 UTC
Permalink
Hi Oleg,
Post by Oleg Skydan
Hi!
It looks like the proposed method of decimation can be
efficiently realized on the current HW.
I had some free time yesterday and today, so I decided to test the new
algorithms on the real hardware (the HW is still an old "ugly
construction" one, but I hope I will have some time to make normal HW -
I have already got almost all components I need).
I had to modify the original decimation scheme you propose in the paper,
so it better fits my HW, also the calculation precision and speed should
be higher now.
The point about the decimation scheme I did was to provide a toolbox,
and as long as you respect the rules within that toolbox you can adapt
it just as you like. As long as the sums C and D become correct, your
path to them can be whatever.
Post by Oleg Skydan
The nice side effect - I do not need to care about phase
unwrapping anymore.
You should always care about how that works out, and if you play your
cards right, it works out very smoothly.
Post by Oleg Skydan
I can prepare a short description of the
modifications and post it here, if it is interesting.
Yes please do, then I can double check it.
Post by Oleg Skydan
It works like a charm!
Good. :)
Post by Oleg Skydan
The new algorithm (base on C and D sums calculation and decimation) uses
much less memory (less than 256KB for any gaiting time/sampling speed,
the old one (direct LR calculation) was very memory hungry - it used
4xSampling_Rate bytes/s - 20MB per second of the gate time for 5MSPS).
This is one of the benefits of this. Assuming the same tau0, it is all
contained in the C, D and N triplet, and the memory need of these values
can be trivially analyzed, but it is very small, so it's a really
effective decimation technique while maintaining the least-square
properties.
Post by Oleg Skydan
Now I can fit all data into the internal memory and have a single chip
digital part of the frequency counter, well, almost single chip ;) The
timestamping speed has increased and is limited now by the bus/bus
matrix switch/DMA unit at a bit more then 24MSPS with continuous real
time data processing. It looks like it is the limit for the used chip (I
expected a bit higher numbers).
Yeah, now you can move your hardware focus to considering interpolation
techniques beyond the processing power of least-square estimation, which
integrates noise way down.
Post by Oleg Skydan
The calculation speed is also much higher now (approx 23ns per one
timestamp, so up to 43MSPS can be processed in realtime).
Just to indicate that my claim for "High speed" is not completely wrong.

For each time-stamp, the pseudo-code becomes:

C = C + x_0
D = D + n*x_0
n = n + 1

Whenever n reaches N, C and D are output, and the values C, D and n are
set to 0.

However, this may be varied in several fun ways, but is left over as an
exercise for the implementer. Much of the other complexity is gone, so
this is the fun problem.
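And a hedged sketch of the block-combining (decimation) step as I read it, assuming each block's D uses a local index starting at 0 and both blocks share one time origin (the draft paper, and Oleg's offset-removing variant, handle the bookkeeping differently):

#include <stdio.h>

typedef struct { double C, D; long N; } block_t;

static void block_add(block_t *b, double x)
{
    b->C += x;  b->D += b->N * x;  b->N += 1;
}

/* Merge two consecutive blocks whose D sums use local indices 0..N-1:
   the second block's indices shift by a.N when seen from the combined block. */
static block_t block_merge(block_t a, block_t b)
{
    block_t m;
    m.C = a.C + b.C;
    m.D = a.D + b.D + (double)a.N * b.C;
    m.N = a.N + b.N;
    return m;
}

int main(void)
{
    block_t whole = {0, 0, 0}, first = {0, 0, 0}, second = {0, 0, 0};
    long n;
    for (n = 0; n < 2000; n++) {
        double x = n / 5.0e6;                  /* same made-up event times as before */
        block_add(&whole, x);
        block_add(n < 1000 ? &first : &second, x);
    }
    block_t merged = block_merge(first, second);
    printf("direct: C=%.9g  D=%.9g  N=%ld\n", whole.C, whole.D, whole.N);
    printf("merged: C=%.9g  D=%.9g  N=%ld\n", merged.C, merged.D, merged.N);
    return 0;
}

The two printed triplets come out identical, which is the point: the long block's sums can be built from the short blocks' sums alone.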
Post by Oleg Skydan
I plan to stay at 20MSPS rate or 10MSPS with the
double time resolution (1.25ns). It will leave a plenty of CPU time for
the UI/communication/GPS/statistics stuff.
Sounds like a good plan.
Post by Oleg Skydan
I will probably throw out the power hungry and expensive SDRAM chip or
use much smaller one :).
Yeah, it would only be if you build multi-tau PDEV plots that you would
need much memory, other than that it is just buffer memory to buffer
before it goes to off-board processing, at which time you would need to
convey the C, D, N and tau0 values.
Post by Oleg Skydan
I have some plans to experiment with doubling the one shoot resolution
down to 1.25ns. I see no much benefits from it, but it can be made with
just a piece of coax and a couple of resistors, so it is interesting to
try :).
Please report on that progress! Sounds fun!

Cheers,
Magnus
Magnus Danielson
2018-06-06 21:18:57 UTC
Permalink
Hi Oleg,
Hi, Magnus!
Sorry for the late answer; I injured my left eye last Monday, so I had
a very limited ability to use the computer.
Sorry to hear that. Hope you heal up well and quick enough.
Post by Magnus Danielson
As long as the sums C and D becomes correct, your
path to it can be whatever.
Yes. It produces the same sums.
Post by Magnus Danielson
Yes please do, then I can double check it.
I have written a note and attached it. The described modifications to the
original method were successfully tested on my experimental HW.
You should add the basic formula

x_{N_1+n} = x_{N_1} + x_n^0

prior to (5) and explain that the expected phase-ramp within the block
has a common offset x_{N_1} and that the x_n^0 series is the
series of values with that offset removed. This is fine,
it just needs to be introduced before it is applied in (5).

Notice that E as introduced in (8) and (9) is not needed, as you can
directly convert it into N(N_2-1)/2.

Anyway, you have clearly understood the toolbox given to you, and your
contribution is to play the same game but reduce the needed dynamic range
of the blocks. Neat. I may include that, with due reference.
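
For reference, the basic block combination (without the per-block offset
removal described in the note) follows directly from re-indexing the
second block - a sketch, assuming samples are indexed 0..N-1 within each
block:

    #include <stdint.h>

    typedef struct { uint64_t C, D; uint32_t N; } cd_sums_t;

    /* Combine adjacent blocks A (earlier) and B (later) into one block.
       Re-indexing B's samples from local n to global N_A + n gives
       D = D_A + N_A*C_B + D_B, while C and N simply add. */
    static cd_sums_t cd_combine(cd_sums_t a, cd_sums_t b)
    {
        cd_sums_t r;
        r.C = a.C + b.C;
        r.D = a.D + (uint64_t)a.N * b.C + b.D;
        r.N = a.N + b.N;
        return r;
    }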
Post by Magnus Danielson
Yeah, now you can move your hardware focus to interpolation
techniques that go beyond the processing gain of the least-square estimation,
which already integrates the noise way down.
If you are talking about adding traditional HW interpolation of the
trigger events, I have no plans to do it. It is not possible while
keeping the 2.5ns base counter resolution (there is no way to output a 400MHz
clock signal from the chip), and I do not want to add extra complexity
to the HW of this project.
But the HW I use can sample up to 10 timestamps simultaneously. So,
theoretically, I can push the one-shot resolution down to 250ps using several
delay lines. I do not think going all the way down to 250ps makes much
sense (and I have other plans for that additional HW), but a 2x or 4x
one-shot resolution improvement (down to 1.25ns or 625ps) is relatively
simple to implement in HW and should be a good thing to try.
Sounds fun!
Post by Magnus Danielson
Post by Oleg Skydan
I will probably throw out the power hungry and expensive SDRAM chip or
use much smaller one :).
Yeah, it would only be if you build multi-tau PDEV plots that you would
need much memory, other than that it is just buffer memory to buffer
before it goes to off-board processing, at which time you would need to
convey the C, D, N and tau0 values.
Yes, I want to produce multi-tau PDEV plots :).
Makes good sense. :)
They can be computed with a small memory footprint, but they will be
non-overlapped PDEVs, so the confidence at large taus will be poor
(for practical measurement durations). I have working code that
implements such an algorithm. It uses only 272 bytes of memory for
each decade (1-2-5 values).
Seems very reasonable. If you are willing to use more memory, you can do
overlapping once the data is decimated down to a suitable rate. On the other
hand, considering the sample rate, there is a lot of gain already.
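
(A minimal PC-side sketch of a non-overlapped PDEV estimator - not Oleg's
firmware routine - assuming the block sums have already been turned into
least-squares frequency estimates y[k] over adjacent intervals of length
tau:)

    #include <math.h>
    #include <stddef.h>

    /* Non-overlapped PDEV estimate from adjacent least-squares frequency
       estimates y[0..m-1], each over an interval tau:
       PDEV(tau) = sqrt( 0.5 * mean( (y[k+1] - y[k])^2 ) ). */
    static double pdev_nonoverlapped(const double *y, size_t m)
    {
        double sum = 0.0;
        if (m < 2)
            return 0.0;
        for (size_t k = 0; k + 1 < m; k++) {
            double d = y[k + 1] - y[k];
            sum += d * d;
        }
        return sqrt(0.5 * sum / (double)(m - 1));
    }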
I need to think about how to do the overlapping PDEV calculations with minimal
memory/processing power requirements (I am aware that the decimation
routines should not use the overlapped calculations).
It's fairly simple: as you decimate samples and/or blocks, the produced
blocks overlap one way or another. The multiple overlap variants should
each behave as a complete PDEV stream, and the variances can then be
added safely.
BTW, is there any "optimal overlapping"? Or should I just use as much
data as I can process?
"optimal overlapping" would be when all overlapping variants is used,
that is all with tau0 offsets available. When done for Allan Deviation
some refer to this as OADEV. This is however an misnomer as it is an
ADEV estimator which just has better confidence intervals than the
non-overlapping ADEV estimator. Thus, both estimator algorithms have the
same scale of measure, that of ADEV, but different amount of Equivalent
Degrees of Freedom (EDF) which has direct implications on the confidence
interval bounds. The more EDF, the better confidence interval. The more
overlapping, the more EDF. Further improvements would be TOTAL ADEV and
Theo, which both aim to squeeze out as much EDF as possible from the
dataset, in an attempt of reducing the length of measurement.
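
To make the estimator difference concrete, here is a minimal sketch (the
standard textbook form, stated here only for illustration) of the
all-overlaps estimator computed directly from phase samples:

    #include <math.h>
    #include <stddef.h>

    /* Overlapping ADEV estimator from phase samples x[0..M-1] (seconds),
       spaced tau0 apart, at averaging factor m (tau = m*tau0). Using every
       available tau0 offset raises the EDF but still estimates plain ADEV. */
    static double adev_overlapping(const double *x, size_t M, size_t m,
                                   double tau0)
    {
        double tau = (double)m * tau0;
        double sum = 0.0;
        size_t terms;
        if (M < 2 * m + 1)
            return 0.0;
        terms = M - 2 * m;
        for (size_t i = 0; i < terms; i++) {
            double d = x[i + 2 * m] - 2.0 * x[i + m] + x[i];
            sum += d * d;
        }
        return sqrt(sum / (2.0 * (double)terms * tau * tau));
    }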
Post by Magnus Danielson
Please report on that progress! Sounds fun!
I will drop a note when I move on to the next step. Things are going a
bit slower now.
Take care. Heal up properly. It's a hobby after all. :)

Good work there.

Cheers,
Magnus
Thanks!
Oleg
Oleg Skydan
2018-06-21 13:05:26 UTC
Permalink
Hi!
Post by Magnus Danielson
I have written a note and attached it. The described modifications to the
original method were successfully tested on my experimental HW.
You should add the basic formula
x_{N_1+n} = x_{N_1} + x_n^0
prior to (5) and explain that the expected phase-ramp within the block
has a common offset x_{N_1} and that the x_n^0 series is the
series of values with that offset removed. This is fine,
it just needs to be introduced before it is applied in (5).
I have corrected the document and put it here:
http://skydan.in.ua/FC/Efficient_C_and_D_sums.pdf

It should be clearer now.
Post by Magnus Danielson
Notice that E as introduced in (8) and (9) is not needed, as you can
directly convert it into N(N_2-1)/2.
Oh! I should notice it, thanks for the valuable comment!
Post by Magnus Danielson
They can be computed with a small memory footprint, but they will be
non-overlapped PDEVs, so the confidence at large taus will be poor
(for practical measurement durations). I have working code that
implements such an algorithm. It uses only 272 bytes of memory for
each decade (1-2-5 values).
Seems very reasonable. If you are willing to use more memory, you can do
overlapping once the data is decimated down to a suitable rate. On the other
hand, considering the sample rate, there is a lot of gain already.
I have optimized the continuous PDEV calculation algorithm, and it uses only
140 bytes per decade now.

I will probably not implement overlapping PDEV calculations, to keep
things simple (with no external memory), and will just do continuous PDEV
calculations. The more sophisticated calculations can easily be done on
the PC side.
Post by Magnus Danielson
... but a 2x or 4x
one-shot resolution improvement (down to 1.25ns or 625ps) is relatively
simple to implement in HW and should be a good thing to try.
So, I tried it with a "quick and dirty" HW. It appeared to be not as simple
in real life :) There was a problem (probably the crosstalk or grounding
issue) which leaded to unstable phase measurements. So, I got no
improvements (the results with 1.25ns resolution were worse then with the
2.5ns resolution). I have to do more experiments with better HW
implementation.
Post by Magnus Danielson
Take care. Heal up properly. It's a hobby after all. :)
Thanks!

Best!
Oleg

Magnus Danielson
2018-06-23 16:40:04 UTC
Permalink
Hi Oleg,
Post by Oleg Skydan
Hi!
Post by Magnus Danielson
I have written a note and attached it. The described modifications to the
original method were successfully tested on my experimental HW.
You should add the basic formula
x_{N_1+n} = x_{N_1} + x_n^0
prior to (5) and explain that the expected phase-ramp within the block
has a common offset x_{N_1} and that the x_n^0 series is the
series of values with that offset removed. This is fine,
it just needs to be introduced before it is applied in (5).
http://skydan.in.ua/FC/Efficient_C_and_D_sums.pdf
It should be clearer now.
It is much better now. You should consider publishing it, with more
description of the surrounding setup.
Post by Oleg Skydan
Post by Magnus Danielson
Notice that E as introduced in (8) and (9) is not needed, as you can
directly convert it into N(N_2-1)/2.
Oh! I should notice it, thanks for the valuable comment!
Well, you should realize that it is exactly sums like these that I need
to solve for the full processing trick, so it is natural that they should be
used even for this application of the basic approach.
Post by Oleg Skydan
Post by Magnus Danielson
They can be computed with a small memory footprint, but they will be
non-overlapped PDEVs, so the confidence at large taus will be poor
(for practical measurement durations). I have working code that
implements such an algorithm. It uses only 272 bytes of memory for
each decade (1-2-5 values).
Seems very reasonable. If you are willing to use more memory, you can do
overlapping once the data is decimated down to a suitable rate. On the other
hand, considering the sample rate, there is a lot of gain already.
I have optimized the continuous PDEV calculation algorithm, and it uses only
140 bytes per decade now.
I will probably not implement overlapping PDEV calculations, to keep
things simple (with no external memory), and will just do continuous
PDEV calculations. The more sophisticated calculations can easily be
done on the PC side.
Notice that tau0, N, C and D should be delivered to the PC, one way or
another. That is what you need in order to continue and extend the processing,
so you do not want to deliver only phase or frequency measures.
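
As a sketch of what the PC side can then do with each delivered
(tau0, N, C, D) block - the standard linear-regression slope, assuming
phase samples x_n in seconds at t = n*tau0, n = 0..N-1:

    /* Least-squares (Omega-style) frequency estimate from one block:
       y = (12*D - 6*(N-1)*C) / (tau0 * N * (N*N - 1)), valid for N >= 2. */
    static double ls_frequency(double C, double D, double N, double tau0)
    {
        return (12.0 * D - 6.0 * (N - 1.0) * C) / (tau0 * N * (N * N - 1.0));
    }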
Post by Oleg Skydan
Post by Magnus Danielson
... but a 2x or 4x
one-shot resolution improvement (down to 1.25ns or 625ps) is relatively
simple to implement in HW and should be a good thing to try.
So, I tried it with a "quick and dirty" HW. It appeared to be not as
simple in real life :) There was a problem (probably the crosstalk or
grounding issue) which leaded to unstable phase measurements. So, I got
no improvements (the results with 1.25ns resolution were worse then with
the 2.5ns resolution). I have to do more experiments with better HW
implementation.
Yes, for that kind of timing you need good separation, and
ground-bounce can be troublesome. It's a future improvement once you
learn how to design that part properly.

Cheers,
Magnus

Glenn Little WB4UIV
2018-05-27 17:02:19 UTC
Permalink
The MSDS is here:
https://simplegreen.com/data-sheets/

They claim that it is non-reactive and chemically stable.
It is for water-tolerant surfaces and should be rinsed.
Probably due to the citric acid.

Glenn
Post by Oleg Skydan
Hi!
You build two sums C and D, one is the phase-samples and the other is
phase-samples scaled with their index n in the block. From this you can
then using the formulas I provided calculate the least-square phase and
frequency, and using the least square frequency measures you can do
PDEV. The up-front processing is thus cheap, and there are methods to
combine measurement blocks into longer measurement blocks, thus
decimation, using relatively simple linear processing on the block sums
C and D, with their respective lengths. The end result is that you can
very cheaply decimate data in HW/FW and then extend the properties to
arbitrary long observation intervals using cheap software
processing and
create unbiased least square measurements this way. Once the linear
algebra of least square processing has vanished in a puff of logic, it
is fairly simple processing with very little memory requirements at
hand. For multi-tau, you can reach O(N log N) type of processing rather
than O(N^2), which is pretty cool.
I had some free time today to study the document you suggested and do
some experiments in matlab - it was very useful reading and
experiments,
thanks!
Thanks for the kind words!
It looks like the proposed method of decimation can be
efficiently realized on the current HW.
I had some free time yesterday and today, so I decided to test the new
algorithms on the real hardware (the HW is still an old "ugly
construction" one, but I hope I will have some time to make normal HW
- I have already got almost all components I need).
I had to modify the original decimation scheme you propose in the
paper, so it better fits my HW, also the calculation precision and
speed should be higher now. The nice side effect - I do not need to
care about phase unwrapping anymore. I can prepare a short description
of the modifications and post it here, if it is interesting.
It works like a charm!
The new algorithm (based on C and D sums calculation and decimation)
uses much less memory (less than 256KB for any gating time/sampling
speed, the old one (direct LR calculation) was very memory hungry - it
used 4xSampling_Rate bytes/s - 20MB per second of the gate time for
5MSPS). Now I can fit all data into the internal memory and have a
single chip digital part of the frequency counter, well, almost single
chip ;) The timestamping speed has increased and is limited now by the
bus/bus matrix switch/DMA unit at a bit more than 24MSPS with
continuous real-time data processing. It looks like it is the limit
for the used chip (I expected a bit higher numbers). The calculation
speed is also much higher now (approx 23ns per one timestamp, so up to
43MSPS can be processed in realtime). I plan to stay at 20MSPS rate or
10MSPS with the double time resolution (1.25ns). It will leave
plenty of CPU time for the UI/communication/GPS/statistics stuff.
I will probably throw out the power hungry and expensive SDRAM chip or
use much smaller one :).
I have some plans to experiment with doubling the one-shot resolution
down to 1.25ns. I do not see much benefit from it, but it can be done
with just a piece of coax and a couple of resistors, so it is
interesting to try :).
All the best!
Oleg UR3IQO
--
-----------------------------------------------------------------------
Glenn Little ARRL Technical Specialist QCWA LM 28417
Amateur Callsign: WB4UIV ***@arrl.net AMSAT LM 2178
QTH: Goose Creek, SC USA (EM92xx) USSVI LM NRA LM SBE ARRL TAPR
"It is not the class of license that the Amateur holds but the class
of the Amateur that holds the license"

Glenn Little WB4UIV
2018-05-27 17:03:31 UTC
Permalink
It appears that I replied to the wrong message, please ignore.

Glenn
Magnus Danielson
2018-05-12 21:20:51 UTC
Permalink
Hi,
Hi!
ADEV assumes brick-wall filtering up to the Nyquist frequency as a result
of the sample rate. When you filter the data, as you do with a Linear
Regression / Least Square estimation, the actual bandwidth will be much
less, so the ADEV measures will be biased for lower taus; for higher
taus less of the ADEV kernel is affected by the filter and
thus the bias reduces.
Thanks for the clarification. Bob already pointed me to the problem, and after
some reading the *DEV theme seems clearer.
The mistake is easy to make. Back in the day, it was a given that you
should always state the system bandwidth alongside an ADEV plot, a
practice that later got lost. Many people do not know what bandwidth
they have, or the effect it has on the plot. I've even heard a
distinguished and knowledgeable person in the field admit to doing it
incorrectly.
Do the ADEV plots I got look reasonable for the "mid range"
OCXOs used (see the second plot for the long run test)?
You probably want to find the source of the wavy response seen in the orange
and red traces.
I have already found the problem. It is a HW problem related to poor
isolation between the reference OCXO signal and the counter input signal clock
line (it is also possible there are some grounding or power supply
decoupling problems - the HW is made in "ugly construction" style). When
the input clock frequency is very close (0.3..0.4Hz difference) to the
OCXO subharmonic, the problem becomes visible (it is not the FW problem
discussed before, because the counter reference is not a harmonic of the OCXO
anymore).
Makes sense. Cross-talk has been the performance limit of several counters,
and care should be taken to reduce it.
It looks like some commercial counters suffer from that
problem too. After I connected the OCXO and input feed lines with short
pieces of coax this effect decreased greatly, but did not disappear.
Cross talk exists for sure, but there is a similar effect too which is
not due to cross-talk but due to how the counter is able to interpolate
certain frequencies.
The "large N" plots were measured with the input signal 1.4Hz (0.3ppm)
higher then 1/2 subharmonic  of the OCXO frequency, with such frequency
difference that problem completely disappears. I will check for this
problem again when I will move the HW to the normal PCB.
Yes.
In fact, you can do an Omega-style counter you can use for PDEV; you just
need the right approach to be able to decimate the data. See
https://arxiv.org/abs/1604.01004
Thanks for the document. It will take some time to study, and maybe I will
add features to the counter to calculate correct PDEV.
It suggests a very practical method for FPGA-based counters, so that you
can make use of the high rate of samples that you have and reduce it in
HW before handing off to SW. As you decimate the data, you do not
want to lose the Least Square property, and this is a practical method
of achieving that.
If ADEV is needed, the averaging
interval can be reduced and several measurements (more than eight) can
be combined into one point (creating a new weighting function which
resembles the usual Pi one, as shown in [1] p.54); it should then be
possible to calculate the usual ADEV from such data. As far as I
understand, the filter formed by the resulting weighting
function will have a wider bandwidth, so the impact on ADEV will be
smaller and it can be computed correctly. Am I missing something?
Well, you can reduce the averaging interval to 1 and then compute the
ADEV, but it does not behave as the MDEV any longer.
With no averaging it will be a simple reciprocal counter with a time
resolution of only 2.5ns. The idea was to use trapezoidal weighting, so
the counter becomes something "between" Pi and Delta counters. When
the upper base of the weighting-function trapezium has zero length
(triangular weighting) it is the usual Delta counter; if it is infinitely
long the result should converge to the usual Pi counter. Prof. Rubiola
claims that if the ratio of upper to lower base is more than 8/9, ADEV
plots made from such data should be sufficiently close to the usual ADEV. Of
course the gain from the averaging will be at least 3 times less than
from the usual Delta averaging.
You do not want to mix pre-filtering and ADEV that way. We can do things
better.
Maybe I need to find or make "not so good" signal source and measure its
ADEV using above method and compare with the traditional. It should be
interesting experiment.
It is always good to experiment and learn from not-so-stable stuff,
stuff with significant drift, and very stable stuff.
What you can do is calculate MDEV or PDEV, and then apply
the suitable bias function to convert the level to that of ADEV.
That can be done if the statistics are calculated inside the counter, but
it will not make the exported data suitable for post-processing with
TimeLab or other software that is not aware of what is going on inside
the counter.
Exactly. You need to continue the processing in the counter for the
post-processing to produce unbiased values.
There are many ways to mess it up.
Yes, they give relatively close values of deviation, where PDEV goes
somewhat lower, indicating that there is a slight advantage of the LR/LS
frequency estimation measure over that of the Delta counter, as given by
its MDEV.
Here is another question - how does one correctly compare averaging lengths
for a Delta counter? I have 5e6 timestamps in one second, so Pi and Omega
counters process 5e6 samples in total, and one measurement also contains 5e6
samples, but the Delta counter processes 10e6 in total, with each of the
averaged measurements containing 5e6 samples. The Delta counter actually uses
twice as much data. What should be kept equal when comparing different counter
types - the number of samples in one measurement (gating time) or the
total number of samples processed?
How do you get such different event rates?

If you have 5 MHz, the rising edges give you 5E6 events, and whichever type
of processing you do - Pi (none), Delta or Omega - is just a different type
of post-processing on the raw phase data-set.

Cheers,
Magnus
Bob kb8tq
2018-05-13 13:49:12 UTC
Permalink
Hi

I guess it is time to ask:

Is this a commercial product you are designing?

If so, that raises a whole added layer to this discussion in terms of “does it do
what it says it does?”.

Bob
Hi Bob!
It’s only useful if it is accurate. Since you can “do code” that gives you results that are better than reality,
simply coming up with a number is not the full answer. To be useful as ADEV, it needs to be correct.
I understand that, so I am trying to investigate the problem and understand what can be done (if anything :).
I’m sure it will come out to be a very cool counter. My *only* concern here is creating inaccurate results
by stretching too far with what you are trying to do. Keep it to the stuff that is accurate.
I am interested in accurate results, or at least results with well-defined limitations, for a few specific measurements/modes. So I will try to make the results as accurate as I can while keeping the hardware simple.
Thanks!
Oleg
Magnus Danielson
2018-05-13 18:40:52 UTC
Permalink
Hi,
Post by Bob kb8tq
If so, that raises a whole added layer to this discussion in terms of “does it do
what it says it does?”.
This question is also important for amateur/hobby measurement equipment. I do not need equipment that "does not do what it says it does" even if it is built for hobby use.
The *DEV calculation theme has many important details I want to understand correctly; sorry if I asked too many questions (some of them probably naive), and thank you for the help - it is very much appreciated! I hope our discussion is useful not only to me.
You are very much *not* the first person to run into these issues. They date back to the very early use of things like
ADEV. The debate has been active ever since. There are a few other sub-debates that also come up. The proper
definition of ADEV allows “drift correction” to be used. Just how you do drift correction is up to you. As with filtering,
drift elimination impacts the results. It also needs to be defined (if used).
There are actually two uses of ADEV: one is to represent the amplitude of
the various noise types, and the other is to represent the behavior of
the frequency measure. The classical use is the former, and you do not
want to fool those estimates, but for the latter pre-filtering is not
only allowed, but encouraged!

Cheers,
Magnus
Magnus Danielson
2018-05-11 16:58:15 UTC
Permalink
Hi Dana,
I'm a bit fuzzy, then, on the definition of ADEV. I was under the
impression that one measured a series of
"phase samples" at the desired spacing, then took the RMS value of that
series, not just a single sample,
as the ADEV value.
You cannot use RMS here, as the noise does not converge as you build the
average. This was a huge issue before the Allan processing approach
essentially went for combining the average of 2-point RMS measurements,
which ends up being a subtraction of two frequency measures.
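
In symbols (the standard Allan formulation, stated here just to make the
point explicit): with fractional-frequency averages y_k over adjacent
intervals of length tau,

    AVAR(tau) = (1/2) * < (y_{k+1} - y_k)^2 >,   ADEV = sqrt(AVAR),

i.e. an average of squared first differences rather than an RMS of the
y_k themselves, which is why it converges for the common oscillator
noise types.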
Can anybody say which it is? The RMS approach seems to make better sense
as it provides some measure
of defense against taking a sample that happens to be an outlier, yet
avoids the flaw of tending to average
the reported ADEV towards zero.
Forget about RMS here.

Cheers,
Magnus
Dana (K8YUM)
Hi
If you collect data over the entire second and average that down for a
single point, then no, your ADEV will not be correct.
There are a number of papers on this. What ADEV wants to see is a single
phase “sample” at one second spacing. This is
also at the root of how you get 10 second ADEV. You don’t average the ten
1 second data points. You throw nine data points
away and use one of them ( = you decimate the data ).
What happens if you ignore this? Your curve looks “too good”. The resultant
curve is *below* the real curve when plotted.
A quick way to demonstrate this is to do ADEV with averaged vs decimated
data ….
Bob
Hi
I have got a pair of not-so-bad OCXOs (Morion GK85). I did some
measurements; the results may be interesting to others (sorry if not), so I
decided to post them.
I ran a set of 5-minute-long counter runs (two OCXOs measured
against each other); each point is a 1sec gate frequency measurement with a
different number of timestamps used in the LR calculation (from 10 to 5e6).
The counter provides continuous counting. As you can see, I reach the HW
limitations at 5..6e-12 ADEV (1s tau) with only 1e5 timestamps. The results
look reasonable; the theory predicts 27ps equivalent resolution with 1e5
timestamps, and the sqrt(N) law is clearly seen in the plots. I do not
know what the limiting factor is, whether it is the OCXOs or some counter HW.
I know there are HW problems; some of them were identified during this
experiment. They were expected, since the HW is still just an ugly
construction made from boards left in the "radio junk box" from the
other projects/experiments. I am going to move to a well-designed PCB
with some improvements in HW (and a more or less "normal" analog frontend
with a good comparator, ADCMP604 or something similar, for the "low
frequency" input). But I want to finish my initial tests first; it should help
with the HW design.
Now I have some questions. As you know, I am experimenting with a
counter that uses LR calculations to improve its resolution. The LR data
for each measurement is collected during the gate time only, and the
measurements are continuous. Will ADEV be calculated correctly from
such measurements? I understand that any averaging over a time window
larger than a single measurement time will spoil the ADEV plot. I also
understand that using LR can result in an incorrect frequency estimate for a
signal with large drift (this should not be a problem for the discussed
measurements, at least for the numbers we are talking about).
Do the ADEV plots I got look reasonable for the "mid range"
OCXOs used (see the second plot for the long run test)?
BTW, I see I can interface a GPS module to my counter without additional
HW (except the module itself; do not worry, it will not be another DIY
GPSDO, probably :-) ). I will try to do it. The initial idea is not to try to
lock the reference OCXO to GPS; instead I will just measure GPS against REF
and make corrections using pure math in SW. I see some advantages with
such a design - no high-resolution DAC, no reference for the DAC, no loop, no
additional hardware at all - only the GPS module and software :) (it is in
the spirit of this project)... Of course I will not have a reference signal
that can be used outside the counter; I think I can live with that. It is
worth doing some experiments.
Best!
Oleg UR3IQO
[Attachments: Screenshot (1148).png, Screenshot (1150).png, Screenshot (1149).png]
Tom Van Baak
2018-04-27 14:27:41 UTC
Permalink
Post by Hal Murray
That might be an interesting way to analyze TICC data. It would work
better/faster if you used a custom divider to trigger the TICC as fast as it
can print rather than using the typical PPS.
Hi Hal,

Exactly correct. For more details see this posting:
https://www.febo.com/pipermail/time-nuts/2014-December/089787.html

That's one reason for the 1/10/100/1000 Hz PIC divider chips -- to make measurements at 100PPS instead of 1PPS.

JohnA could have designed the TAPR/TICC to be a traditional two-input A->B Time Interval Counter (TIC) like any counter you see from hp. But instead, with the same hardware, he implemented it as a Time Stamping Counter (TSC) pair. This gives you A->B as a time interval if you want, but it also gives you REF->A and REF->B as time stamps.

You can operate the two channels separately if you want, that is, two completely different DUT measurements at the same time, as if you had two TIC's for the price of one. Or you can run them synchronized, so that you are effectively making three simultaneous measurements: DUTa vs. REF, DUTb vs. REF, and DUTa vs. DUTb.
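
A tiny sketch (hypothetical names, not the TICC firmware): each channel's
events are time-stamped against the common reference, and the classic
time interval is just a difference of the two time stamps.

    /* REF->A and REF->B time stamps for one event pair, in seconds. */
    typedef struct { double ref_to_a; double ref_to_b; } tsc_pair_t;

    /* The traditional A->B time interval falls out as a difference. */
    static double interval_a_to_b(tsc_pair_t p)
    {
        return p.ref_to_b - p.ref_to_a;
    }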

This paper is a must read:

Modern frequency counting principles
http://www.npl.co.uk/upload/pdf/20060209_t-f_johansson_1.pdf

See also:

New frequency counting principle improves resolution
http://tycho.usno.navy.mil/ptti/2005papers/paper67.pdf

Continuous Measurements with Zero Dead-Time in CNT-91
http://www.testmart.com/webdata/mfr_promo/whitepaper_cnt91.pdf

Time & Frequency Measurements for Oscillator Manufacturers using CNT-91
http://www.testmart.com/webdata/mfr_promo/whitepaper_osc%20manu_cnt91.pdf

Some comments and links to HP's early time stamping chip:
https://www.febo.com/pipermail/time-nuts/2017-November/107528.html

/tvb
