The Compact X-ray Light Source is an x-ray source at ASU that allows scientists to study the structures and dynamics of matter on an atomic scale. The radio frequency system that provides the power to accelerate electrons in the Compact X-ray Light Source must operate with a high degree of precision. This thesis measures the precision with which that system performs.
The continuous time-tagging of photon arrival times for high-count-rate sources is necessary for applications such as optical communications, quantum key distribution,
and astronomical measurements. Detection of Hanbury-Brown and Twiss (HBT) single
photon correlations from thermal sources, such as stars, requires a combination of high
dynamic range, long integration times, and low systematics in the photon detection
and time tagging system. The continuous nature of the measurements and the need
for highly accurate timing resolution requires a customized time-to-digital converter
(TDC). A custom-built, two-channel, field-programmable gate array (FPGA)-based
TDC capable of continuously time tagging single photons with sub-clock-cycle timing
resolution was characterized. Auto-correlation and cross-correlation measurements
were used to constrain spurious systematic effects in the pulse count data as a function
of system variables. These variables included, but were not limited to, incident
photon count rate, incoming signal attenuation, and measurements of fixed signals.
Additionally, a generalized likelihood ratio test (GLRT) using maximum likelihood
estimators (MLEs) was derived to detect and estimate correlated photon signal
parameters. The derived GLRT was capable of detecting correlated photon signals in
a laboratory setting with a high degree of statistical confidence. A proof is presented
in which the MLE for the amplitude of the correlated photon signal is shown to be the
minimum variance unbiased estimator (MVUE). The fully characterized TDC was used
in preliminary measurements of astronomical sources using ground-based telescopes.
Finally, preliminary theoretical groundwork is established for the deep space optical
communications system of the proposed Breakthrough Starshot project, in which
low-mass craft will travel to the Alpha Centauri system to collect scientific data from
Proxima b. This theoretical groundwork utilizes recent and upcoming space-based
optical communication systems as starting points for the Starshot communication
system.
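The cross-correlation measurement described above can be sketched as a histogram of pairwise arrival-time differences between the two TDC channels, where correlated photons appear as an excess of counts near zero lag. The function name, window parameters, and two-pointer scan below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def cross_correlate_tags(tags_a, tags_b, max_lag, bin_width):
    """Histogram pairwise arrival-time differences (t_b - t_a) within +/- max_lag.

    tags_a, tags_b : sorted 1-D sequences of photon time tags (same time units).
    Returns (bin_centers, counts).
    """
    diffs = []
    j_lo = 0  # lower pointer into tags_b; never moves backward for sorted input
    for t in tags_a:
        # skip tags_b entries that fall below the lag window for this tag
        while j_lo < len(tags_b) and tags_b[j_lo] < t - max_lag:
            j_lo += 1
        # collect every tags_b entry inside the window
        j = j_lo
        while j < len(tags_b) and tags_b[j] <= t + max_lag:
            diffs.append(tags_b[j] - t)
            j += 1
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts, _ = np.histogram(diffs, bins=edges)
    return 0.5 * (edges[:-1] + edges[1:]), counts
```

For uncorrelated (e.g. attenuated or fixed-signal) inputs the histogram is flat up to Poisson noise, which is what makes this a useful probe of spurious systematic effects.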
Researchers have observed that the frequencies of leading digits in many man-made and naturally occurring datasets follow a logarithmic curve, with numbers whose leading digit is 1 accounting for about 30% of the dataset and numbers whose leading digit is 9 accounting for under 5%. This phenomenon, known as Benford's Law, is highly repeatable and appears in lists of numbers drawn from electricity bills, stock prices, tax returns, house prices, death rates, lengths of rivers, and naturally occurring images. This paper demonstrates that human speech spectra also follow Benford's Law. This observation motivates a new set of features that can be efficiently extracted from speech, and these features are shown to classify between human speech and synthetic speech.
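Benford's Law gives the leading-digit probability P(d) = log10(1 + 1/d), which reproduces the roughly 30% and under-5% figures quoted above. A minimal sketch (helper names are my own, not from the paper):

```python
import math

def benford_prob(d):
    """Probability under Benford's Law that the leading significant digit is d."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    """First significant digit of a positive number, e.g. 0.0042 -> 4."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)
```

Here benford_prob(1) is about 0.301 and benford_prob(9) is about 0.046, and the nine probabilities sum to 1; deviation of an observed leading-digit histogram from these values is the kind of quantity a Benford-based feature can capture.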
In this research, I surveyed existing methods of characterizing epilepsy from electroencephalogram (EEG) data, including the Random Forest algorithm, which many researchers claimed to be the most effective at detecting epileptic seizures [7]. I observed that although many papers reported detection accuracy above 99% using Random Forest, they did not specify when within the 23.6-second seizure interval the detection was declared. In this research, I created a time-series procedure to detect the seizure as early as possible within the 23.6-second epileptic seizure window and found that detection is effective (>92%) as early as the first few seconds of the epileptic episode. I intend to use this research as a stepping stone toward my upcoming Master's thesis research, where I plan to expand the time-series detection mechanism to the pre-ictal stage, which will require a different dataset.
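The early-detection idea above can be sketched as scanning per-window classifier outputs across the seizure interval and reporting the time of the first confirmed positive. The window length, the consecutive-window confirmation rule, and the function name are illustrative assumptions, not the thesis's actual pipeline.

```python
def earliest_detection_time(window_labels, window_sec=1.0, min_consecutive=2):
    """Time (seconds into the seizure interval) at which detection is declared.

    window_labels : sequence of 0/1 classifier outputs, one per window, in order.
    Detection is declared at the end of the first run of `min_consecutive`
    seizure-positive windows; requiring a short run suppresses isolated
    false alarms. Returns None if no such run occurs.
    """
    run = 0
    for i, label in enumerate(window_labels):
        run = run + 1 if label == 1 else 0
        if run >= min_consecutive:
            return (i + 1) * window_sec  # end of the confirming window
    return None
```

With 1-second windows and two-window confirmation, outputs [0, 0, 1, 1, 0] yield a detection latency of 4 seconds, well inside a 23.6-second seizure interval.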