Matching Items (23)

Description

This thesis proposes a novel approach to establishing a trust model in a social network scenario based on users' emails. Email is one of the most important social connections today. By analyzing email exchange activities among users, a social network trust model can be established to judge the trust rate between any two users. The whole trust checking process is divided into two steps: local checking and remote checking. Local checking directly contacts the email server to calculate the trust rate based on the user's own email communication history. Remote checking is a distributed computing process that gets help from the user's social network friends to build the trust rate together. The email-based trust model is built upon a cloud computing framework called MobiCloud. Inside MobiCloud, each user occupies a virtual machine which can directly communicate with the others. Based on this feature, the distributed trust model is implemented as a combination of local analysis and remote analysis in the cloud. Experiment results show that the trust evaluation model can give an accurate trust rate even in a small-scale social network with few social connections. With this trust model, the security of both social network services and email communication could be improved.
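As a rough illustration of the local-checking step only (this is not the thesis's MobiCloud implementation; the record layout, weights, and scoring function below are invented for the sketch), a local trust rate between two users could be estimated from one user's own mailbox like this:

```python
from dataclasses import dataclass

@dataclass
class EmailRecord:
    sender: str
    recipient: str
    is_reply: bool

def local_trust_rate(history: list[EmailRecord], me: str, peer: str) -> float:
    """Estimate a trust rate in [0, 1] for `peer` from `me`'s own email history.

    Heuristic sketch: mutual exchanges and replies are weighted more heavily
    than one-way traffic, then normalized by the user's total mail volume.
    """
    sent = sum(1 for e in history if e.sender == me and e.recipient == peer)
    received = sum(1 for e in history if e.sender == peer and e.recipient == me)
    replies = sum(1 for e in history
                  if e.is_reply and {e.sender, e.recipient} == {me, peer})
    total = sum(1 for e in history if me in (e.sender, e.recipient))
    if total == 0:
        return 0.0
    # Weight mutual traffic and replies more heavily than raw volume.
    score = 1.0 * min(sent, received) + 0.5 * abs(sent - received) + 1.5 * replies
    return min(1.0, score / total)
```

The remote-checking step could then, for example, aggregate the local rates reported by the friends' virtual machines, though the exact aggregation rule is not specified in this abstract.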
Contributors: Zhong, Yunji (Author) / Huang, Dijiang (Thesis advisor) / Dasgupta, Partha (Committee member) / Syrotiuk, Violet (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

In this thesis, I investigate the C*-algebras and related constructions that arise from combinatorial structures such as directed graphs and their generalizations. I give a complete characterization of the C*-correspondences associated to directed graphs as well as results about obstructions to a similar characterization of these objects for generalizations of directed graphs. Viewing the higher-dimensional analogues of directed graphs through the lens of product systems, I give a rigorous proof that topological k-graphs are essentially product systems over N^k of topological graphs. I introduce a "compactly aligned" condition for such product systems of graphs and show that this coincides with the similarly-named conditions for topological k-graphs and for the associated product systems over N^k of C*-correspondences. Finally I consider the constructions arising from topological dynamical systems consisting of a locally compact Hausdorff space and k commuting local homeomorphisms. I show that in this case, the associated topological k-graph correspondence is isomorphic to the product system over N^k of C*-correspondences arising from a related Exel-Larsen system. Moreover, I show that the topological k-graph C*-algebra has a crossed product structure in the sense of Larsen.
Contributors: Patani, Nura (Author) / Kaliszewski, Steven (Thesis advisor) / Quigg, John (Thesis advisor) / Bremner, Andrew (Committee member) / Kawski, Matthias (Committee member) / Spielberg, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Sparse learning is a technique in machine learning for feature selection and dimensionality reduction that finds a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating relevant from irrelevant information has been a topic of focus. In supervised learning such as regression, the data consist of many features and only a subset of the features may be responsible for the result. The features might also carry structural requirements, which introduces additional complexity for feature selection. The sparse learning package (SLEP) provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported by the package: features may be grouped together, these groups may form hierarchies or overlap, and there may be requirements for selecting the most relevant groups among them. Although the solutions are sparse, they are not guaranteed to be robust; for the selection to be robust, certain techniques provide theoretical justification for why particular features are selected. Stability selection is one such method: it allows existing sparse learning methods to select a stable set of features for a given training sample. This is done by assigning a selection probability to each feature: the training data is sub-sampled, a specific sparse learning technique is used to learn the relevant features, this process is repeated a large number of times, and the probability is estimated as the fraction of runs in which a feature is selected. Cross-validation is used to determine the best parameter value over a range of values by choosing the value that gives the maximum accuracy score. With this combination of algorithms, good convergence guarantees, stable feature selection properties, and support for various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebraic subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable features of the SLEP package, and they can be used in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease that gradually leads to dementia. The SLEP package is used for feature selection to obtain the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
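The stability-selection procedure described above is easy to sketch with any off-the-shelf sparse learner; the snippet below uses scikit-learn's Lasso rather than SLEP itself, and the parameter values (regularization strength, number of rounds, subsample fraction, 0.6 threshold) are placeholders, not values from the thesis:

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_rounds=100, subsample=0.5, seed=0):
    """Estimate selection probabilities for each feature by repeatedly fitting
    a sparse model (here an L1-penalized regression) on random subsamples and
    counting how often each coefficient comes out nonzero."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    counts = np.zeros(n_features)
    size = int(subsample * n_samples)
    for _ in range(n_rounds):
        idx = rng.choice(n_samples, size=size, replace=False)
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(X[idx], y[idx])
        counts += (model.coef_ != 0)
    return counts / n_rounds  # selection probability per feature

# Example usage: keep features selected in at least 60% of the rounds.
# X, y = load_ad_dataset()              # hypothetical loader for the AD data
# probs = stability_selection(X, y)
# stable = np.where(probs >= 0.6)[0]
```

In practice the regularization parameter itself would be chosen by cross-validation, as the abstract describes.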
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

In 1959, Iwasawa proved that the size of the $p$-part of the class groups in a $\mathbb{Z}_p$-extension grows as a power of $p$ with exponent ${\mu}p^m+{\lambda}\,m+\nu$ for $m$ sufficiently large. Broadly, I construct conditions to verify whether a given $m$ is indeed sufficiently large. More precisely, let $CG_m^i$ (class group) be the $\epsilon_i$-eigenspace component of the $p$-Sylow subgroup of the class group of the field at the $m$-th level in a $\mathbb{Z}_p$-extension; and let $IACG^i_m$ (Iwasawa analytic class group) be ${\mathbb{Z}_p[[T]]/((1+T)^{p^m}-1,f(T,\omega^{1-i}))}$, where $f$ is the associated Iwasawa power series. It is expected that $CG_m^i$ and $IACG^i_m$ are isomorphic, providing a powerful connection between algebraic and analytic techniques; however, as of yet, this isomorphism is unestablished in general. I consider the existence and the properties of an exact sequence $$0\longrightarrow\ker{\longrightarrow}CG_m^i{\longrightarrow}IACG_m^i{\longrightarrow}\textrm{coker}\longrightarrow0.$$ In the case of a $\mathbb{Z}_p$-extension where the Main Conjecture is established, there exists a pseudo-isomorphism between the respective inverse limits of $CG_m^i$ and $IACG_m^i$. I consider conditions under which such a pseudo-isomorphism immediately gives the existence of the desired exact sequence, and I also consider work-around methods that preserve cardinality when it does not. However, I primarily focus on constructing conditions to verify whether a given $m$ is sufficiently large that the kernel and cokernel of the above exact sequence have become well-behaved, providing similarity of growth both in the size and in the structure of $CG_m^i$ and $IACG_m^i$, as well as conditions to determine whether any such $m$ exists. The primary motivating idea is that if $IACG_m^i$ is relatively easy to work with, and if the relationship between $CG_m^i$ and $IACG_m^i$ is understood, then $CG_m^i$ becomes easier to work with. Moreover, while the motivating framework is stated concretely in terms of the cyclotomic $\mathbb{Z}_p$-extension of $p$-power roots of unity, all results are generally applicable to arbitrary $\mathbb{Z}_p$-extensions, as they are developed in terms of Iwasawa-Theory-inspired, yet abstracted, algebraic results on maps between inverse limits.
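For reference, Iwasawa's growth theorem cited above can be displayed as follows; writing $A_m$ for the $p$-Sylow subgroup of the class group at the $m$-th level (so $CG_m^i$ is an eigenspace of $A_m$), it asserts that there exist invariants $\mu, \lambda \geq 0$ and $\nu$, and an index $m_0$, such that $$\#A_m = p^{\,\mu p^m + \lambda m + \nu} \qquad \text{for all } m \geq m_0.$$ For instance, if $\mu = 0$, $\lambda = 2$, and $\nu = 1$, then $\#A_m = p^{2m+1}$ for every sufficiently large $m$.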
Contributors: Elledge, Shawn Michael (Author) / Childress, Nancy (Thesis advisor) / Bremner, Andrew (Committee member) / Fishel, Susanna (Committee member) / Jones, John (Committee member) / Paupert, Julien (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
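As a minimal sketch of the path-enumeration idea (illustrative only, not the dissertation's algorithm; the graph, per-node dollar costs, and the $0.01 budget are invented), worst-case cost paths over an acyclic control-flow graph can be enumerated and checked against a threshold as follows:

```python
# Toy control-flow graph: each node carries an assumed per-execution monetary
# cost (e.g. billed compute or API charges, in dollars).
COST = {"entry": 0.001, "parse": 0.002, "db_read": 0.010,
        "cache_hit": 0.001, "render": 0.003, "exit": 0.0}
EDGES = {"entry": ["parse"], "parse": ["db_read", "cache_hit"],
         "db_read": ["render"], "cache_hit": ["render"],
         "render": ["exit"], "exit": []}

def paths_over_budget(node, budget, cost_so_far=0.0, path=()):
    """Yield entry-to-exit paths whose accumulated cost exceeds `budget`.

    Assumes an acyclic graph; loops would need iteration bounds, as in
    classical WCET analysis."""
    cost_so_far += COST[node]
    path = path + (node,)
    if not EDGES[node]:                      # reached an exit node
        if cost_so_far > budget:
            yield path, cost_so_far
        return
    for nxt in EDGES[node]:
        yield from paths_over_budget(nxt, budget, cost_so_far, path)

for p, c in paths_over_budget("entry", budget=0.01):
    print(" -> ".join(p), f"costs ${c:.3f}")
# Flags the db_read path (total $0.016); the cache_hit path ($0.007) stays within budget.
```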
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Gray codes are perhaps the best-known structures for listing sequences of combinatorial objects, such as binary strings. Simply defined as a minimal change listing, Gray codes vary greatly both in structure and in the types of objects that they list. More specific types of Gray codes are universal cycles and overlap sequences. Universal cycles are Gray codes on a set of strings of length n in which the first n-1 letters of one object are the same as the last n-1 letters of its predecessor in the listing. Overlap sequences allow this overlap to vary between 1 and n-1. Some of our main contributions to the areas of Gray codes and universal cycles include a new Gray code algorithm for fixed-weight m-ary words, and results on the existence of universal cycles for weak orders on [n]. Overlap cycles are a relatively new structure with very few published results. We prove the existence of s-overlap cycles for k-permutations of [n], which has been an open research problem for several years, as well as constructing 1-overlap cycles for Steiner triple and quadruple systems of every order. Also included are various other results of a similar nature covering other structures such as binary strings, m-ary strings, subsets, permutations, weak orders, partitions, and designs. These listing structures lend themselves readily to some classes of combinatorial objects, such as binary n-tuples and m-ary n-tuples. Others require more work to find an appropriate structure, such as k-subsets of an n-set, weak orders, and designs. Still more require a modification in the representation of the objects to fit these structures, such as partitions. Determining when and how we can fit these sets of objects into our three listing structures is the focus of this dissertation.
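As a concrete instance of a minimal-change listing (the classical binary reflected Gray code, included here only for illustration; it is not one of the new constructions of this dissertation):

```python
def gray_code(n):
    """Binary reflected Gray code: list all 2^n binary strings of length n
    so that consecutive strings differ in exactly one position."""
    if n == 0:
        return [""]
    prev = gray_code(n - 1)
    # Prefix the previous listing with 0, then its reversal with 1;
    # the reversal guarantees the single-bit change at the seam.
    return ["0" + s for s in prev] + ["1" + s for s in reversed(prev)]

print(gray_code(3))
# ['000', '001', '011', '010', '110', '111', '101', '100']
```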
Contributors: Horan, Victoria E (Author) / Hurlbert, Glenn H. (Thesis advisor) / Czygrinow, Andrzej (Committee member) / Fishel, Susanna (Committee member) / Colbourn, Charles (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

In Iwasawa theory, one studies how an arithmetic or geometric object grows as its field of definition varies over certain sequences of number fields. For example, let $F/\mathbb{Q}$ be a finite extension of fields, and let $E:y^2 = x^3 + Ax + B$ with $A,B \in F$ be an elliptic curve. If $F = F_0 \subseteq F_1 \subseteq F_2 \subseteq \cdots \subseteq F_\infty = \bigcup_{i=0}^\infty F_i$, one may be interested in properties like the ranks and torsion subgroups of the increasing family of curves $E(F_0) \subseteq E(F_1) \subseteq \cdots \subseteq E(F_\infty)$. The main technique for studying this sequence of curves when $\Gal(F_\infty/F)$ has a $p$-adic analytic structure is to use the action of $\Gal(F_n/F)$ on $E(F_n)$ and the Galois cohomology groups attached to $E$, i.e., the Selmer and Tate-Shafarevich groups. As $n$ varies, these Galois actions fit into a coherent family, and taking a direct limit one obtains a short exact sequence of modules $$0 \longrightarrow E(F_\infty) \otimes(\mathbb{Q}_p/\mathbb{Z}_p) \longrightarrow \Sel_E(F_\infty)_p \longrightarrow \Sha_E(F_\infty)_p \longrightarrow 0 $$ over the profinite group algebra $\mathbb{Z}_p[[\Gal(F_\infty/F)]]$. When $\Gal(F_\infty/F) \cong \mathbb{Z}_p$, this ring is isomorphic to $\Lambda = \mathbb{Z}_p[[T]]$, and the $\Lambda$-module structures of $\Sel_E(F_\infty)_p$ and $\Sha_E(F_\infty)_p$ encode all the information about the curves $E(F_n)$ as $n$ varies. In this dissertation, it is shown how one can classify certain finitely generated $\Lambda$-modules with fixed characteristic polynomial $f(T) \in \mathbb{Z}_p[T]$ up to isomorphism. The results yield explicit generators for each module up to isomorphism. As an application, it is shown how to identify the isomorphism class of $\Sel_E(\mathbb{Q}_\infty)_p$ in this explicit form, where $\mathbb{Q}_\infty$ is the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$, and $E$ is an elliptic curve over $\mathbb{Q}$ with good ordinary reduction at $p$, possessing the property that $E(\mathbb{Q})$ has no $p$-torsion.
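For background (a standard fact of Iwasawa theory, not a result of this dissertation): the classification problem above sits inside the structure theory of finitely generated torsion $\Lambda$-modules, each of which is pseudo-isomorphic to an elementary module, $$M \sim \bigoplus_{i} \Lambda/(p^{\mu_i}) \;\oplus\; \bigoplus_{j} \Lambda/(f_j(T)^{m_j}),$$ where the $f_j$ are distinguished irreducible polynomials; the characteristic polynomial fixed in the classification is, up to a unit, $p^{\sum_i \mu_i}\prod_j f_j(T)^{m_j}$.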
Contributors: Franks, Chase (Author) / Childress, Nancy (Thesis advisor) / Barcelo, Helene (Committee member) / Bremner, Andrew (Committee member) / Jones, John (Committee member) / Spielberg, Jack (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

This thesis attempts to answer two questions based upon the historical observation that $1^2 + 2^2 + \cdots + 24^2 = 70^2$. The first question considers changing the starting number of the left-hand side of the equation from 1 to any perfect square in the range 1 to 10000. For this question, I attempt to determine which perfect square to end the left-hand side of the equation with so that the right-hand side of the equation is a perfect square. Mathematically, Question #1 can be written as follows: given a positive integer $r$ with $1 \leq r \leq 100$, find all nontrivial solutions $(N, M)$, if any, of $r^2 + (r+1)^2 + \cdots + N^2 = M^2$ with $N, M \in \mathbb{Z}^+$. The second question considers changing the number of terms on the left-hand side of the equation to any fixed whole number in the range 1 to 100. For this question, I attempt to determine which perfect square to start the left-hand side of the equation with so that the right-hand side of the equation is a perfect square. Mathematically, Question #2 can be written as follows: given a positive integer $r$ with $1 \leq r \leq 100$, find all solutions $(u, v)$, if any, of $u^2 + (u+1)^2 + (u+2)^2 + \cdots + (u+r-1)^2 = v^2$ with $u, v \in \mathbb{Z}^+$. The two questions addressed by this thesis have been on the minds of many mathematicians for over 100 years. As a result of their efforts to obtain answers to these questions, a great deal of mathematics has been developed. This research was done to organize that mathematics into one easily accessible place. My findings on Question #1 can hopefully be used by future mathematicians to completely answer Question #1. In addition, my findings on Question #2 can hopefully be used by future mathematicians as they attempt to answer Question #2 for values of $r$ greater than 100.
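A direct brute-force check of both questions for small parameters (an illustration added here, not part of the thesis; the search bounds are arbitrary) can be written as follows:

```python
from math import isqrt

def is_square(n):
    r = isqrt(n)
    return r * r == n

# Question 1: for a fixed start r, find N > r with r^2 + ... + N^2 a perfect square.
def question1(r, n_max=100000):
    total = 0
    for n in range(r, n_max + 1):
        total += n * n
        if n > r and is_square(total):
            yield n, isqrt(total)

# Question 2: for a fixed length r, find starts u with
# u^2 + (u+1)^2 + ... + (u+r-1)^2 a perfect square.
def question2(r, u_max=100000):
    for u in range(1, u_max + 1):
        total = sum((u + k) ** 2 for k in range(r))
        if is_square(total):
            yield u, isqrt(total)

print(list(question1(1, 1000)))  # [(24, 70)] -- the classical 1^2 + ... + 24^2 = 70^2
print(list(question2(2, 1000)))  # consecutive squares summing to a square, e.g. (3, 5): 9 + 16 = 25
```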
Contributors: Roth, Sanford Gary (Author) / Bremner, Andrew (Thesis advisor) / Childress, Nancy E (Committee member) / Jones, John W. (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

In the modern world, with the ever-growing importance of technology, the challenge of information security is increasingly critical. Cryptographic algorithms used to encode information stored and transmitted over the internet must constantly improve as the methodology and technology for cyber attacks improve. RSA and elliptic curve cryptosystems such as ElGamal or Diffie-Hellman key exchange are often used as secure asymmetric cryptographic algorithms. However, quantum computing threatens the security of these algorithms. A relatively new algorithm based on isogenies between elliptic curves has been proposed in response to this threat. The new algorithm is thought to be quantum-resistant, as it uses isogeny walks instead of point addition to generate a shared secret key. In this paper we analyze this algorithm in an attempt to understand the theory behind it. A main goal is to create isogeny graphs to visualize the degree-2 and degree-3 isogeny walks that can be taken between supersingular elliptic curves over small fields, in order to get a better understanding of the workings and security of the algorithm.
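As a small, self-contained illustration of the objects these isogeny graphs are built from (this only enumerates supersingular curves over a small prime field by brute-force point counting; it is not the key-exchange algorithm itself, and in practice one would use a computer algebra system such as SageMath):

```python
def count_points(a, b, p):
    """Number of points on y^2 = x^3 + a*x + b over F_p, plus the point at infinity."""
    square_counts = {}
    for y in range(p):
        square_counts[y * y % p] = square_counts.get(y * y % p, 0) + 1
    total = 1  # point at infinity
    for x in range(p):
        rhs = (x ** 3 + a * x + b) % p
        total += square_counts.get(rhs, 0)
    return total

def supersingular_j_invariants(p):
    """j-invariants in F_p of supersingular curves y^2 = x^3 + ax + b.

    For p >= 5, such a curve is supersingular iff #E(F_p) = p + 1 (trace of
    Frobenius zero).  Only j-invariants lying in F_p are found this way; in
    general the supersingular j-invariants live in F_{p^2}."""
    js = set()
    for a in range(p):
        for b in range(p):
            disc = (4 * a ** 3 + 27 * b ** 2) % p
            if disc == 0:
                continue                     # singular cubic, skip
            if count_points(a, b, p) == p + 1:
                # j = 1728 * 4a^3 / (4a^3 + 27b^2) mod p
                j = 1728 * 4 * pow(a, 3, p) * pow(disc, -1, p) % p
                js.add(j)
    return js

print(supersingular_j_invariants(11))
# The supersingular locus is tiny (roughly p/12 j-invariants);
# for p = 11 this should be {0, 1}, i.e. j = 0 and j = 1728 mod 11.
```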
Contributors: Loucks, Sara J (Author) / Jones, John (Thesis director) / Bremner, Andrew (Committee member) / Computer Science and Engineering Program (Contributor) / School of Film, Dance and Theatre (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Pierre de Fermat, an amateur mathematician, set upon the mathematical world a challenge so difficult that it took 357 years to prove. This challenge, known as Fermat's Last Theorem, can be expressed in many different ways, but it simply states that for $n > 2$, the equation $x^n + y^n = z^n$ has no nontrivial integer solution. The first attempts at a proof came from mathematicians using the essentially elementary tools provided by number theory; the notable contributors were Leonhard Euler, Sophie Germain, and Ernst Kummer. Kummer was the last mathematician to try to use essentially elementary number theory as the basis for a proof, and he concluded that Fermat's Last Theorem could not be solved using number theory alone; Kummer claimed that greater mathematics would have to be developed in order to prove this ever-growing mystery. In the 20th century, two Japanese mathematicians, Goro Shimura and Yutaka Taniyama, shocked the world by conjecturing that two highly distinct branches of mathematics, elliptic curves and modular forms, were in fact deeply connected. Gerhard Frey then took this claim, the Taniyama-Shimura conjecture, to the extreme by stating that it was the necessary link to finally prove Fermat's Last Theorem. Frey's statement was then validated by Kenneth Ribet, who proved that the Frey curve could not be modular. With the final piece of the puzzle in place, the English mathematician Andrew Wiles embarked on a seven-year journey to prove Fermat's Last Theorem, as the proof of the theorem now rested in his area of expertise: elliptic curves. In 1994, Wiles published his complete proof of Fermat's Last Theorem, putting an end to one of mathematics' greatest mysteries.
Contributors: Boyadjian, Hoveeg Krikor (Author) / Bremner, Andrew (Thesis director) / Jones, John (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12