This thesis explores methods to augment automated spatial classification by incorporating interactive machine learning into the cluster-creation step. First, this thesis explores the design space for spatiotemporal analysis through the development of a comprehensive data wrangling and exploratory data analysis platform. Second, this system is augmented with a novel method for evaluating the visual impact of edge cases in multivariate geographic projections. Finally, system features and functionality are demonstrated through a series of case studies, with key features including similarity analysis, multivariate clustering, and novel visual support for cluster comparison.
First, I propose a surface fluid registration system, which extends traditional image fluid registration to surfaces. With surface conformal parameterization, the complexity of the proposed registration formula is greatly reduced compared to prior methods. Inverse consistency is also incorporated to drive a symmetric correspondence between surfaces. After registration, multivariate tensor-based morphometry (mTBM) is computed to measure local shape deformations. The algorithm was applied to study hippocampal atrophy associated with Alzheimer's disease (AD).
Next, I propose a ventricular surface registration algorithm based on hyperbolic Ricci flow, which computes a global conformal parameterization for each ventricular surface without introducing any singularity. Furthermore, in the parameter space, unique hyperbolic geodesic curves are introduced to guide consistent correspondences across subjects, a technique called geodesic curve lifting. A tensor-based morphometry (TBM) statistic is computed from the registration to measure shape changes. This algorithm was applied to study ventricular enlargement in mild cognitive impairment (MCI) converters.
Finally, a new shape index, the hyperbolic Wasserstein distance, is introduced. This algorithm computes the Wasserstein distance between general topological surfaces as a shape similarity measure. It is based on hyperbolic Ricci flow, the hyperbolic harmonic map, and the optimal mass transportation map, which is extended to hyperbolic space. This method fills a gap in the Wasserstein distance literature, where prior work dealt only with images or genus-0 closed surfaces. The algorithm was applied in an AD vs. control cortical shape classification study and achieved a promising accuracy rate.
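The thesis computes the Wasserstein distance between hyperbolic surfaces; as a much simpler, purely illustrative point of reference, the classical one-dimensional Wasserstein-1 distance between two equal-size empirical samples reduces to the mean absolute difference of their sorted values, because the optimal transport plan in 1-D is monotone. The function and data below are not from the thesis.

```python
# Illustrative only: classical 1-D Wasserstein-1 distance between two
# equal-size empirical samples. The thesis generalizes the Wasserstein
# distance to hyperbolic surfaces; this sketch shows only the flat 1-D case.

def wasserstein_1d(xs, ys):
    """W1 between equal-size samples: mean |x_(i) - y_(i)| over sorted
    values, since the optimal 1-D transport plan pairs order statistics."""
    if len(xs) != len(ys):
        raise ValueError("samples must have equal size")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting a distribution by a constant c gives W1 = c.
a = [0.0, 1.0, 2.0, 3.0]
b = [0.5, 1.5, 2.5, 3.5]
print(wasserstein_1d(a, b))  # 0.5
```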
The research described in this dissertation consists of four main parts. First is a new circuit architecture of a differential threshold logic flipflop called PNAND. The PNAND gate is an edge-triggered multi-input sequential cell whose next-state function is a threshold function of its inputs. Second, a new approach, called hybridization, which replaces flipflops and parts of their logic cones with PNAND cells, is described. The resulting hybrid circuit, which consists of conventional logic cells and PNANDs, is shown to have significantly lower power consumption, smaller area, less standby power, and less power variation.
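A threshold function, the class of next-state functions a PNAND cell realizes, outputs 1 exactly when a weighted sum of the inputs meets a threshold. The sketch below is a generic illustration of that definition; the weights and threshold are invented, not taken from the PNAND design.

```python
# Illustrative sketch of a threshold (linearly separable) function:
# output 1 iff sum(w_i * x_i) >= T. Weights and threshold are made up.

def threshold_gate(inputs, weights, T):
    """Return 1 if the weighted sum of binary inputs meets threshold T."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= T else 0

# A 3-input majority gate is the threshold function with weights (1,1,1), T=2.
print(threshold_gate([1, 1, 0], [1, 1, 1], 2))  # 1
print(threshold_gate([1, 0, 0], [1, 1, 1], 2))  # 0
```

AND, OR, and majority are all threshold functions; XOR is the classic example that is not, which is why threshold cells complement rather than replace conventional logic.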
Third, a new architecture of a field programmable array, called the field programmable threshold logic array (FPTLA), in which the standard lookup table (LUT) is replaced by a PNAND, is described. The FPTLA is shown to have as much as 50% lower energy-delay product than a conventional FPGA, as evaluated with the well-known FPGA modeling tool VPR.
Fourth, a novel clock skewing technique that makes use of the completion detection feature of the differential-mode flipflops is described. This clock skewing method improves the area and power of ASIC circuits by increasing slack on timing paths. An additional advantage of this method is the elimination of hold-time violations on short paths.
Several circuit design methodologies such as retiming and asynchronous circuit design can use the proposed threshold logic gate effectively. Therefore, the use of threshold logic flipflops in conventional design methodologies opens new avenues of research towards more energy-efficient circuits.
Besides the common feature above, communities within a social network have two distinguishing characteristics: most are small, and many overlap. Unfortunately, many traditional algorithms have difficulty recognizing these small communities (often called the resolution limit problem) as well as overlapping communities.
In this work, two enhanced community detection techniques are proposed for re-working existing community detection algorithms to find small communities in social networks. One method is to modify the modularity measure within the framework of the traditional Newman-Girvan algorithm so that more small communities can be detected. The second method is to incorporate a preprocessing step into existing algorithms by changing edge weights inside communities. Both methods help improve community detection performance while maintaining or improving computational efficiency.
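The modularity measure that the first method modifies can be made concrete. For a partition of an undirected graph, Newman-Girvan modularity is Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j); the k_i k_j / 2m null-model term is what penalizes small communities and causes the resolution limit. The toy graph and partition below are invented for illustration.

```python
# Sketch of standard Newman-Girvan modularity Q for an undirected graph.
# The resolution limit arises from the k_i*k_j/(2m) null-model term below.

def modularity(edges, community):
    """Q = (1/2m) * sum over node pairs of (A_ij - k_i*k_j/2m) where
    i and j are in the same community."""
    m = len(edges)
    degree, adj = {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        adj[(u, v)] = adj.get((u, v), 0) + 1
        adj[(v, u)] = adj.get((v, u), 0) + 1
    q = 0.0
    for i in degree:
        for j in degree:
            if community[i] == community[j]:
                q += adj.get((i, j), 0) - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge; splitting at the bridge
# is the natural partition and scores Q = 5/14.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(round(modularity(edges, part), 3))  # 0.357
```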
A hashtag is a type of label or metadata tag used in social networks and micro-blogging services that makes it easier for users to find messages with a specific theme or content. The context of a tweet can be defined as a set of one or more hashtags. Users often do not use hashtags to tag their tweets, which leads to the problem of missing context for tweets. To address this problem, a statistical method was proposed that predicts the most likely hashtags based on the social circle of an originator.
In this thesis, we propose to improve on the existing context recovery system by limiting the candidate set of hashtags to those derived from the intimate circle of the originator rather than from every user in the originator's social network. This reduces computation, increases prediction speed, and scales the system to originators with large social networks, while preserving most of the prediction accuracy. We also propose to derive candidate hashtags not only from the social network of the originator but also from the content of the tweet. We further propose to learn personalized statistical models according to the adoption patterns of different originators. This helps not only in identifying a personalized candidate set of hashtags based on the social circle and the content of the tweet, but also in customizing the hashtag adoption pattern to the originator of the tweet.
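The candidate-restriction idea can be sketched in a few lines: instead of scoring hashtags from the whole network, count only the tags used by the originator's intimate circle and keep the most frequent ones as candidates. The function name, circle data, and tweets below are invented for illustration and are not the thesis's actual model.

```python
from collections import Counter

# Hypothetical sketch: build the candidate hashtag set from the
# originator's intimate circle only, ranked by usage frequency.
# The circle's tweets below are invented example data.

def candidate_hashtags(intimate_circle_tweets, top_k=3):
    """Count hashtags across the intimate circle's (tokenized) tweets
    and return the top_k most frequent tags as the candidate set."""
    counts = Counter(tok for tweet in intimate_circle_tweets
                         for tok in tweet
                         if tok.startswith('#'))
    return [tag for tag, _ in counts.most_common(top_k)]

circle = [
    ['great', 'game', '#nba'],
    ['#nba', 'finals', '#basketball'],
    ['#nba', 'tonight'],
]
print(candidate_hashtags(circle))  # ['#nba', '#basketball']
```

A full system would then score this small candidate set with the statistical model, rather than scoring every hashtag in the network.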
Psychological assessments contain important diagnostic information and are central to therapeutic service delivery. Therapists' personal biases, invalid cognitive schemas, and emotional reactions can be expressed in the language of the assessments they compose, causing clients to be cast in an unfavorable light. Logically, the opinions of subsequent therapists may then be influenced by reading these assessments, resulting in negative attitudes toward clients, inaccurate diagnoses, adverse experiences for clients, and poor therapeutic outcomes. However, little current research exists that addresses this issue. This study analyzed the degree to which strength-based, deficit-based, and neutral language used in psychological assessments influenced the opinions of counselor trainees (N = 116). It was hypothesized that participants assigned to each type of assessment would describe the client using adjectives that closely conformed to the language used in the assessment they received. The hypothesis was confirmed (p < .001), indicating significant mean differences between all three groups. Limitations and implications of the study were identified, and suggestions for further research were discussed.
This study investigated how young adults communicate their decision to religiously disaffiliate to their parents. Both the context in which the religious disaffiliation conversation took place and the communicative behaviors used during the conversation were studied. Research questions and hypotheses were guided by Family Communication Patterns Theory and Face Negotiation Theory. A partially mixed sequential quantitative-dominant status design was employed to answer the research questions and hypotheses. Interviews were conducted with 10 young adults who had disaffiliated from either the Church of Jesus Christ of Latter-day Saints or the Watch Tower Society. During the interviews, the survey instrument was refined; ultimately, it was completed by 298 religiously disaffiliated young adults. For the religious disaffiliation conversation's context, results indicate that disaffiliated Jehovah's Witnesses had higher conformity orientations than disaffiliated Latter-day Saints. Additionally, disaffiliated Jehovah's Witnesses experienced more stress than disaffiliated Latter-day Saints. Planning the conversation in advance did make the disaffiliation conversation less stressful for young adults. Furthermore, the analysis found that having three to five conversations reduced stress significantly more than having one or two conversations. For the communicative behaviors during the religious disaffiliation conversation, few differences were found in the prevalence of facework behaviors between the two groups. Of the 14 facework behaviors, four were used more often by disaffiliated Jehovah's Witnesses than disaffiliated Latter-day Saints: abuse, passive aggressive, pretend, and defend self. In terms of effectiveness, the top five facework behaviors were talk about the problem, consider the other, have a private discussion, remain calm, and defend self.
Overall, this study begins the conversation on how religious disaffiliation occurs between young adults and their parents and extends Family Communication Patterns Theory and Face Negotiation Theory to a new context.
Military couples' communication during deployment: a proposed expansion of affection exchange theory
In this thesis, we improve intent classification and slot filling in virtual voice agents through automatic data augmentation. Spoken Language Understanding systems face the issue of data sparsity: it is hard for human-created training samples to represent all the patterns in a language. Due to this lack of relevant data, deep learning methods are unable to generalize the Spoken Language Understanding model. This thesis presents a way to overcome data sparsity in deep learning approaches to Spoken Language Understanding tasks. We describe the limitations of current intent classifiers and how the proposed algorithm uses existing knowledge bases to overcome them. The method helps create a more robust intent classifier and slot filling system.
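The knowledge-base-driven augmentation idea can be sketched minimally: take a templated training utterance, and substitute its slot value with alternatives drawn from a knowledge base to generate new samples. The function, template syntax, and knowledge-base entries below are invented for illustration and are not the thesis's actual algorithm.

```python
# Hypothetical sketch of knowledge-base-driven data augmentation for
# slot filling: expand one templated utterance into many training samples
# by substituting slot values. Template syntax and KB entries are made up.

def augment(template, slot, values):
    """Return one utterance per alternative slot value, filling the
    {slot} placeholder in the template."""
    return [template.replace('{' + slot + '}', v) for v in values]

kb_cities = ['Phoenix', 'Tempe', 'Mesa']  # invented knowledge-base entries
samples = augment('book a flight to {city}', 'city', kb_cities)
print(samples[0])    # book a flight to Phoenix
print(len(samples))  # 3
```

Each generated utterance keeps the same intent label ("book_flight", say) and a slot annotation for the substituted span, so the augmented set trains both the classifier and the slot filler.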