Matching Items (1,326)

Description

A Chief Audit Executive (CAE) is the leader of a company’s internal audit function. Because there is no mandated disclosure requirement for the internal audit structure, little is understood about the influence of a CAE on a company. Following the logic that a CAE disclosed in SEC filings is more influential in a company’s oversight function, I identify an influential CAE using the disclosure of the role. I then examine the association between an influential CAE and monitoring outcomes. Using data hand-collected from SEC filings for S&P 1500 companies from 2004 to 2015, I find that companies with an influential CAE are generally larger, older, and have a larger corporate board. More importantly, I find that an influential CAE in NYSE-listed companies is associated with higher internal control quality. This association is stronger for companies that reference a CAE’s direct interaction with the audit committee. This study provides an initial investigation into a common but little-understood position in corporate oversight.
Contributors: Zhang, Wei (Author) / Lamoreaux, Phillip (Thesis advisor) / Kaplan, Steve (Committee member) / Li, Yinghua (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

With the emergence of the edge computing paradigm, many applications such as image recognition and augmented reality require machine learning (ML) and artificial intelligence (AI) tasks to be performed on edge devices. Most AI and ML models are large and computationally heavy, whereas edge devices are usually equipped with limited computational and storage resources. Such models can be compressed and reduced in order to be placed on edge devices, but they may lose capability and may not generalize and perform as well as large models. Recent works have used knowledge transfer techniques to transfer information from a large network (termed the teacher) to a small one (termed the student) in order to improve the performance of the latter. This approach seems promising for learning on edge devices, but a thorough investigation of its effectiveness is lacking.

The purpose of this work is to provide an extensive study of the performance (in terms of both accuracy and convergence speed) of knowledge transfer, considering different student-teacher architectures, datasets, and techniques for transferring knowledge from teacher to student.

A good performance improvement is obtained by transferring knowledge from both the intermediate layers and the last layer of the teacher to a shallower student. But other architectures and transfer techniques do not fare so well, and some of them even lead to negative performance impact. For example, a smaller and shallower network trained with knowledge transfer on Caltech 101 achieved a significant improvement of 7.36% in accuracy and converged 16 times faster compared to the same network trained without knowledge transfer. On the other hand, a smaller network that is thinner than the teacher network performed worse, with an accuracy drop of 9.48% on Caltech 101, even with knowledge transfer.
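
As background for readers unfamiliar with the technique, the sketch below shows a generic last-layer knowledge transfer (distillation) step in PyTorch, in which a frozen teacher's softened outputs guide a smaller student. It is an illustrative sketch under assumed `teacher` and `student` models, not the specific architectures or transfer techniques evaluated in this thesis.

```python
# Generic teacher-student knowledge transfer (distillation) sketch; assumes
# PyTorch and externally defined `teacher`/`student` models and an optimizer.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend a softened-teacher KL term with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale to keep gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def train_step(student, teacher, batch, optimizer):
    images, labels = batch
    with torch.no_grad():            # the teacher is frozen during transfer
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```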
Contributors: Sistla, Ragini (Author) / Zhao, Ming (Thesis advisor, Committee member) / Li, Baoxin (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

This study investigates the relation between credit supply competition among banks and their clients’ conditional accounting conservatism (i.e., asymmetric timely loss recognition). The Interstate Banking and Branching Efficiency Act (IBBEA) of 1994 permits banks and bank holding companies to expand their business across state lines, introducing a positive shock to credit supply competition in the banking industry. The increase in credit supply competition weakens banks’ bargaining power in the negotiation process, which in turn may weaken their ability to demand conservative financial reporting from borrowers. Consistent with this prediction, results show that firms report less conservatively after the IBBEA is passed in the states where they are headquartered. The effect of the IBBEA on conditional conservatism is stronger for firms in states with a greater increase in competition among banks, firms whose operations are more concentrated in their headquarters states, firms with greater financial constraints, and firms subject to less external monitoring. Robustness tests confirm that the observed decline in conditional conservatism is causally related to the passage of the IBBEA. Overall, this study highlights the impact of credit supply competition on financial reporting practices.
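
For context, conditional conservatism of the kind examined here is typically measured with the Basu (1997) asymmetric timeliness regression; the standard specification is shown below as background, and the study's exact design may differ.

```latex
% Basu (1997) asymmetric timeliness regression (standard form)
\frac{E_{it}}{P_{i,t-1}} = \beta_0 + \beta_1 D_{it} + \beta_2 R_{it}
                         + \beta_3 D_{it} R_{it} + \varepsilon_{it}
```

Here E is earnings, P the beginning-of-period price, R the annual return, and D an indicator for a negative return; a positive coefficient on the interaction term (beta_3) captures timelier loss recognition, so a decline in conditional conservatism shows up as a smaller beta_3.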
Contributors: Huang, Wei (Author) / Li, Yinghua (Thesis advisor) / Huang, Xiaochuan (Committee member) / Kaplan, Steve (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Recent trends in big data storage systems show a shift from disk-centric models to memory-centric models. The primary challenges faced by these systems are speed, scalability, and fault tolerance. It is interesting to investigate the performance of these two models with respect to some big data applications. This thesis studies the performance of Ceph (a disk-centric model) and Alluxio (a memory-centric model) and evaluates whether a hybrid model provides any performance benefits with respect to big data applications. To this end, an application, TechTalk, is created that uses Ceph to store data and Alluxio to perform data analytics. The functionalities of the application include offline lecture storage, live recording of classes, content analysis, and reference generation. The knowledge base of videos is constructed by analyzing the offline data using machine learning techniques. This training dataset provides the knowledge needed to construct the index of an online stream. The indexed metadata enables students to search, view, and access the relevant content. The performance of the application is benchmarked in different use cases to demonstrate the benefits of the hybrid model.
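
As a rough illustration of the hybrid model (not the TechTalk application's actual code), the sketch below reads lecture data through Alluxio from a Spark job while Ceph is assumed to be the persistent under-store mounted behind Alluxio; the master address, mount layout, and paths are assumptions, and the Alluxio client library must be on Spark's classpath.

```python
# Hybrid-model sketch: analytics read through Alluxio (memory-centric), with
# Ceph assumed as the mounted under-store. Hostnames and paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lecture-index-sketch").getOrCreate()

# Offline lecture transcripts persisted in Ceph, served via the Alluxio namespace.
transcripts = spark.read.text("alluxio://alluxio-master:19998/lectures/transcripts/")

# Toy "content analysis": count segments mentioning a keyword for a crude index entry.
hits = transcripts.filter(transcripts.value.contains("neural network")).count()
print(f"transcript lines mentioning the keyword: {hits}")
```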
Contributors: Nagendra, Shilpa (Author) / Huang, Dijiang (Thesis advisor) / Zhao, Ming (Committee member) / Maciejewski, Ross (Committee member) / Chung, Chun-Jen (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Compartmentalizing access to content, be it websites accessed in a browser or documents and applications accessed outside the browser, is an established method for protecting information integrity [12, 19, 21, 60]. Compartmentalization solutions change the user experience, introduce performance overhead, and provide varying degrees of security. Striking a balance between usability and security is not an easy task. If the usability aspects are neglected or sacrificed in favor of more security, the resulting solution will have a hard time being adopted by end users. Usability is affected by factors including (1) the generality of the solution in supporting various applications, (2) the type of changes required, (3) the performance overhead introduced by the solution, and (4) how much of the user experience is preserved. Security is affected by factors including (1) the attack surface of the compartmentalization mechanism, and (2) the security decisions offloaded to the user. This dissertation evaluates existing solutions based on the above factors and presents two novel compartmentalization solutions that are arguably more practical than their existing counterparts.

The first solution, called FlexICon, is an attractive alternative in the design space of compartmentalization solutions on the desktop. FlexICon allows for the creation of a large number of containers with a small memory footprint and low disk overhead, achieved through lightweight virtualization based on Linux namespaces. FlexICon uses two mechanisms to reduce user mistakes: 1) a trusted file dialog for selecting files to open and launching them in the appropriate containers, and 2) a secure URL redirection mechanism that detects the user’s intent and opens the URL in the proper container. FlexICon also provides a language for specifying the access constraints that should be enforced by the various containers.
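
For a flavor of the namespace-based isolation FlexICon builds on, the snippet below is a minimal sketch (not the dissertation's implementation) that moves the current process into fresh mount, UTS, and network namespaces via unshare(2) and launches a program there; it assumes Linux and sufficient privileges (root, or an additional user namespace).

```python
# Minimal Linux-namespace isolation sketch; illustrative only, not FlexICon.
import ctypes
import subprocess

# Clone flags from <linux/sched.h>.
CLONE_NEWNS  = 0x00020000   # new mount namespace
CLONE_NEWUTS = 0x04000000   # new hostname namespace
CLONE_NEWNET = 0x40000000   # new (initially empty) network namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def launch_isolated(argv):
    """Run argv inside fresh mount/UTS/network namespaces (needs privileges)."""
    if libc.unshare(CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWNET) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed")
    # The new network namespace has no configured interfaces, so the launched
    # program starts out with no connectivity -- a crude compartment.
    subprocess.run(argv, check=False)

if __name__ == "__main__":
    launch_isolated(["/bin/sh", "-c", "hostname && ip link"])
```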

The second solution, called Auto-FBI, deals with web-based attacks by creating multiple instances of the browser and providing mechanisms for switching between the browser instances. The prototype implementation for Firefox and Chrome uses system call interposition to control the browser’s network access. Auto-FBI can be easily ported to other platforms due to its simple design and the ubiquity of system call interposition methods on all major desktop platforms.
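
The sketch below gives a flavor of policing a program's network access from its own process using a seccomp filter via the libseccomp Python bindings (the `seccomp` module, which must be installed). This is a much simpler in-process stand-in for the idea, not the external system call interposition mechanism Auto-FBI actually uses.

```python
# Illustration only: deny network system calls for a launched program using an
# in-process seccomp filter (libseccomp Python bindings). Auto-FBI interposes on
# system calls from outside the process; this is merely a simplified stand-in.
import errno
import subprocess
import seccomp

def run_without_network(argv):
    filt = seccomp.SyscallFilter(defaction=seccomp.ALLOW)  # allow everything...
    filt.add_rule(seccomp.ERRNO(errno.EACCES), "socket")   # ...except socket creation
    filt.add_rule(seccomp.ERRNO(errno.EACCES), "connect")  # and outbound connects
    filt.load()                                            # applies to this process
    # The filter is inherited across exec, so the child is network-restricted too.
    subprocess.run(argv, check=False)

if __name__ == "__main__":
    run_without_network(["curl", "-m", "3", "https://example.com"])
```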
Contributors: Zohrevandi, Mohsen (Author) / Bazzi, Rida A (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Doupe, Adam (Committee member) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Earnings management by listed companies has become a widespread and prominent problem in China’s capital market. In general, some companies use earnings management to dress up their financial statements and key indicators in order to meet the capital market’s requirements for listing or seasoned offerings, to facilitate mergers, acquisitions, and restructurings, or even to pursue the private interests of management, causing uninformed investors to suffer losses. Analyses generally show that privately owned enterprises in China’s stock market face more, and more severe, problems and pressures than other firms, and therefore objectively have stronger incentives for earnings management. At present, however, systematic research on earnings management by Chinese scholars has tended to focus on listed companies as a whole or on persistently loss-making firms; research on earnings management in private enterprises remains unsystematic, incomplete, and shallow, which hampers further improvement of earnings management supervision in China. Because of the particularities of Chinese private enterprises in their own governance and development incentives, their management and earnings management characteristics differ greatly from those of foreign listed companies. A deeper study of the distinctive governance characteristics of China’s private listed companies, and of their underlying influence on earnings management, can help regulators prescribe the right remedies, design more targeted supervisory measures, and further raise the level of regulation. It can also provide decision-making references for corporate decision makers and users of accounting information, and is therefore of great significance.

This study first reviews the prior literature on corporate governance structure and earnings management and, drawing on agency theory, insider control theory, and contract theory as commonly applied in capital markets, systematically examines the distinctive governance characteristics of China’s private listed companies and the mechanisms through which they influence earnings management. Using data on Chinese listed companies from 2015 to 2017, it then measures and compares the degree of earnings management of private and non-private enterprises based on the cross-sectional Jones model and finds that private enterprises exhibit a higher degree of earnings management. It further examines the distinctive governance structure characteristics of private companies along four dimensions and builds regression models showing that these characteristics indeed affect the degree of earnings management. Finally, using a modified Feltham-Ohlson valuation model, it tests the relation between earnings management and firm value in private listed companies and finds a significant correlation between the two.
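
For reference, the cross-sectional Jones model mentioned above estimates normal accruals from the regression below (typically within industry-years), with the residual serving as the discretionary-accruals proxy for the degree of earnings management; this is the standard formulation, shown only as background.

```latex
% Cross-sectional Jones model (standard form), estimated within industry-years
\frac{TA_{it}}{A_{i,t-1}} = \alpha_1 \frac{1}{A_{i,t-1}}
                          + \alpha_2 \frac{\Delta REV_{it}}{A_{i,t-1}}
                          + \alpha_3 \frac{PPE_{it}}{A_{i,t-1}}
                          + \varepsilon_{it}
```

Here TA is total accruals, A lagged total assets, ΔREV the change in revenues, and PPE gross property, plant, and equipment; the residual proxies for discretionary accruals.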
Contributors: Chen, Hui (Author) / Shen, Wei (Thesis advisor) / Chang, Chun (Thesis advisor) / Huang, Xiaochuan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

The Internet of Things ecosystem has spawned a wide variety of embedded real-time systems that complicate the identification and resolution of bugs in software. Concurrent checkpointing provides a means to monitor application state, with the ability to replay the execution on like hardware and software, without holding off or delaying the execution of application threads. In this thesis, this is accomplished by monitoring the application's physical memory using a soft-dirty page tracker and measuring the various types of overhead incurred by concurrent checkpointing. The solution presented is an advancement of Checkpoint/Restore In Userspace (CRIU) that eliminates the large stalls and parasitic operation of each successive checkpoint. Impact and performance are measured using the PARSEC 3.0 benchmark suite and the 4.11.12-rt16+ Linux kernel on a MinnowBoard Turbot quad-core board.
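
As background on the soft-dirty mechanism the thesis builds on, the sketch below shows the standard Linux interface: writing 4 to /proc/&lt;pid&gt;/clear_refs clears the soft-dirty bits, and bit 55 of each 64-bit /proc/&lt;pid&gt;/pagemap entry reports pages written since the last clear. It is a minimal illustration of that interface, not the thesis's checkpointing tool, and may require elevated privileges depending on kernel settings.

```python
# Minimal sketch of the Linux soft-dirty page-tracking interface that an
# incremental, concurrent checkpointer can build on; illustration only.
import os

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")
SOFT_DIRTY_BIT = 1 << 55          # bit 55 of a 64-bit pagemap entry

def clear_soft_dirty(pid):
    """Reset soft-dirty bits so that subsequent writes can be detected."""
    with open(f"/proc/{pid}/clear_refs", "w") as f:
        f.write("4")

def soft_dirty_pages(pid, start, length):
    """Addresses of pages in [start, start+length) written since the last clear.
    The caller would take start/length from the target's /proc/<pid>/maps."""
    dirty = []
    with open(f"/proc/{pid}/pagemap", "rb") as pagemap:
        for off in range(0, length, PAGE_SIZE):
            vaddr = start + off
            pagemap.seek((vaddr // PAGE_SIZE) * 8)      # one 8-byte entry per page
            entry = int.from_bytes(pagemap.read(8), "little")
            if entry & SOFT_DIRTY_BIT:
                dirty.append(vaddr)
    return dirty

# Typical incremental-checkpoint loop: clear the bits, let the application run,
# then copy only the pages reported dirty.
```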
Contributors: Prinke, Michael L (Author) / Lee, Yann-Hang (Thesis advisor) / Shrivastava, Aviral (Committee member) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Blockchain scalability is one of the issues that concern its current adopters. The currently popular blockchains were initially designed with imperfections that introduce fundamental bottlenecks, limiting their ability to achieve higher throughput and lower latency.

One of the major bottlenecks in existing blockchain technologies is block propagation speed. Faster block propagation enables a miner to reach a majority of the network within a time constraint, leading to a lower orphan rate and better profitability. In order to attain a throughput that could compete with current state-of-the-art transaction processing, while keeping block intervals the same as today, a 24.3-gigabyte block would be required every 10 minutes at an average transaction size of 500 bytes, which translates to 48,600,000 transactions every 10 minutes, or about 81,000 transactions per second.

In order to synchronize such large blocks across the network faster while maintaining consensus by keeping the orphan rate below 50%, this thesis proposes to aggregate partial block data from multiple nodes using digital fountain codes. The advantage of using a fountain code is that all connected peers can send parts of the data in encoded form. When the receiving peer has enough data, it decodes the information to reconstruct the block. Because peers send only partial information, the data can be relayed over UDP instead of TCP, improving upon the propagation speed of current blockchains. The fountain codes applied in this research are Raptor codes, which allow construction of a practically unlimited number of encoding symbols. When applied to blockchains, the approach increases the success rate of block delivery in the presence of decode failures.
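
To illustrate the fountain-code idea in the abstract (peers send randomly encoded pieces, and the receiver decodes once it has gathered enough of them), the sketch below implements a toy LT-style code with a peeling decoder over fixed-size chunks. The thesis uses Raptor codes, which add a precoding stage and a proper degree distribution; this simplified version, with a naive uniform degree distribution, is for intuition only.

```python
# Toy LT-style fountain code with a peeling decoder; illustrative only.
import os
import random

CHUNK = 64  # bytes per source chunk (tiny, for demonstration)

def split(block):
    block += b"\x00" * ((-len(block)) % CHUNK)          # pad to a chunk boundary
    return [block[i:i + CHUNK] for i in range(0, len(block), CHUNK)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks, rng):
    """One encoded symbol: the XOR of a random subset of source chunks."""
    idx = rng.sample(range(len(chunks)), rng.randint(1, len(chunks)))
    data = chunks[idx[0]]
    for i in idx[1:]:
        data = xor(data, chunks[i])
    return set(idx), data

def decode(symbols, k):
    """Peeling decoder: repeatedly resolve symbols with exactly one unknown chunk."""
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for idx, data in symbols:
            unknown = idx - set(recovered)
            if len(unknown) != 1:
                continue
            for i in idx & set(recovered):               # strip chunks already known
                data = xor(data, recovered[i])
            recovered[unknown.pop()] = data
            progress = True
    if len(recovered) < k:
        return None                                      # need more symbols
    return b"".join(recovered[i] for i in range(k))

block = os.urandom(10 * CHUNK)                           # stand-in for block data
chunks, rng, symbols, decoded = split(block), random.Random(7), [], None
while decoded is None:                                   # "receive" until decodable
    symbols.extend(encode(chunks, rng) for _ in range(len(chunks)))
    decoded = decode(symbols, len(chunks))
print("block recovered:", decoded[:len(block)] == block)
```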
Contributors: Chawla, Nakul (Author) / Boscovic, Dragan (Thesis advisor) / Candan, Kasim S (Thesis advisor) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

The current trend of interconnected devices, or the Internet of Things (IoT), has led to the popularization of single board computers (SBCs). This is primarily due to their form factor and low price. It has also led to unique networks of devices that can have unstable network connections and minimal processing power. Many parallel programming libraries are intended for use in high performance computing (HPC) clusters. Unlike the IoT environment described, HPC clusters generally have very consistent network speeds and topologies. There are a significant number of software choices that make up what is referred to as the HPC stack, or parallel processing stack. My thesis focused on building an HPC stack that runs on the SBC named the Raspberry Pi. The intention in building this Raspberry Pi cluster is to research the performance of MPI implementations in an IoT environment, which influenced the design choices of the cluster. This thesis is a compilation of my research efforts in creating the cluster as well as an evaluation of the software chosen to create the parallel processing stack.
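
As an example of the kind of MPI program such a cluster would run, the sketch below times a simple message ring using the mpi4py bindings over an installed MPI implementation; it is a generic benchmark-style sketch with placeholder run options, not code or configuration from the thesis.

```python
# Simple MPI ring-latency probe in the spirit of benchmarking an SBC cluster.
# Assumes mpi4py over MPICH or Open MPI; run e.g. with
#   mpiexec -n 4 -hostfile hosts python ring_probe.py
# (hostnames and file names are placeholders).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

payload = bytearray(1024)                 # 1 KiB message
start = MPI.Wtime()

# Pass the payload once around a ring of processes.
if rank == 0:
    comm.Send(payload, dest=(rank + 1) % size)
    comm.Recv(payload, source=(rank - 1) % size)
else:
    comm.Recv(payload, source=(rank - 1) % size)
    comm.Send(payload, dest=(rank + 1) % size)

elapsed = MPI.Wtime() - start
if rank == 0:
    print(f"ring trip over {size} processes took {elapsed * 1e3:.3f} ms")
```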
Contributors: O'Meara, Braedon Richard (Author) / Meuth, Ryan (Thesis director) / Dasgupta, Partha (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

This thesis discusses three recent optimization problems that seek to reduce disease spread on arbitrary graphs by deleting edges, and it discusses three approximation algorithms developed for these problems. Important definitions are presented, including the Linear Threshold and Triggering Set models and the set function properties of submodularity and monotonicity. Important results regarding the Linear Threshold model and computation of the influence function are also presented along with proof sketches. The three main problems are formally presented, and NP-hardness results along with proof sketches are given where applicable. The first problem seeks to reduce spread of infection over the Linear Threshold process by making use of an efficient tree data structure. The second problem seeks to reduce the spread of infection over the Linear Threshold process while preserving the PageRank distribution of the input graph. The third problem seeks to minimize the spectral radius of the input graph. The algorithms designed for these problems are described in writing and with pseudocode, and their approximation bounds are stated along with time complexities. The discussion of these algorithms considers how they could see real-world use, noting the challenges they do or do not overcome. Two related works, one presenting an edge-deletion disease-spread reduction problem over a deterministic threshold process and the other considering a graph modification problem aimed at minimizing worst-case disease spread, are compared with the three main works to provide interesting perspectives. Furthermore, a new problem is proposed that could avoid some issues faced by the three main problems, and directions for future work are suggested.
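
For concreteness, the sketch below simulates spread under the Linear Threshold model on a small directed graph (a node activates once the summed weights of its active in-neighbors reach its random threshold) and shows how deleting an edge reduces the expected spread. This is a generic illustration of the model, not the approximation algorithms the thesis analyzes; the example graph and weights are made up.

```python
# Toy Linear Threshold (LT) diffusion simulation; illustrative only, not the
# thesis's edge-deletion algorithms. Incoming edge weights per node sum to <= 1.
import random

def linear_threshold_spread(edges, nodes, seeds, rng):
    """edges: dict (u, v) -> weight of directed influence u -> v."""
    thresholds = {v: rng.random() for v in nodes}   # uniform random thresholds
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in nodes - active:
            influence = sum(w for (u, t), w in edges.items() if t == v and u in active)
            if influence >= thresholds[v]:
                active.add(v)
                changed = True
    return active

def expected_spread(edges, nodes, seeds, trials=2000, seed=1):
    """Monte Carlo estimate of the expected number of activated nodes."""
    rng = random.Random(seed)
    return sum(len(linear_threshold_spread(edges, nodes, seeds, rng))
               for _ in range(trials)) / trials

nodes = {0, 1, 2, 3, 4}
edges = {(0, 1): 0.6, (0, 2): 0.5, (1, 3): 0.7, (2, 3): 0.3, (3, 4): 0.9}
pruned = dict(edges)
del pruned[(1, 3)]                                  # "delete" one edge

print("expected spread before deletion:", expected_spread(edges, nodes, {0}))
print("expected spread after deletion: ", expected_spread(pruned, nodes, {0}))
```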
Contributors: Stanton, Andrew Warren (Author) / Richa, Andrea (Thesis director) / Czygrinow, Andrzej (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05