Matching Items (239)
Description
Oxidative aging is an important factor in the long term performance of asphalt pavements. Oxidation and the associated stiffening can lead to cracking, which in turn can lead to the functional and structural failure of the pavement system. Therefore, a greater understanding of the nature of oxidative aging in asphalt pavements can potentially be of great importance in estimating the performance of a pavement before it is constructed. Of particular interest are the effects of aging on asphalt rubber pavements because, as a newer technology, few asphalt rubber pavement sections have been evaluated over their full service life. This study endeavors to shed some light on this topic and includes three experimental programs on the aging of asphalt rubber binders and mixtures. The first phase addresses aging in asphalt rubber binders and their virgin bases. The binders were subjected to various aging conditions and then tested for viscosity. The change in viscosity was analyzed, and it was found that asphalt rubber binders exhibited less long term aging. The second phase looks at aging in a laboratory environment, including both a comparison of accelerated oxidative aging techniques and aging effects that occur during long term storage. Dynamic modulus was used as a tool to assess the aging of the tested materials. It was found that aging materials in a compacted state is ideal, while aging in a loose state is unrealistic. Results not only showed a clear distinction between aged and unaged material but also showed that the effects of aging on asphalt rubber mixes are highly dependent on temperature; lower temperatures induce relatively minor stiffening while higher temperatures promote much more significant aging effects. The third experimental program is a field study that builds upon a previous study of pavement test sections. Field pavement samples were taken after 7 years in service and tested for dynamic modulus and beam fatigue. As with the laboratory aging, the dynamic modulus samples show less stiffening at low temperatures and more at higher temperatures. Beam fatigue testing showed not only stiffening but also more brittle behavior.
Contributors: Reed, Jordan (Author) / Kaloush, Kamil (Thesis advisor) / Mamlouk, Michael (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
In the middle of the 20th century in the United States, transportation and infrastructure development became a priority on the national agenda, instigating the development of mathematical models that would predict transportation network performance. Approximately 40 years later, transportation planning models again became a national priority, this time instigating the development of highly disaggregate activity-based traffic models called microsimulations. These models predict the travel on a network at the level of the individual decision-maker, but do so with a large computational complexity and processing time requirement. The vast resources and steep learning curve required to integrate microsimulation models into the general transportation plan have deterred planning agencies from incorporating these tools. By researching the stochastic variability in the results of a microsimulation model with varying random number seeds, this paper evaluates the number of simulation trials necessary, and therefore the computational effort, for a planning agency to reach stable model outcomes. The microsimulation tool used to complete this research is the Transportation Analysis and Simulation System (TRANSIMS). The requirements for initiating a TRANSIMS simulation are described in the paper. Two analysis corridors are chosen in the Metropolitan Phoenix Area, and the roadway performance characteristics of volume, vehicle-miles of travel, and vehicle-hours of travel are examined in each corridor under both congested and uncongested conditions. Both congested and uncongested simulations are completed in twenty trials, each with a unique random number seed. Performance measures are averaged for each trial, providing a distribution of average performance measures with which to test the stability of the system. The results of this research show that the variability in outcomes increases with increasing congestion. Although twenty trials are sufficient to achieve stable solutions for the uncongested state, convergence in the congested state is not achieved. These results indicate that a highly congested urban environment requires more than twenty simulation runs for each tested scenario before reaching a solution that can be assumed to be stable. The computational effort needed for this type of analysis is something that transportation planning agencies should take into consideration before beginning a traffic microsimulation program.
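As a rough, hypothetical illustration of the stability question studied here (not the actual TRANSIMS workflow), the Python sketch below applies the standard sample-size formula for simulation experiments to per-seed averages of a performance measure; the corridor values and the 5% relative-error target are invented placeholders.

```python
import numpy as np
from scipy import stats

def runs_needed(trial_means, rel_error=0.05, confidence=0.95):
    """Estimate how many simulation runs are needed for the sample mean of a
    performance measure (e.g. corridor VMT) to fall within +/- rel_error of
    the true mean at the given confidence level."""
    x = np.asarray(trial_means, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    t = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    # Classic sample-size formula: n >= (t * s / (rel_error * mean))^2
    return int(np.ceil((t * sd / (rel_error * mean)) ** 2))

# Hypothetical per-seed averages of vehicle-miles of travel for one corridor
uncongested = [41200, 41350, 41180, 41500, 41290, 41410]
congested   = [52300, 56800, 49900, 58200, 51100, 55400]
print(runs_needed(uncongested), runs_needed(congested))
```

Because the congested trials are assumed to be far more variable, the formula returns a much larger required number of runs for that case, consistent with the finding that twenty trials suffice only for the uncongested state.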
Contributors: Ziems, Sarah Elia (Author) / Pendyala, Ram M. (Thesis advisor) / Ahn, Soyoung (Committee member) / Kaloush, Kamil (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

Human activity recognition is the task of identifying a person’s movement from sensors in a wearable device, such as a smartphone, smartwatch, or a medical-grade device. A well-suited method for this task is machine learning, the study of algorithms that learn and improve on their own with the help of massive amounts of useful data. These classification models can accurately classify activities from the time-series data of accelerometers and gyroscopes. A significant way to improve the accuracy of these machine learning models is to preprocess the data, essentially augmenting it to make the identification of each activity, or class, easier for the model. On this topic, this paper explains the design of SigNorm, a new web application which lets users conveniently transform time-series data and view the effects of those transformations in a code-free, browser-based user interface. The second and final section explains my approach to a human activity recognition problem, which involves comparing a preprocessed dataset to an un-augmented one and comparing the differences in accuracy when a one-dimensional convolutional neural network is used to make classifications.
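As a hedged sketch (not the author's actual model or the SigNorm application), the PyTorch snippet below shows the general shape of a one-dimensional convolutional neural network for windowed accelerometer and gyroscope data; the channel count, window length, and number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HARConv1D(nn.Module):
    """Minimal 1-D CNN for windowed accelerometer/gyroscope data.
    Input shape: (batch, channels, timesteps), e.g. 6 channels x 128 samples."""
    def __init__(self, n_channels=6, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5), nn.ReLU(),
            nn.Dropout(0.5),
            nn.AdaptiveMaxPool1d(1),   # collapse the time axis to one value per filter
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        z = self.features(x).squeeze(-1)   # (batch, 64)
        return self.classifier(z)          # (batch, n_classes) class scores

# A batch of hypothetical preprocessed (e.g. z-scored per channel) windows
window = torch.randn(32, 6, 128)
logits = HARConv1D()(window)
```

Comparing accuracy with and without preprocessing would then amount to training the same network on the raw and on the transformed windows.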

Contributors: Li, Vincent (Author) / Turaga, Pavan (Thesis director) / Buman, Matthew (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Rapid developments are occurring in the arena of activity-based microsimulation models. Advances in computational power, econometric methodologies and data collection have all contributed to the development of microsimulation tools for planning applications. There has also been interest in modeling child daily activity-travel patterns and their influence on those of adults in the household using activity-based microsimulation tools. It is conceivable that most of the children are largely dependent on adults for their activity engagement and travel needs and hence would have considerable influence on the activity-travel schedules of adult members in the household. In this context, a detailed comparison of various activity-travel characteristics of adults in households with and without children is made using the National Household Travel Survey (NHTS) data. The analysis is used to quantify and decipher the nature of the impact of activities of children on the daily activity-travel patterns of adults. It is found that adults in households with children make a significantly higher proportion of high occupancy vehicle (HOV) trips and lower proportion of single occupancy vehicle (SOV) trips when compared to those in households without children. They also engage in more serve passenger activities and fewer personal business, shopping and social activities. A framework for modeling activities and travel of dependent children is proposed. The framework consists of six sub-models to simulate the choice of going to school/pre-school on a travel day, the dependency status of the child, the activity type, the destination, the activity duration, and the joint activity engagement with an accompanying adult. Econometric formulations such as binary probit and multinomial logit are used to obtain behaviorally intuitive models that predict children's activity skeletons. The model framework is tested using a 5% sample of a synthetic population of children for Maricopa County, Arizona and the resulting patterns are validated against those found in NHTS data. Microsimulation of these dependencies of children can be used to constrain the adult daily activity schedules. The deployment of this framework prior to the simulation of adult non-mandatory activities is expected to significantly enhance the representation of the interactions between children and adults in activity-based microsimulation models.
Contributors: Sana, Bhargava (Author) / Pendyala, Ram M. (Thesis advisor) / Ahn, Soyoung (Committee member) / Kaloush, Kamil (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
At present, the vast majority of human subjects with neurological disease are still diagnosed through in-person assessments and qualitative analysis of patient data. In this paper, we propose to use Topological Data Analysis (TDA) together with machine learning tools to automate the process of Parkinson’s disease classification and severity assessment. An automated, stable, and accurate method to evaluate Parkinson’s would be significant in streamlining diagnoses of patients and providing families more time for corrective measures. We propose a methodology which incorporates TDA into analyzing Parkinson’s disease postural shifts data through the representation of persistence images. Studying the topology of a system has proven to be invariant to small changes in data and has been shown to perform well in discrimination tasks. The contributions of the paper are twofold. We propose a method to 1) distinguish healthy patients from those afflicted by the disease and 2) assess the severity of the disease. We explore the use of the proposed method in an application involving a Parkinson’s disease dataset comprising healthy-elderly, healthy-young, and Parkinson’s disease patients.
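The following sketch is a simplified stand-in for the proposed pipeline: it assumes persistence images have already been computed from the postural-shift data with a TDA library (that step is not shown) and classifies their flattened pixel vectors with a scikit-learn SVM; the array shapes, subject counts, and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one 20x20 persistence image per subject (computed
# beforehand from the postural point clouds), plus a healthy/Parkinson's label.
rng = np.random.default_rng(42)
persistence_images = rng.random((120, 20, 20))
labels = rng.integers(0, 2, 120)            # 0 = healthy, 1 = Parkinson's

# Flatten each image into a feature vector and classify with an RBF SVM
X = persistence_images.reshape(len(labels), -1)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

Severity assessment would follow the same pattern with multi-level labels in place of the binary ones.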
Contributors: Rahman, Farhan Nadir (Co-author) / Nawar, Afra (Co-author) / Turaga, Pavan (Thesis director) / Krishnamurthi, Narayanan (Committee member) / Electrical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Asphalt pavements deteriorate over time and are subjected to various distresses like rutting, fatigue cracking, stripping, and raveling. In this study, an experiment to indirectly assess aggregate stripping was completed in order to evaluate the effect of binder type and aging on the binder-aggregate bond under dry conditioning. The asphalts used in the study, which included both non-polymer modified and polymer modified asphalts, are commonly used in the state of Arizona. The phenomenon of stripping was simulated using the Bitumen Bond Strength (BBS) test and evaluated for Arizona binders. The BBS test is a simple test that measures the "pull-off" tensile strength of the bond between asphalt and the aggregate. Polymer modified binders were found to have lower pull-off strength than the non-modified (neat) binders, which possessed greater pull-off strength but lower elasticity, causing their failure to be brittle and sudden. However, when aged binder was used, the bond strength expectedly decreased for non-polymer modified asphalts but surprisingly increased for polymer modified asphalts. Both un-aged neat and polymer modified binders exhibited cohesive failure, whereas among the aged binders only the polymer modified ones failed in cohesion; the aged non-polymer modified binders exhibited adhesive failure.
Contributors: Ponce, Esai Jonathon (Author) / Kaloush, Kamil (Thesis director) / Gundla, Akshay (Committee member) / Civil, Environmental and Sustainable Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Video summarization is gaining popularity in the technological culture, where positioning the mouse pointer on top of a video results in a quick overview of what the video is about. The algorithm usually selects frames in a time sequence through systematic sampling. Invariably, there are other applications, like video surveillance, web-based video surfing, and video archival, which can benefit from efficient and concise video summaries. In this project, we explored several clustering algorithms and how these can be combined and deconstructed to make the summarization algorithm more efficient and relevant. We focused on two metrics: reducing error and reducing redundancy in the summary. To reduce the error, an online k-means clustering algorithm was used; to reduce redundancy, we applied two different methods: the volume of convex hulls and the true diversity measure that is usually used in biological disciplines. The algorithm was efficient and computationally cost-effective due to its online nature. The diversity maximization (or redundancy reduction) technique using the volume of convex hulls showed better results than other conventional methods on 50 different videos. For the true diversity measure, little work has been done on its behavior in the context of video summarization. When we applied it, the algorithm stalled because the true diversity saturated due to the inherent initialization present in the algorithm. We explored the nature of this measure to gain a better understanding of how it can help make summarization more intuitive and give the user a handle to customize the summary.
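The snippet below is a minimal sketch, under assumed frame descriptors, of the three ingredients mentioned above: a single-pass online k-means step, redundancy measured by convex-hull volume (via scipy), and an order-1 true diversity (the exponential of Shannon entropy). It is illustrative, not the project's implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def online_kmeans(frames, k):
    """Single-pass (online) k-means over frame feature vectors: each new
    frame updates its nearest centroid by a decaying step size."""
    centroids = frames[:k].astype(float)
    counts = np.ones(k)
    for x in frames[k:]:
        j = np.argmin(np.linalg.norm(centroids - x, axis=1))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]
    return centroids

def hull_volume(points):
    """Volume of the convex hull spanned by the selected summary frames;
    a larger volume indicates a more diverse (less redundant) summary."""
    return ConvexHull(points).volume

def true_diversity(weights):
    """Order-1 true diversity: exponential of the Shannon entropy of the
    cluster weights."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    return float(np.exp(-(p * np.log(p)).sum()))

# Hypothetical frame descriptors (e.g. color histograms reduced to 3-D)
frames = np.random.default_rng(1).random((400, 3))
summary = online_kmeans(frames, k=8)
print(hull_volume(summary), true_diversity(np.ones(8)))
```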
Contributors: Masroor, Ahnaf (Co-author) / Anirudh, Rushil (Co-author) / Turaga, Pavan (Thesis director) / Spanias, Andreas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description

The City of Phoenix Street Transportation Department partnered with the Rob and Melani Walton Sustainability Solutions Service at Arizona State University (ASU) and researchers from various ASU schools to evaluate the effectiveness, performance, and community perception of the new pavement coating. Data collection and analysis occurred across multiple neighborhoods, at varying times of day and across months, over the course of one year (July 15, 2020–July 14, 2021), allowing the team to study the impacts of the surface treatment under various weather conditions.

Created: 2021-09
Description
This study examines the factors that influence acquisition premiums in mergers and acquisitions (M&A) of Chinese pharmaceutical companies and proposes two new factors of particular importance to pharmaceutical M&A: approvals for drugs cleared for production and approvals for new drugs under development. Using M&A events among listed companies in China's pharmaceutical industry from January 2011 to December 2019 as the sample, the value of drugs under development and production-approved drugs is measured along four dimensions: whether the target holds such approvals; the number of drugs under development and of production-approved drugs; a breakdown into innovative drugs and generic drugs; and the market value of the approvals held by the target company. The study finds that drug approvals do not have a very significant effect on acquisition premiums. It further explores how drug approvals affect the acquirer's valuation of the target company. The empirical results show that Chinese pharmaceutical companies do take the value of drugs under development and production-approved drugs into account when valuing acquisition targets. The study also finds that, among production-approved drugs, generic drug approvals held by the target have a more significant effect than innovative drugs, whereas among drugs under development, acquirers place more weight on innovative drugs, and generic drugs under development have little effect on acquisition valuations. Finally, two representative cases are analyzed to further explore the impact of drug approvals on pharmaceutical M&A.
Contributors: Ye, Tao (Author) / Shen, Wei (Thesis advisor) / Chang, Chun (Thesis advisor) / Jiang, Zhan (Committee member) / Gu, Bin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The automotive industry is a pillar industry of the national economy, generating substantial output value and creating jobs. As competition among automobile manufacturers intensifies, automobile dealers will diverge markedly and gradually consolidate toward the leading players. Against this industry background, this study conducts a detailed, in-depth analysis of dealers' overall operations and profitability: it systematically integrates dealers' operational and financial data and applies statistical methods to perform a thorough attribution analysis of dealer profitability, in order to identify the key business factors that drive it. The findings are intended to improve the systematic understanding of the industry's development patterns and business models, thereby guiding business practice in this field and improving dealers' overall operating performance. The research proceeds in four stages. First, it reviews the history of China's automobile consumption industry, describes the characteristics of the domestic macroeconomy and automobile market during the sample period (2018-2020), and introduces the geographic distribution, qualification and performance rating system, and operating characteristics of Brand X dealers, as well as the manufacturer's dealer support policies. Second, it focuses on research hypotheses, models, and methods: it analyzes the business structure and operational management of Brand X dealers, identifies the key indicator variables that affect dealer profitability, and formulates research hypotheses and the corresponding models (a time-series model and a panel regression model). Third, it carries out descriptive statistical analysis of the dealer data to characterize the dynamics of key business indicators over the sample period, and uses time-series regression to examine how each business indicator affects overall dealer profitability. Fourth, it uses an individual fixed-effects panel regression model to study the factors affecting dealer profitability under different grouping (control) conditions and the sensitivity of profitability to these factors, thereby revealing the underlying drivers of dealer profitability more deeply and comprehensively. Based on the results of these four stages, the study further discusses how to improve dealer profitability and proposes corresponding measures. The conclusions are drawn solely from qualitative and quantitative analysis of the operating and financial data of Brand X dealers, but it is hoped that the findings can serve as a practical reference and guide for automobile dealers seeking to improve their business operations.
Contributors: Pan, Guangxiong (Author) / Shen, Wei (Thesis advisor) / Wu, Fei (Thesis advisor) / Zhu, Qigui (Committee member) / Arizona State University (Publisher)
Created: 2022