Matching Items (2)
Description
The A-share and B-share markets in mainland China form a uniquely segmented securities market. For dual-listed companies, the A and B shares (hereafter "AB shares") carry the same rights to the same shares, yet B shares have long traded at a discount to A shares, a phenomenon known as the "B Share Puzzle". This is a prominent question in international capital markets, and research on it has continued for years. This thesis studies the relationship between the B-share discount and the policies the Chinese government has issued to regulate the long-term development of the stock market. A review of the history of AB shares identifies two policies that intervened in and regulated their long-term development: the February 2001 decision to allow mainland Chinese residents to invest in B shares (Policy 1) and the split-share structure reform of China's securities market that began on April 29, 2005 (Policy 2). On this basis, empirical analysis with econometric methods finds that Policy 1 and Policy 2 are significantly correlated with the B-share discount rate, and that each intervention was targeted, so that under policy influence the change in the B-share discount rate was realized through significant changes in A-share prices or B-share prices, respectively. In addition, the average B-share discount rate exhibits volatility clustering, small fluctuations, and mean reversion, and is therefore predictable.
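The abstract names the two policy events and the B-share discount rate but not the estimation details. The sketch below is a hypothetical reconstruction: the data file and column names are invented, the discount is defined by one common convention (A-share price minus the currency-converted B-share price, relative to the A-share price), and only the two policy dates come from the abstract, which gives only the month for Policy 1.

    # Hypothetical sketch of a policy-dummy regression on the average
    # B-share discount; file and column names are invented.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("ab_share_prices.csv", parse_dates=["date"])

    # One common definition of the B-share discount rate.
    df["discount"] = (df["price_a"] - df["price_b_cny"]) / df["price_a"]

    daily = df.groupby("date")["discount"].mean().to_frame("avg_discount")
    daily["policy1"] = (daily.index >= "2001-02-01").astype(int)  # B shares opened to residents
    daily["policy2"] = (daily.index >= "2005-04-29").astype(int)  # split-share reform begins

    X = sm.add_constant(daily[["policy1", "policy2"]].astype(float))
    print(sm.OLS(daily["avg_discount"], X).fit().summary())

The volatility-clustering and mean-reversion findings would typically be examined further with a GARCH-type model on the discount series, which this sketch omits.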
Contributors: Liu, Li (Author) / Li, Hongmin (Thesis advisor) / Zhang, Jie (Thesis advisor) / Chen, Hui (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Recently, well-designed and well-trained neural networks have been able to yield state-of-the-art results across many domains, including data mining, computer vision, and medical image analysis. But progress has been limited for tasks where labels are difficult or impossible to obtain. This reliance on exhaustive labeling is a critical bottleneck in the rapid deployment of neural networks. Moreover, current research scales poorly to a large number of unseen concepts and is passively spoon-fed with data and supervision.

To overcome these data scarcity and generalization issues, in my dissertation I first propose two unsupervised conventional machine learning algorithms, hyperbolic stochastic coding and multi-resemble multi-target low-rank coding, to address the incomplete-data and missing-label problems. I further introduce a deep multi-domain adaptation network that leverages the power of deep learning by transferring rich knowledge from a large labeled source dataset. I also present a novel time-sequence dynamically hierarchical network that adaptively simplifies itself to cope with scarce data.
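The dissertation's networks are not reproduced here; as a minimal sketch of the transfer idea behind the domain adaptation step (the architecture, sizes, and training loop are assumptions), a source-pretrained backbone is frozen and a small head is adapted on scarce target labels:

    # Minimal transfer sketch (an assumption, not the dissertation's actual
    # multi-domain adaptation network): reuse a source-pretrained backbone
    # and adapt only a small head on the scarce target data.
    import torch
    import torch.nn as nn

    backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
    head = nn.Linear(32, 10)
    # ... pretrain backbone + head on the large labeled source dataset ...

    for p in backbone.parameters():   # keep the transferred representation fixed
        p.requires_grad = False

    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def adapt_step(x_target, y_target):
        """One fine-tuning step on a batch of labeled target data."""
        loss = loss_fn(head(backbone(x_target)), y_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()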

To learn a large number of unseen concepts, lifelong machine learning offers many advantages, including the ability to abstract knowledge from prior learning and use that experience to aid future learning, regardless of how much data is currently available. Building on this capability and making it versatile, I propose deep multi-task weight consolidation to accumulate knowledge continuously and significantly reduce data requirements across a variety of domains. Inspired by recent breakthroughs in automatically learning suitable neural network architectures (AutoML), I develop a nonexpansive AutoML framework that trains an online model without an abundance of labeled data: it automatically expands the network to increase model capability when necessary, then compresses the model to maintain efficiency.
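The abstract does not define deep multi-task weight consolidation. In the spirit of elastic weight consolidation (a related, well-known technique, offered here only as an assumption about the flavor of the penalty), parameters important to earlier tasks are anchored with a quadratic term so that new tasks do not overwrite them:

    # Generic EWC-style consolidation penalty (an assumption about the flavor
    # of "weight consolidation"; not the dissertation's multi-task method).
    import torch

    def consolidation_loss(model, anchors, importances, lam=1.0):
        """anchors/importances: {param_name: tensor} saved after a prior task."""
        penalty = torch.zeros(())
        for name, p in model.named_parameters():
            if name in anchors:
                # Penalize drift from the anchored value, weighted by importance.
                penalty = penalty + (importances[name] * (p - anchors[name]) ** 2).sum()
        return 0.5 * lam * penalty

    # Usage: total_loss = task_loss + consolidation_loss(model, anchors, importances)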

In my current ongoing work, I propose an alternative form of supervised learning that does not require direct labels. It uses various kinds of supervision derived from an image or object as target values for the target tasks, and this turns out to be surprisingly effective. The proposed method requires only few-shot labeled data for training, learns the rest of the information it needs in a self-supervised manner, and generalizes to datasets not seen during training.
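The abstract does not name its exact supervision signal; rotation prediction is one standard way to derive a target value from the image itself, shown here purely as an illustration:

    # Rotation prediction: a standard self-supervised pretext task, used here
    # only to illustrate deriving labels from the image itself (the abstract
    # does not specify its actual signal).
    import torch

    def rotation_batch(images):
        """images: (N, C, H, W) tensor. Returns rotated copies and labels 0-3."""
        ks = torch.randint(0, 4, (images.shape[0],))
        rotated = torch.stack(
            [torch.rot90(img, int(k), dims=(1, 2)) for img, k in zip(images, ks)]
        )
        return rotated, ks  # train a classifier to predict k: labels for free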
Contributors: Zhang, Jie (Author) / Wang, Yalin (Thesis advisor) / Liu, Huan (Committee member) / Stonnington, Cynthia (Committee member) / Liang, Jianming (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020