Matching Items (2)
Description
Chinese real-estate US-dollar bonds have flourished in recent years, showing great potential and becoming a new highlight of the market. As of the end of 2021, property bonds were the second-largest category of offshore Chinese bonds, after financial bonds. However, empirical research on Chinese real-estate US-dollar bonds remains almost entirely absent. This paper takes as its sample all Chinese real-estate US-dollar bonds issued from the beginning of 2017 through the end of 2021 and, using multiple linear regression, constructs a primary-market issuance pricing model to analyze in depth the determinants of issuance credit spreads. Grounded in practice, the paper innovatively selects proxy variables for developer fundamentals, Federal Reserve monetary policy, and the intensity of real-estate regulation, and empirically examines key questions of broad concern that shape the market's development, including the pricing determinants of Chinese real-estate US-dollar bonds and their mechanisms. The study finds that (1) the issuer's land-reserve scale is significantly negatively correlated with the issuance spread; (2) the RMB/USD exchange rate is significantly positively correlated with the issuance spread; (3) the intensity of real-estate regulatory policy is significantly positively correlated with the issuance spread; and (4) for issuers in the high-yield segment, the net debt ratio has no significant effect on the issuance spread, while the effect of regulatory policy is significantly stronger. Based on these findings, the paper offers recommendations for issuers on reasonably controlling financing costs and for advancing innovative regulation of the Chinese US-dollar bond market.
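The regression design described in the abstract can be illustrated with a minimal sketch. The variable names and data below are hypothetical (synthetic), chosen only to mirror the reported signs; the thesis's actual covariates, controls, and estimation details are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical proxy variables (synthetic data, for illustration only)
land_reserve = rng.normal(size=n)      # issuer land-reserve scale
fx_rate = rng.normal(size=n)           # RMB/USD exchange rate
policy_intensity = rng.normal(size=n)  # real-estate regulation intensity

# Simulate spreads (in basis points) with the signs the study reports:
# negative on land reserves, positive on FX rate and policy intensity.
spread = (300
          - 40 * land_reserve
          + 25 * fx_rate
          + 30 * policy_intensity
          + rng.normal(scale=10, size=n))

# Multiple linear regression via ordinary least squares
X = np.column_stack([np.ones(n), land_reserve, fx_rate, policy_intensity])
beta, *_ = np.linalg.lstsq(X, spread, rcond=None)
print(beta)  # [intercept, b_land, b_fx, b_policy]
```

On this synthetic sample the fitted coefficients recover the simulated signs: negative on land reserves, positive on the exchange rate and regulatory intensity.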
Contributors
Cao, Ziyan (Author) / Shen, Wei (Thesis advisor) / Wu, Fei (Thesis advisor) / Zhang, Jie (Committee member) / Arizona State University (Publisher)
Created
2023
Description
Recently, well-designed and well-trained neural networks have yielded state-of-the-art results across many domains, including data mining, computer vision, and medical image analysis. However, progress has been limited for tasks where labels are difficult or impossible to obtain. This reliance on exhaustive labeling is a critical limitation to the rapid deployment of neural networks. Moreover, current research scales poorly to large numbers of unseen concepts and is passively spoon-fed with data and supervision.

To overcome these data-scarcity and generalization issues, in my dissertation I first propose two unsupervised conventional machine learning algorithms, hyperbolic stochastic coding and multi-resemble multi-target low-rank coding, to address incomplete data and missing labels. I further introduce a deep multi-domain adaptation network that leverages the power of deep learning by transferring rich knowledge from a large labeled source dataset. I also develop a novel time-sequence dynamically hierarchical network that adaptively simplifies itself to cope with scarce data.

To learn a large number of unseen concepts, lifelong machine learning offers many advantages, including abstracting knowledge from prior learning and using that experience to aid future learning, regardless of how much data is currently available. To incorporate this capability and make it versatile, I propose deep multi-task weight consolidation to accumulate knowledge continuously and significantly reduce data requirements across a variety of domains. Inspired by recent breakthroughs in automatically learning suitable neural network architectures (AutoML), I develop a nonexpansive AutoML framework to train an online model without an abundance of labeled data. This framework automatically expands the network to increase model capacity when necessary, then compresses the model to maintain efficiency.
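Weight consolidation is commonly realized as a quadratic penalty that anchors parameters important to earlier tasks, in the style of elastic weight consolidation; the dissertation's deep multi-task variant differs in its details. A minimal sketch with hypothetical names:

```python
import numpy as np

def consolidation_penalty(w, w_old, importance, lam=1.0):
    """Quadratic penalty anchoring parameters w to the values w_old
    learned on earlier tasks, weighted by a per-parameter importance
    estimate (e.g., a diagonal Fisher information approximation)."""
    return 0.5 * lam * np.sum(importance * (w - w_old) ** 2)

def total_loss(task_loss, w, w_old, importance, lam=1.0):
    """New-task loss plus the consolidation term: moving important
    weights away from their old values is penalized, so knowledge
    from prior tasks is retained while the new task is learned."""
    return task_loss + consolidation_penalty(w, w_old, importance, lam)

# Illustrative values (hypothetical, not from the dissertation)
w_old = np.array([1.0, -2.0, 0.5])
importance = np.array([10.0, 0.1, 5.0])  # first weight matters most
w = np.array([1.5, 0.0, 0.5])
print(consolidation_penalty(w, w_old, importance))  # → 1.45
```

The penalty vanishes when the new weights coincide with the old ones, and grows fastest along directions the importance estimate marks as critical to previous tasks.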

In my ongoing work, I propose an alternative form of supervised learning that does not require direct labels. It uses various forms of supervision derived from an image or object as target values to supervise the target tasks without labels, and this turns out to be surprisingly effective. The proposed method requires only few-shot labeled data to train, and can learn the information it needs in a self-supervised manner, generalizing to datasets not seen during training.
Contributors
Zhang, Jie (Author) / Wang, Yalin (Thesis advisor) / Liu, Huan (Committee member) / Stonnington, Cynthia (Committee member) / Liang, Jianming (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created
2020