Description
Well-designed and well-trained neural networks now yield state-of-the-art results across many domains, including data mining, computer vision, and medical image analysis. However, progress has been limited for tasks where labels are difficult or impossible to obtain. This reliance on exhaustive labeling is a critical limitation to the rapid deployment of neural networks. Moreover, current approaches scale poorly to large numbers of unseen concepts and are passively spoon-fed data and supervision.

To overcome these data-scarcity and generalization issues, in my dissertation I first propose two unsupervised conventional machine learning algorithms, hyperbolic stochastic coding and multi-resemble multi-target low-rank coding, to address the incomplete-data and missing-label problems. I further introduce a deep multi-domain adaptation network that leverages the power of deep learning by transferring rich knowledge from a large labeled source dataset. I also develop a novel time-sequence dynamically hierarchical network that adaptively simplifies itself to cope with scarce data.

To learn a large number of unseen concepts, lifelong machine learning offers many advantages, including abstracting knowledge from prior learning and using that experience to aid future learning, regardless of how much data is currently available. Incorporating this capability and making it versatile, I propose deep multi-task weight consolidation to accumulate knowledge continuously and significantly reduce data requirements across a variety of domains. Inspired by recent breakthroughs in automatically learning suitable neural network architectures (AutoML), I develop a nonexpansive AutoML framework that trains an online model without an abundance of labeled data. The framework automatically expands the network to increase model capacity when necessary, then compresses the model to maintain efficiency.

In my ongoing work, I propose an alternative form of supervised learning that does not require direct labels. It exploits various forms of supervision derived from an image or object as target values for supervising tasks without labels, and this turns out to be surprisingly effective. The proposed method requires only few-shot labeled data to train, learns the information it needs in a self-supervised manner, and generalizes to datasets not seen during training.
ContributorsZhang, Jie (Author) / Wang, Yalin (Thesis advisor) / Liu, Huan (Committee member) / Stonnington, Cynthia (Committee member) / Liang, Jianming (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2020
DescriptionWith economic and social progress, enterprises are expected not only to pursue profit but also to be accountable to stakeholders and the environment and to assume corresponding social responsibilities. The public has paid growing attention to corporate social responsibility (CSR), and as the concept has taken hold, regulators have issued a series of policies and regulations on CSR information disclosure to standardize and guide firms' disclosure practices. Grounded in efficient market theory, information asymmetry theory, and stakeholder theory, this dissertation takes companies listed on the Hong Kong Stock Exchange from 2010 to 2018 as its sample and uses empirical methods to bring CSR into the study of stock price crash risk. Combining theoretical deduction with empirical testing, it moves beyond the return-based framework of prior literature to examine the crash effect of CSR from the perspective of financial capital markets, systematically exploring how CSR affects stock price crash risk and the factors that shape this effect. The results show that, compared with firms that do not disclose CSR information, firms that do disclose face lower future stock price crash risk. Because the Hong Kong stock market is dominated by institutional investors, the dissertation further examines the interaction between CSR disclosure and institutional ownership, finding that the mitigating effect of CSR disclosure on future crash risk is more pronounced in firms with lower institutional ownership. In addition, using the proportion of independent directors on the board as a corporate governance factor, it examines the interaction between CSR disclosure and board independence, finding that the stronger the board's independence, the more significant the mitigating effect of CSR disclosure on crash risk. Finally, relative to non-state-owned enterprises, state ownership weakens the mitigating effect of CSR disclosure on future crash risk.
ContributorsHe, Jie (Author) / Zhu, David, H. (Thesis advisor) / Zhang, Jie (Thesis advisor) / Hu, Jie (Committee member) / Arizona State University (Publisher)
Created2021