Three dilemmas plague the governance of scientific research and technological innovation: the dilemma of orientation, the dilemma of legitimacy, and the dilemma of control. The dilemma of orientation risks innovation that proceeds heedless of its long-term implications. The dilemma of legitimacy grapples with the delegation of authority in democracies, which often comes at the expense of broader public interests. The dilemma of control holds that the undesirable implications of new technologies are hard to grasp, yet once grasped, all too difficult to remedy. That humanity has innovated itself into the sustainability crisis is a prime manifestation of these dilemmas.
Responsible innovation (RI), with its foci on anticipation, inclusion, reflection, coordination, and adaptation, aims to mitigate the dilemmas of orientation, legitimacy, and control. The aspiration of RI is to bend the processes of technology development toward more just, sustainable, and societally desirable outcomes. Despite the potential for fruitful interaction across RI's constitutive domains of sustainability science and the social studies of science and technology, most sustainability scientists under-theorize the sociopolitical dimensions of technological systems, and most science and technology scholars hesitate to take a normative, solutions-oriented stance. Efforts to advance RI, although notable, tend to be one-off projects that do not lend themselves to comparative analysis for learning.
In this dissertation, I offer an intervention research framework to aid the systematic study of intentional programs of change that advance responsible innovation. Two empirical studies demonstrate the framework in application. The first evaluates Science Outside the Lab, a program that helps early-career scientists and engineers understand the complexities of science policy. The second evaluates a Community Engagement Workshop, a program that helps engineers look beyond technology, listen to and learn from people, and empower communities. Each program is efficacious in helping scientists and engineers engage more thoughtfully with mediators of science and technology governance dilemmas: Science Outside the Lab by revealing the dilemmas of orientation and legitimacy; the Community Engagement Workshop by offering reflexive and inclusive approaches to control. As part of a larger intervention research portfolio, these and other projects hold promise for aiding the governance of science and technology through responsible innovation.
The robustness of a neural network is defined as the stability of the network output under small input perturbations. Neural networks have been shown to be very sensitive to such perturbations: the predictions of convolutional neural networks can differ completely for input images that are visually indistinguishable to human eyes. Exploiting this property, attackers can reverse-engineer inputs to trick machine learning systems in targeted ways. These adversarial attacks have proven surprisingly effective, raising serious concerns for safety-critical applications such as autonomous driving. Meanwhile, many established defense mechanisms have been shown to be vulnerable to more advanced attacks proposed later, and how to improve the robustness of neural networks remains an open question.
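To make the attack mechanism concrete, the following is a minimal sketch of one well-known member of this attack family, the Fast Gradient Sign Method (FGSM). It illustrates the general idea of gradient-based adversarial perturbations rather than any specific attack studied in this dissertation; the model interface, the perturbation budget epsilon, and the [0, 1] image range are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method.

    Shifts each input by epsilon in the direction that most increases
    the classification loss; for small epsilon the perturbed image is
    visually indistinguishable from the original yet can flip the
    model's prediction.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Attack success is typically measured as the drop in accuracy on the perturbed inputs relative to the clean ones.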
The generalizability of a neural network refers to its ability to perform well on unseen data rather than only on the data it was trained on. Neural networks often fail to generalize reliably when the test data come from a distribution different from the training distribution, which makes autonomous driving systems risky in new environments. Generalizability can also be limited by a scarcity of training data, yet acquiring large datasets, whether experimentally or numerically, can be expensive for engineering applications such as materials and chemical design.
In this dissertation, we are thus motivated to improve the robustness and generalizability of neural networks. First, unlike traditional bottom-up classifiers, we use a pre-trained generative model to perform top-down reasoning and infer label information. The proposed generative classifier proves promising in handling input distribution shifts. Second, we focus on improving network robustness and propose an extension to adversarial training that accounts for transformation invariance (a sketch of this idea appears below). The proposed method improves robustness over state-of-the-art methods by 2.5% on MNIST and 3.7% on CIFAR-10. Third, we focus on designing networks that generalize well when predicting physical responses. Prior physics knowledge guides the design of the network architecture, enabling efficient learning and inference. The proposed network generalizes well even when trained on a single image pair.
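As a rough illustration of the second contribution, the sketch below shows one way a transformation-invariance consistency term could be added to standard PGD-based adversarial training. The choice of transformations, the KL-divergence consistency loss, and all hyperparameters (epsilon, alpha, steps, lam) are illustrative assumptions, not the dissertation's exact formulation.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Hypothetical label-preserving transformations; the dissertation's
# actual choice of transformations may differ.
random_transform = T.Compose([T.RandomRotation(15),
                              T.RandomCrop(28, padding=2)])

def train_step(model, optimizer, x, y,
               epsilon=0.3, alpha=0.01, steps=10, lam=1.0):
    """One adversarial-training step with an invariance penalty."""
    # Inner maximization: multi-step PGD finds a worst-case perturbation
    # inside the L-infinity ball of radius epsilon around x.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(
            F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project back
        x_adv = x_adv.clamp(0.0, 1.0)

    # Outer minimization: adversarial loss plus a consistency term that
    # penalizes prediction changes under the random transformations.
    logits_adv = model(x_adv)
    logits_tf = model(random_transform(x_adv))
    adv_loss = F.cross_entropy(logits_adv, y)
    inv_loss = F.kl_div(F.log_softmax(logits_tf, dim=1),
                        F.softmax(logits_adv, dim=1).detach(),
                        reduction="batchmean")
    total = adv_loss + lam * inv_loss
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```

The defaults here mirror common MNIST settings for PGD training; on CIFAR-10 one would typically shrink epsilon and alpha and adjust the crop size.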