Matching Items (17)

Description
Ball Grid Arrays (BGAs) using lead-free or lead-rich solder materials are widely used as Second Level Interconnects (SLI) in mounting packaged components to the printed circuit board (PCB). The reliability of these solder joints is of significant importance to the performance of microelectronic components and systems. Product design/form-factor, solder material, manufacturing process, and use condition, as well as the inherent variabilities present in the system, greatly influence product reliability. Accurate reliability analysis requires an integrated approach that concurrently accounts for all these factors and their synergistic effects. Such an integrated and robust methodology can be used in the design and development of new and advanced microelectronic systems and can provide significant improvements in cycle time, cost, and reliability. The IMPRPK approach is based on a probabilistic methodology focusing on three major tasks: (1) characterization of BGA solder joints to identify failure mechanisms and obtain statistical data, (2) finite element modeling (FEM) to predict the system response needed for life prediction, and (3) development of a probabilistic methodology to predict the reliability, as well as the sensitivity of the system to various parameters and variabilities. These tasks and the predictive capabilities of IMPRPK in microelectronic reliability analysis are discussed.
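To make the probabilistic idea concrete, here is a minimal Monte Carlo sketch of solder-joint life prediction: input variabilities (a cyclic strain range from FEM, fatigue constants) are sampled and propagated through a Coffin-Manson-type life model to yield a failure-probability estimate. The distributions and parameter values are hypothetical illustrations, not the IMPRPK model itself.

```python
# Minimal Monte Carlo sketch of probabilistic solder-joint life prediction.
# All parameter values are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# Sample input variabilities (assumed distributions):
d_gamma = rng.lognormal(mean=np.log(0.01), sigma=0.15, size=N)  # plastic strain range from FEM
eps_f = rng.normal(0.325, 0.02, size=N)   # fatigue ductility coefficient
c = rng.normal(-0.442, 0.01, size=N)      # fatigue ductility exponent

# Coffin-Manson / Engelmaier-style cycles to failure:
#   Nf = 0.5 * (d_gamma / (2 * eps_f)) ** (1 / c)
Nf = 0.5 * (d_gamma / (2.0 * eps_f)) ** (1.0 / c)

target = 3000.0  # required thermal cycles (hypothetical requirement)
pof = np.mean(Nf < target)  # probability of failure before target
print(f"median life: {np.median(Nf):.0f} cycles, "
      f"P(failure before {target:.0f} cycles): {pof:.3%}")
```

Parameter sensitivities can be estimated from the same samples, e.g. by correlating each sampled input with the resulting life.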
Contributors: Fallah-Adl, Ali (Author) / Tasooji, Amaneh (Thesis advisor) / Krause, Stephen (Committee member) / Alford, Terry (Committee member) / Jiang, Hanqing (Committee member) / Mahajan, Ravi (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Dealloying induced stress corrosion cracking is particularly relevant in energy conversion systems (both nuclear and fossil fuel), as many failures in alloys such as austenitic stainless steel and nickel-based systems result directly from dealloying. This study provides evidence of the role of unstable dynamic fracture processes in dealloying induced stress-corrosion cracking of face-centered cubic alloys. Corrosion of such alloys often results in the formation of a brittle nanoporous layer which, we hypothesize, serves to nucleate a crack that, owing to dynamic effects, penetrates into the un-dealloyed parent phase alloy. Thus, since there is essentially a purely mechanical component of cracking, stress corrosion crack propagation rates can be significantly larger than those predicted from electrochemical parameters. The main objective of this work is to examine and test this hypothesis under conditions relevant to stress corrosion cracking. Silver-gold alloys serve as a model system for this study since hydrogen effects can be neglected on a thermodynamic basis, which allows us to focus on a single cracking mechanism. In order to study various aspects of this problem, the dynamic fracture properties of monolithic nanoporous gold (NPG) were examined in air and under electrochemical conditions relevant to stress corrosion cracking. The detailed processes associated with the crack injection phenomenon were also examined by forming dealloyed nanoporous layers of prescribed properties on un-dealloyed parent phase structures and measuring crack penetration distances. Dynamic fracture in monolithic NPG and in crack injection experiments was examined using high-speed (10⁶ frames s⁻¹) digital photography. The tunable set of experimental parameters included the NPG length scale (20-40 nm), the thickness of the dealloyed layer (10-3000 nm), and the electrochemical potential (0.5-1.5 V). The results of crack injection experiments were characterized using dual-beam focused ion beam/scanning electron microscopy. Together, these tools allow us to examine very accurately the detailed structure and composition of dealloyed grain boundaries and to compare crack injection distances to the depth of dealloying. The results of this work should provide a basis for new mathematical modeling of dealloying induced stress corrosion cracking while providing a sound physical basis for the design of new alloys that may not be susceptible to this form of cracking. Additionally, the obtained results should be of broad interest to researchers interested in the fracture properties of nano-structured materials. The findings will open up new avenues of research apart from any implications the study may have for stress corrosion cracking.
Contributors: Sun, Shaofeng (Author) / Sieradzki, Karl (Thesis advisor) / Jiang, Hanqing (Committee member) / Peralta, Pedro (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The focus of this investigation includes three aspects. First, the development of nonlinear reduced order modeling techniques for the prediction of the response of complex structures exhibiting "large" deformations, i.e. a geometrically nonlinear behavior, and modeled within a commercial finite element code. The present investigation builds on a general methodology, successfully validated in recent years on simpler panel structures, by developing a novel identification strategy for the reduced order model parameters that enables the consideration of the large number of modes needed for complex structures, and by extending an automatic strategy for the selection of the basis functions used to accurately represent the displacement field. These novel developments are successfully validated on the nonlinear static and dynamic responses of a 9-bay panel structure modeled within Nastran. In addition, a multi-scale approach based on Component Mode Synthesis methods is explored. Second, an assessment of the predictive capabilities of nonlinear reduced order models for the prediction of the large displacement and stress fields of panels that have a geometric discontinuity; a flat panel with a notch was used for this assessment. It is demonstrated that the reduced order models of both virgin and notched panels provide a close match to the displacement field obtained from full finite element analyses of the notched panel for moderately large static and dynamic responses. In regard to stresses, it is found that the notched-panel reduced order model leads to a close prediction of the stress distribution obtained on the notched panel as computed by the finite element model. Two enrichment techniques, based on superposition of the notch effects on the virgin panel stress field, are proposed to permit a close prediction of the stress distribution of the notched panel from the reduced order model of the virgin one. A very good prediction of the full finite element results is achieved with both enrichments for static and dynamic responses. Finally, computational challenges associated with the solution of the reduced order model equations are discussed. Two alternatives for reducing the computational time for the solution of these problems are explored.
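For context, nonlinear reduced order models of this kind expand the displacement field on a fixed basis and identify linear, quadratic, and cubic stiffness coefficients from the finite element model. A sketch of the commonly used governing form follows; the notation is assumed for illustration, not taken verbatim from the dissertation:

```latex
% Displacement expansion on M basis functions \psi_i(X):
%   u(X,t) = \sum_{i=1}^{M} q_i(t)\, \psi_i(X)
% Reduced equations with identified stiffness coefficients:
\begin{equation}
  M_{ij}\,\ddot{q}_j + D_{ij}\,\dot{q}_j
  + K^{(1)}_{ij}\, q_j
  + K^{(2)}_{ijl}\, q_j q_l
  + K^{(3)}_{ijlp}\, q_j q_l q_p
  = F_i
\end{equation}
```

The identification challenge mentioned above is estimating the K coefficients reliably when the number of modes M is large, since the count of quadratic and cubic terms grows rapidly with M.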
Contributors: Perez, Ricardo Angel (Author) / Mignolet, Marc (Thesis advisor) / Oswald, Jay (Committee member) / Spottswood, Stephen (Committee member) / Peralta, Pedro (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
"Sensor Decade" has been labeled on the first decade of the 21st century. Similar to the revolution of micro-computer in 1980s, sensor R&D; developed rapidly during the past 20 years. Hard workings were mainly made to minimize the size of devices with optimal the performance. Efforts to develop the small size devices are mainly concentrated around Micro-electro-mechanical-system (MEMS) technology. MEMS accelerometers are widely published and used in consumer electronics, such as smart phones, gaming consoles, anti-shake camera and vibration detectors. This study represents liquid-state low frequency micro-accelerometer based on molecular electronic transducer (MET), in which inertial mass is not the only but also the conversion of mechanical movement to electric current signal is the main utilization of the ionic liquid. With silicon-based planar micro-fabrication, the device uses a sub-micron liter electrolyte droplet sealed in oil as the sensing body and a MET electrode arrangement which is the anode-cathode-cathode-anode (ACCA) in parallel as the read-out sensing part. In order to sensing the movement of ionic liquid, an imposed electric potential was applied between the anode and the cathode. The electrode reaction, I_3^-+2e^___3I^-, occurs around the cathode which is reverse at the anodes. Obviously, the current magnitude varies with the concentration of ionic liquid, which will be effected by the movement of liquid droplet as the inertial mass. With such structure, the promising performance of the MET device design is to achieve 10.8 V/G (G=9.81 m/s^2) sensitivity at 20 Hz with the bandwidth from 1 Hz to 50 Hz, and a low noise floor of 100 ug/sqrt(Hz) at 20 Hz.
Contributors: Liang, Mengbing (Author) / Yu, Hongyu (Thesis advisor) / Jiang, Hanqing (Committee member) / Kozicki, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality--particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities--those modalities that today's computerized devices and displays largely engage--have become overloaded, creating possibilities for distractions, delays and high cognitive load, which in turn can lead to a loss of situational awareness, increasing chances for life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that the skin is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory--specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework, inspired by natural spoken language, is proposed, called Somatic ABC's, for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very unique application areas: audio-described movies and motor learning. These applications were chosen as they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.
Contributors: McDaniel, Troy Lee (Author) / Panchanathan, Sethuraman (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The rheological properties at liquid-liquid interfaces are important in many industrial processes, such as the manufacture of foods, pharmaceuticals, cosmetics, and petroleum products. This dissertation focuses on the study of linear viscoelastic properties at liquid-liquid interfaces by tracking the thermal motion of particles confined at the interfaces. The technique of interfacial microrheology is first developed using one- and two-particle tracking. In one-particle interfacial microrheology, the rheological response at the interface is measured from the motion of individual particles. One-particle interfacial microrheology at polydimethylsiloxane (PDMS) oil-water interfaces depends strongly on the surface chemistry of different tracer particles. In contrast, by tracking the correlated motion of particle pairs, two-particle interfacial microrheology significantly minimizes the effects of tracer particle surface chemistry and particle size. Two-particle interfacial microrheology is further applied to study the linear viscoelastic properties of immiscible polymer-polymer interfaces. The interfacial loss and storage moduli at PDMS-polyethylene glycol (PEG) interfaces are measured over a wide frequency range. The zero-shear interfacial viscosity, estimated from the Cross model, falls between the bulk viscosities of the two individual polymers. Surprisingly, the interfacial relaxation time is observed to be an order of magnitude larger than that of the PDMS bulk polymers. To explore the fundamental basis of interfacial nanorheology, molecular dynamics (MD) simulations are employed to investigate nanoparticle dynamics. The diffusion of single nanoparticles in pure water and low-viscosity PDMS oils is reasonably consistent with the prediction of the Stokes-Einstein equation. To demonstrate the potential of nanorheology based on the motion of nanoparticles, the shear moduli and viscosities of the bulk phases and interfaces are calculated from single-nanoparticle tracking. Finally, the competing influences of nanoparticles and surfactants on other interfacial properties, such as interfacial thickness and interfacial tension, are also studied by MD simulations.
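For reference, passive particle-tracking microrheology rests on relating a tracer's mean-squared displacement to the response of the surrounding medium; in the simple bulk Newtonian limit this is the Stokes-Einstein relation (interfacial drag requires corrections, so the form below is illustrative only):

```latex
% Mean-squared displacement of a tracer of radius a in a fluid of
% viscosity \eta (d = dimensionality of the tracked motion):
%   \langle \Delta r^2(\tau) \rangle = 2 d D \tau
\begin{equation}
  D = \frac{k_B T}{6 \pi \eta a}
\end{equation}
% Generalized (viscoelastic) Stokes-Einstein analysis inverts the MSD
% to the complex modulus G^*(\omega), whose real and imaginary parts
% are the storage and loss moduli reported above.
```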
Contributors: Song, Yanmei (Author) / Dai, Lenore L (Thesis advisor) / Jiang, Hanqing (Committee member) / Lin, Jerry Y S (Committee member) / Raupp, Gregory B (Committee member) / Sierks, Michael R (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Social media platforms provide a rich environment for analyzing user behavior. Recently, deep learning-based methods have become the mainstream approach for social media analysis models involving complex patterns. However, these methods are susceptible to biases in the training data, such as participation inequality. Essentially, a mere 1% of users generate the majority of the content on social networking sites, while the remaining users, though engaged to varying degrees, tend to be less active in content creation and largely silent. These silent users consume and listen to information that is propagated on the platform. However, their voices, attitudes, and interests are not reflected in the online content, predisposing the decisions of current methods toward the opinions of the active users. As a result, models can mistake the loudest users for the majority. To make the silent majority heard is to reveal the true landscape of the platform. In this dissertation, to compensate for this bias in the data, which is related to user-level data scarcity, I introduce three pieces of research work. Two of the proposed solutions work with the data at hand, while the third tries to augment the current data. Specifically, the first approach modifies the weight of users' activity/interaction in the input space; the second approach re-weights the loss based on users' activity levels during downstream task training; and the third approach uses large language models (LLMs) and learns each user's writing behavior to expand the current data. In other words, by utilizing LLMs as a sophisticated knowledge base, this method aims to augment the silent users' data.
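As an illustration of the second approach, a training loss can be re-weighted so that samples from highly active users do not dominate. The sketch below is a hypothetical rendering of that idea; the function name, shapes, and weighting scheme are assumptions, not the dissertation's actual implementation.

```python
# Sketch: cross-entropy re-weighted by inverse user activity, so silent
# users are not drowned out by the most active 1%. Hypothetical example.
import torch
import torch.nn.functional as F

def activity_weighted_loss(logits, labels, activity_counts, alpha=1.0):
    """Cross-entropy where each sample is weighted by the inverse of the
    posting activity of the user who produced it."""
    # Down-weight highly active users; +1 avoids division by zero.
    weights = 1.0 / (activity_counts.float() + 1.0) ** alpha
    weights = weights / weights.mean()  # normalize to mean 1
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_sample).mean()

# Example: 4 samples from users with very different activity levels.
logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
activity = torch.tensor([500, 3, 40, 1])  # posts per user (hypothetical)
loss = activity_weighted_loss(logits, labels, activity)
```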
Contributors: Karami, Mansooreh (Author) / Liu, Huan (Thesis advisor) / Sen, Arunabha (Committee member) / Davulcu, Hasan (Committee member) / Mancenido, Michelle V. (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Deep neural networks have been shown to be vulnerable to adversarial attacks. Typical attack strategies alter authentic data subtly so as to obtain adversarial samples that resemble the original but otherwise would cause a network's misbehavior, such as a high misclassification rate. Various attack approaches have been reported, with some showing state-of-the-art performance in attacking certain networks. Meanwhile, many defense mechanisms have been proposed in the literature, some of which are quite effective at guarding against typical attacks. Yet most of these attacks fail when the targeted network modifies its architecture or uses another set of parameters, and vice versa. Moreover, the emergence of more advanced deep neural networks, such as generative adversarial networks (GANs), has made the situation more complicated, and the game between attack and defense continues. This dissertation aims at exploring the vulnerability of deep neural networks by investigating the mechanisms behind the success or failure of existing attack and defense approaches. Several deep learning-based approaches are proposed to study the problem from different perspectives. First, I developed an adversarial attack approach that explores the unlearned region of a typical deep neural network, which is often over-parameterized. Second, I proposed an end-to-end learning framework to analyze the images generated by different GAN models. Third, I developed a defense mechanism that secures a deep neural network against adversarial attacks with a defense layer consisting of a set of orthogonal kernels. Substantial experiments are conducted to unveil the potential factors that contribute to attack/defense effectiveness. The dissertation concludes with a discussion of possible future work toward achieving a robust deep neural network.
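To illustrate what "altering authentic data subtly" means in practice, here is the standard fast gradient sign method (FGSM) of Goodfellow et al.; note this is a well-known baseline attack shown for context, not the dissertation's unlearned-region approach.

```python
# FGSM: a single-step gradient attack that perturbs each input pixel by
# at most eps in the direction that increases the model's loss.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Return an adversarial example within an L-infinity ball of radius eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, then clip to valid range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```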
Contributors: Ding, Yuzhen (Author) / Li, Baoxin (Thesis advisor) / Davulcu, Hasan (Committee member) / Venkateswara, Hemanth Kumar Demakethepalli (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Modern data center networks require efficient and scalable security analysis approaches that can analyze the relationships between vulnerabilities. Utilizing Attack Representation Methods (ARMs) and Attack Graphs (AGs) enables the security administrator to understand the cloud network's current security situation at a low level. However, the AG approach suffers from scalability challenges. It relies on the connectivity between the services and the vulnerabilities associated with those services to allow the system administrator to realize the network's security state. In addition, the security policies created by the administrator can conflict with one another, which is often detected only in the data plane of the Software Defined Networking (SDN) system. Such conflicts can cause security breaches and increase the flow-rule processing delay. This dissertation addresses these challenges with novel solutions that tackle the scalability issue of Attack Graphs and detect security policy conflicts in the application plane before they are transmitted into the data plane for final installation. Specifically, it introduces a segmentation-based scalable security state (S3) framework for the cloud network. This framework utilizes the well-known divide-and-conquer approach to divide the large network region into smaller, manageable segments. It follows a segmentation approach derived from the K-means clustering algorithm to partition the system into segments based on the similarity between the services. Furthermore, the dissertation presents unified intent rules that abstract network administration from the underlying network controller's format. It develops a networking service solution that uses a bounded formal model for network service compliance checking, which significantly reduces the complexity of flow-rule conflict checking at the data plane level. The solution can be extended from a single SDN domain to multiple SDN domains and hybrid networks by applying network service function chaining (SFC) for inter-domain policy management.
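A minimal sketch of the segmentation step: cluster services into manageable segments by feature similarity with K-means, in the spirit of the divide-and-conquer partitioning described above. The feature set (open ports, CVSS statistics, connectivity degree) is a hypothetical choice for illustration.

```python
# Sketch: K-means partitioning of services into segments so that each
# segment's attack graph can be built independently, shrinking the
# state space any single analysis must cover. Feature set hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# One row per service: [num_open_ports, mean_cvss, max_cvss, degree]
services = np.array([
    [3, 5.1, 7.5, 12],
    [1, 2.0, 2.0,  2],
    [4, 6.3, 9.8, 20],
    [2, 2.5, 3.1,  3],
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(services)
print(segments)  # segment label assigned to each service
```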
Contributors: Sabur, Abdulhakim (Author) / Zhao, Ming (Thesis advisor) / Xue, Guoliang (Committee member) / Davulcu, Hasan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Due to the large data resources generated by online educational applications, Educational Data Mining (EDM) has improved learning in several ways: visualizing students, making recommendations for students, modeling students, grouping students, etc. Many programming-assignment systems offer features such as automated submission and test-case checking to verify correctness, but few studies have compared different statistical techniques with the latest frameworks and interpreted the resulting models in a unified approach.

In this thesis, several data mining algorithms are applied to analyze students' code-assignment submission data from a real classroom study. The goal of this work is to explore and predict students' performance. Multiple machine learning models were evaluated for accuracy, and their predictions were interpreted using Shapley Additive Explanations (SHAP).

Cross-validation shows that the Gradient Boosting Decision Tree achieves the best precision, 85.93%, with an average of 82.90%. Features such as component grade, due date, and number of submissions have a higher impact than others. The baseline model yields lower precision owing to its lack of non-linear fitting.
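A sketch of the modeling pipeline described above: cross-validated precision for a gradient-boosting classifier, followed by SHAP attribution of feature impact. The feature names and synthetic data are illustrative assumptions, not the study's actual dataset.

```python
# Sketch: cross-validated precision + SHAP feature impact for a
# gradient-boosting model. Synthetic data; features are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 3)  # [component_grade, days_to_due, n_submissions]
y = (X[:, 0] + 0.1 * np.random.randn(200) > 0.5).astype(int)  # pass/fail

model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="precision")
print(f"precision per fold: {scores}, mean: {scores.mean():.4f}")

# Attribute the fitted model's predictions to features with SHAP.
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # per-sample, per-feature impact
print(np.abs(shap_values).mean(axis=0))  # mean |impact| per feature
```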
Contributors: Tian, Wenbo (Author) / Hsiao, Ihan (Thesis advisor) / Bazzi, Rida (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2019