Research was conducted to observe the effect of the number of transparent covers and the refractive index on the performance of a domestic solar water heating system. Improving the efficiency of solar thermal systems is an ongoing challenge. The knowledge gained from this research makes it possible to optimize the number of transparent covers and the refractive index before developing a solar water heater with improved optical and thermal collector efficiency. Numerical simulations of the performance of a liquid flat-plate collector were conducted for July 21st and October 21st, from 8 am to 4 pm, with refractive index values of 1.1, 1.4, and 1.7 and with zero to three transparent covers. The formulation and solutions were implemented in MATLAB. The results demonstrate that the efficiency of the flat-plate collector increases with the number of covers and decreases as the refractive index rises. The greatest useful heat gain is obtained with three covers and a refractive index of 1.1.
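
The refractive-index trend reported above follows from reflection losses at the cover interfaces; the benefit of additional covers comes from suppressed top heat loss, which is not modeled here. As a minimal sketch (not the thesis's actual MATLAB code), the snippet below evaluates the textbook normal-incidence transmittance of N identical covers due to reflection alone, τ = (1 − ρ)/(1 + (2N − 1)ρ) with ρ = ((n − 1)/(n + 1))², neglecting absorption in the covers.

```python
# Minimal sketch: reflection-loss transmittance of N identical covers
# at normal incidence, tau = (1 - rho) / (1 + (2N - 1) * rho),
# where rho = ((n - 1) / (n + 1))**2 is the single-interface reflectance.
# Absorption within the covers is neglected (illustration only).

def cover_transmittance(n: float, covers: int) -> float:
    if covers == 0:
        return 1.0  # no cover, no reflection loss
    rho = ((n - 1.0) / (n + 1.0)) ** 2
    return (1.0 - rho) / (1.0 + (2 * covers - 1) * rho)

for n in (1.1, 1.4, 1.7):
    taus = [round(cover_transmittance(n, N), 3) for N in range(4)]
    print(f"n = {n}: tau for 0-3 covers = {taus}")
```

The printed values show why a lower refractive index (n = 1.1) preserves the most optical gain for any number of covers.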

Quantum resilience is a pragmatic theory that allows systems engineers to formally characterize the resilience of systems. As a generalized theory, it not only clarifies resilience in the literature, but also can be applied to all disciplines and domains of discourse. Operationalizing resilience in this manner permits decision-makers to compare and contrast system deployment options for suitability in a variety of environments and allows for consistent treatment of resilience across domains. Systems engineers, whether planning future infrastructures or managing ecosystems, are increasingly asked to deliver resilient systems. Quantum resilience provides a way forward that allows specific resilience requirements to be specified, validated, and verified.
Quantum resilience makes two very important claims. First, resilience cannot be characterized without recognizing both the system and the valued function it provides. Second, resilience is not about disturbances, insults, threats, or perturbations. To avoid crippling infinities, characterization of resilience must be accomplishable without disturbances in mind. In light of this, quantum resilience defines resilience as the extent to which a system delivers its valued functions, and characterizes resilience as a function of system productivity and complexity. System productivity vis-à-vis specified “valued functions” involves (1) the quanta of the valued function delivered, and (2) the number of systems (within the greater system) which deliver it. System complexity is defined structurally and relationally and is a function of a variety of items including (1) system-of-systems hierarchical decomposition, (2) interfaces and connections between systems, and (3) inter-system dependencies.
Among the important features of quantum resilience is that it can be implemented in any system engineering tool that provides sufficient design and specification rigor (i.e., one that supports standards like the Lifecycle and Systems Modeling languages and frameworks like the DoD Architecture Framework). Further, this can be accomplished with minimal software development and has been demonstrated in three model-based system engineering tools, two of which are commercially available, well-respected, and widely used. This pragmatic approach assures transparency and consistency in characterization of resilience in any discipline.
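
The abstract states only that resilience is *a function of* productivity and complexity, without giving the functional form. The following is therefore a purely illustrative sketch: the aggregation of quanta across delivering systems, the additive complexity count, and the ratio used to combine them are all hypothetical choices, not the theory's actual formula.

```python
# Purely illustrative sketch of a quantum-resilience-style score.
# ASSUMPTIONS (not from the source): productivity sums the quanta of a
# valued function across the systems that deliver it; complexity counts
# decomposition depth, interfaces, and dependencies; and resilience is
# taken as productivity per unit complexity. The theory itself only says
# resilience is *a function of* these two quantities.
from dataclasses import dataclass

@dataclass
class System:
    quanta_delivered: float   # quanta of the valued function it delivers
    depth: int                # hierarchical decomposition depth
    interfaces: int           # connections to other systems
    dependencies: int         # inter-system dependencies

def productivity(systems: list[System]) -> float:
    return sum(s.quanta_delivered for s in systems)

def complexity(systems: list[System]) -> float:
    return sum(s.depth + s.interfaces + s.dependencies for s in systems)

def resilience(systems: list[System]) -> float:
    return productivity(systems) / max(complexity(systems), 1.0)

fleet = [System(10.0, 2, 3, 1), System(8.0, 1, 2, 2)]
print(f"toy resilience score: {resilience(fleet):.3f}")
```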

Increasing interest in individualized treatment strategies for the prevention and treatment of health disorders has created a new application domain for dynamic modeling and control. Standard population-level clinical trials, while useful, are not the most suitable vehicle for understanding the dynamics relating dosage changes to patient response. A secondary analysis of intensive longitudinal data from a naltrexone intervention for fibromyalgia, examined in this dissertation, shows the promise of system identification and control. This includes data-centric identification methods such as Model-on-Demand, which are attractive techniques for estimating nonlinear dynamical systems from noisy data. These methods rely on generating a local function approximation from a database of regressors at the current operating point, with the process repeated at every new operating condition. This dissertation examines generating input signals for data-centric system identification by developing a novel framework of geometric distribution of regressors and time-indexed output points in finite-dimensional space to generate sufficient support for the estimator. The input signals are generated while imposing “patient-friendly” constraints on the design as a means to operationalize single-subject clinical trials. These optimization-based problem formulations are examined for linear time-invariant systems and block-structured Hammerstein systems, and the results are contrasted with alternative designs based on Weyl's criterion. Numerical solution of the resulting nonconvex optimization problems is proposed through semidefinite programming approaches for polynomial optimization and through nonlinear programming methods. It is shown that useful bounds on the objective function can be calculated through relaxation procedures, and that the data-centric formulations are amenable to sparse polynomial optimization. In addition, input design problems are formulated for achieving a desired output and a specified input spectrum. Numerical examples illustrate the benefits of the input signal design formulations, including a hypothetical clinical trial using the drug gabapentin. In the final part of the dissertation, the mixed logical dynamical framework for hybrid model predictive control is extended to incorporate a switching time strategy, where decisions are made at some integer multiple of the sample time, and manipulation of only one input among multiple inputs at a given sample time; these considerations are important for clinical use of the algorithm.
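
To make the Model-on-Demand idea concrete, here is a minimal sketch of the general approach described above: a locally weighted linear model fitted from the database regressors nearest the current operating point, re-fitted at each new query. The tricube kernel, nearest-neighbor bandwidth, and one-dimensional test signal are illustrative assumptions, not the dissertation's actual estimator.

```python
# Minimal sketch of a Model-on-Demand-style estimate: at each query
# (operating point), fit a locally weighted linear model using only the
# regressors in the database that lie close to the query.
import numpy as np

def mod_predict(X, y, x_query, k=20):
    X = np.atleast_2d(X)
    d = np.linalg.norm(X - x_query, axis=1)               # distance to query
    idx = np.argsort(d)[:k]                               # k nearest regressors
    h = d[idx].max() + 1e-12                              # local bandwidth
    w = (1 - (d[idx] / h) ** 3) ** 3                      # tricube weights
    Phi = np.hstack([np.ones((k, 1)), X[idx] - x_query])  # local affine basis
    W = np.diag(w)
    theta, *_ = np.linalg.lstsq(W @ Phi, W @ y[idx], rcond=None)
    return theta[0]                                       # intercept = prediction

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(500)  # noisy nonlinear data
print(mod_predict(X, y, np.array([0.5])))                 # ~ sin(1.0)
```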

This dissertation focused on the development and application of state-of-the-art monitoring tools and analysis methods for tracking the fate of trace-level contaminants in the natural and built water environments, using fipronil as a model compound; fipronil and its primary degradates (known collectively as fiproles) are among a group of trace-level emerging environmental contaminants that are extremely potent neurotoxins to arthropods. The work further aimed to fill data gaps regarding the presence and fate of fipronil in engineered water systems, specifically in a wastewater treatment plant (WWTP) and in an engineered wetland. A review of manual and automated “active” water sampling technologies motivated the development of two new automated samplers capable of in situ biphasic extraction of water samples across the bulk water/sediment interface of surface water systems. Combined with an optimized method for the quantification of fiproles, the newly developed In Situ Sampler for Biphasic water monitoring (IS2B) was deployed alongside conventional automated water samplers to study the fate and occurrence of fiproles in engineered water environments. Continuous sampling over two days and subsequent analysis yielded average total fiprole concentrations of 9.9 ± 4.6 to 18.1 ± 4.6 ng/L in wetland surface water and 9.1 ± 3.0 to 12.6 ± 2.1 ng/L in wetland sediment pore water. A mass balance of the WWTP located immediately upstream demonstrated unattenuated breakthrough of total fiproles through the WWTP, with 25 ± 3% conversion of fipronil to degradates, and only limited removal of total fiproles in the wetland (47 ± 13%). Extrapolation of local emissions (5–7 g/d) suggests nationwide annual fiprole loadings from WWTPs to U.S. surface waters on the order of one half to three quarters of a metric tonne. The qualitative and quantitative data collected in this work have regulatory implications, and the sampling tools and analysis strategies described in this thesis have broad applicability in the assessment of risks posed by trace-level environmental contaminants.
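
As an aside on method, the WWTP mass balance above reduces to comparing daily loads (concentration × flow) at the influent and effluent. The sketch below shows that arithmetic; the concentrations and plant flow are hypothetical values chosen for illustration and are not the study's data.

```python
# Hypothetical WWTP mass-balance arithmetic (illustrative numbers only):
# daily load = concentration * flow; removal = 1 - load_out / load_in.
ng_per_L_in, ng_per_L_out = 40.0, 38.0   # total fiproles, hypothetical
flow_m3_per_day = 150_000.0              # plant flow, hypothetical

load_in_g = ng_per_L_in * flow_m3_per_day * 1e3 * 1e-9   # ng/L -> g/d
load_out_g = ng_per_L_out * flow_m3_per_day * 1e3 * 1e-9
removal = 1.0 - load_out_g / load_in_g

print(f"influent load: {load_in_g:.1f} g/d, effluent load: {load_out_g:.1f} g/d")
print(f"apparent removal: {removal:.1%}")   # near zero => breakthrough
```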

The demand for cleaner energy technology is increasing rapidly, so it is important to increase the efficiency and reliability of these emerging clean energy technologies. This thesis focuses on the modeling and reliability of solar microinverters. To make photovoltaics (PV) cost competitive with traditional energy sources, economies of scale have been guiding inverter design in two directions: large, centralized, utility-scale (500 kW) inverters versus small, modular, module-level (300 W) power electronics (MLPE). MLPE, such as microinverters and DC power optimizers, offer advantages in safety, system operations and maintenance, energy yield, and component lifetime due to their smaller size, lower power handling requirements, and module-level power point tracking and monitoring capability [1]. However, they suffer from two main disadvantages: first, depending on array topology (especially proximity to the PV module), they can be subjected to more extreme environments (i.e., temperature cycling) during the day, with a negative impact on reliability; second, since solar installations can have tens of thousands to millions of modules (and as many MLPE units), it may be difficult or impossible to track and repair units as they go out of service. Identifying the weak links in such a system is therefore critical to developing more reliable microinverters.
While an overwhelming majority of time and research has focused on PV module efficiency and reliability, these issues have been largely ignored for the balance-of-system components. As a relatively nascent industry, the PV power electronics industry does not have the extensive, standardized reliability design and testing procedures that exist in the module industry or in other, more mature power electronics industries (e.g., automotive). To develop such procedures, the critical at-risk components and their impact on system performance have to be studied. This thesis identifies and addresses some of the issues related to the reliability of solar microinverters.
This thesis presents detailed discussions of the various components of a solar microinverter and their design. A microinverter with electrical specifications very similar to a commercial microinverter is modeled in detail and verified. The components in the various stages of the microinverter are listed and their typical failure mechanisms are reviewed. A detailed failure mode and effects analysis (FMEA) is conducted for a typical microinverter to identify the weak links of the system. Based on the severity (S), occurrence (O), and detection (D) metrics, a risk priority number (RPN) is calculated to rank the critical at-risk components. Degradation of the DC bus capacitor is identified as one of the failure mechanisms, and a degradation model is built to study its effect on system performance. The system is tested for surge immunity using standard ring and combination surge waveforms per the IEEE C62.41 and IEC 61000-4-5 standards. All simulations presented in this thesis are performed using the PLECS simulation software.
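
For readers unfamiliar with FMEA scoring, the RPN referenced above is conventionally the product of the severity, occurrence, and detection ratings, each typically on a 1-10 scale. A minimal sketch of that ranking step follows; the component names and ratings are hypothetical, not the thesis's actual FMEA results.

```python
# Minimal FMEA ranking sketch: RPN = severity * occurrence * detection,
# each conventionally rated 1-10. Components and ratings are hypothetical.
failure_modes = [
    # (component, severity, occurrence, detection)
    ("DC bus capacitor degradation", 8, 6, 5),
    ("MOSFET switch failure",        9, 3, 4),
    ("Solder joint fatigue",         7, 4, 7),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for component, s, o, d in ranked:
    print(f"RPN = {s * o * d:4d}  {component}")
```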

The International Organization for Standardization (ISO) documentation utilizes Fitts’ law to determine the usability of traditional input devices, such as mice and touchscreens, for one- or two-dimensional operations. To test the hypothesis that Fitts’ law can be applied to hand/air gesture-based computing inputs, Fitts’ multi-directional target acquisition task was applied to three gesture-based input devices that utilize different technologies and to two baseline devices, a mouse and a touchscreen. Three target distances and three target sizes were tested six times in a randomized order, with a randomized order of the five input technologies. Data from a total of 81 participants were collected for the within-subjects design study. Participants were instructed to perform the task as quickly and accurately as possible, according to traditional Fitts’ testing procedures. Movement time, error rate, and throughput for each input technology were calculated.
Additionally, no standards exist for equating user experience with Fitts’ measures such as movement time, throughput, and error count. To test the hypothesis that a user’s experience can be predicted using these measures, an ease-of-use rating on a 5-point scale for each input type was collected from each participant. The calculated Mean Opinion Scores (MOS) were regressed on Fitts’ measures of movement time, throughput, and error count to understand the extent to which they can predict a user’s subjective rating.
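
For context, the Fitts’ measures mentioned above are conventionally computed from the Shannon formulation of the index of difficulty, ID = log2(D/W + 1), with throughput TP = ID/MT. The sketch below illustrates that calculation; the condition values are hypothetical, and the study's actual computation (e.g., any effective-width correction) may differ.

```python
# Conventional Fitts' law measures (Shannon formulation):
# ID = log2(D/W + 1) bits, throughput = ID / MT (bits per second).
# Distances, widths, and movement times below are hypothetical.
import math

def index_of_difficulty(distance: float, width: float) -> float:
    return math.log2(distance / width + 1.0)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    return index_of_difficulty(distance, width) / movement_time_s

# hypothetical condition: 256-px target distance, 32-px target, 0.74 s mean MT
d, w, mt = 256.0, 32.0, 0.74
print(f"ID = {index_of_difficulty(d, w):.2f} bits")
print(f"TP = {throughput(d, w, mt):.2f} bits/s")
```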

Shunt capacitors are often added to transmission networks at suitable locations to improve the voltage profile. In this thesis, the transmission system in Arizona is considered as a test bed. Many shunt capacitors already exist in the Arizona transmission system, and more are planned to be added. The addition of these shunt capacitors may create resonance conditions in response to harmonic voltages and currents. Such resonance, if it occurs, may create problematic issues in the system. The main objective of this thesis is to identify potential problematic effects that could occur after placing new shunt capacitors at selected buses in the Arizona network. Part of the objective is to create a systematic plan for avoiding resonance issues.
For this study, a capacitance scan method is proposed. The bus admittance matrix is used as a model of the networked transmission system, and the calculations on the admittance matrix were done using MATLAB. The test bed is the actual transmission system in Arizona; however, for proprietary reasons, bus names are masked in the thesis copy intended for the public domain. The admittance matrix was obtained from data using the PowerWorld Simulator after equivalencing the 2016 summer peak load (planning) case. The full Western Electricity Coordinating Council (WECC) system data were used; the equivalencing procedure retains only the Arizona portion of the WECC.
The capacitor scan results for single-capacitor and multiple-capacitor placement cases are presented. Problematic cases are identified in the form of ‘forbidden zones.’ The harmonic voltage impact of known sources of harmonics, mainly large-scale HVDC sources, is also presented.
Specific key results of the study include:
• The forbidden zones obtained per the IEEE 519 standard indicate that bus 10 is the most problematic bus.
• The forbidden zones also indicate that the switching values for the switched shunt capacitor (if used) at bus 3 should be chosen carefully to avoid creating a resonance condition.
• The highest sensitivity to HVDC harmonic sources, 0.0033 per unit, was observed at bus 7 when all the HVDC sources were active simultaneously.
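
To illustrate the capacitance scan idea in the abstract above, the following is a minimal sketch: sweep a candidate capacitance at one bus, rebuild the bus admittance matrix at each harmonic order, and watch the driving-point impedance for resonance peaks. The three-bus network and all parameter values are hypothetical; the thesis's actual MATLAB implementation on the Arizona test bed is certainly more elaborate.

```python
# Minimal capacitance-scan sketch on a hypothetical 3-bus system:
# for each candidate capacitor size, build Ybus(h) at each harmonic h and
# record the driving-point impedance |Z_kk(h)|; a sharp peak signals a
# potential parallel resonance. All values are illustrative, in per unit.
import numpy as np

def ybus(h: float, c_pu: float) -> np.ndarray:
    """Bus admittance matrix at harmonic order h (fundamental = 1)."""
    y12 = 1.0 / (0.01 + 1j * 0.10 * h)   # line 1-2 series admittance, pu
    y23 = 1.0 / (0.01 + 1j * 0.08 * h)   # line 2-3
    Y = np.array([[y12, -y12, 0],
                  [-y12, y12 + y23, -y23],
                  [0, -y23, y23]], dtype=complex)
    Y[2, 2] += 1j * h * c_pu             # candidate shunt capacitor at bus 3
    Y[0, 0] += 1.0 / (1j * 0.05 * h)     # source impedance behind bus 1
    return Y

harmonics = np.arange(1, 26)             # up to the 25th harmonic
for c_pu in (0.5, 1.0, 2.0):             # candidate capacitor sizes
    zkk = [abs(np.linalg.inv(ybus(h, c_pu))[2, 2]) for h in harmonics]
    h_peak = harmonics[int(np.argmax(zkk))]
    print(f"C = {c_pu} pu: peak |Z33| = {max(zkk):.2f} pu at h = {h_peak}")
```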

Image segmentation is of great importance and value in many applications. In computer vision, image segmentation is the process of locating objects and boundaries within images, and the segmentation result can provide more meaningful image data. Generally, image segmentation algorithms fall into two fundamental categories: discontinuity-based and similarity-based. Discontinuity-based methods locate abrupt changes in image intensity, as are often seen at edges or boundaries. Similarity-based methods subdivide an image into regions that fit pre-defined criteria. The algorithm developed in this thesis belongs to the second category.
This study addresses the problem of particle image segmentation by measuring the similarity between a sampled region and an adjacent region, based on the Bhattacharyya distance and an image feature extraction technique that uses distributions of local binary patterns and pattern contrasts. A boundary smoothing process is developed to improve the accuracy of the segmentation. The novel particle image segmentation algorithm is tested on four different cases of particle image velocimetry (PIV) images and partitions the objects within a 10 percent error rate. Ground-truth segmentation data, obtained by manually segmenting an image from each case, are used to calculate the error rate of the segmentations.
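
As a concrete illustration of the similarity measure described above, the sketch below compares local binary pattern (LBP) histograms of two regions via the Bhattacharyya coefficient BC = Σ√(pᵢqᵢ) and distance d = −ln BC. The plain 8-neighbor LBP variant and the random test regions are assumptions for illustration; the thesis's actual feature, which also uses pattern contrasts, is richer.

```python
# Sketch: region similarity from local binary pattern (LBP) histograms
# compared with the Bhattacharyya distance d = -ln(sum(sqrt(p * q))).
# The plain 8-neighbor LBP and random regions are illustrative only.
import numpy as np

def lbp_histogram(region: np.ndarray) -> np.ndarray:
    """Normalized 256-bin histogram of 8-neighbor LBP codes."""
    c = region[1:-1, 1:-1]                      # interior (center) pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):     # one bit per neighbor
        nb = region[1 + dy:region.shape[0] - 1 + dy,
                    1 + dx:region.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def bhattacharyya_distance(p: np.ndarray, q: np.ndarray) -> float:
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (64, 64)).astype(np.uint8)   # hypothetical region A
b = rng.integers(0, 256, (64, 64)).astype(np.uint8)   # hypothetical region B
print(f"d(A, B) = {bhattacharyya_distance(lbp_histogram(a), lbp_histogram(b)):.4f}")
print(f"d(A, A) = {bhattacharyya_distance(lbp_histogram(a), lbp_histogram(a)):.4f}")
```

Identical regions give a distance near zero; in the segmentation setting, a small distance between a sampled region and its neighbor indicates they belong to the same partition.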

Engineering education can provide students with the tools to address complex, multidisciplinary grand challenge problems in sustainable and global contexts. However, engineering education faces several challenges, including low diversity percentages, high attrition rates, and the need to better engage and prepare students for the role of a modern engineer. These challenges can be addressed by integrating sustainability grand challenges into engineering curriculum.
Two main strategies have emerged for integrating sustainability grand challenges. In the stand-alone course method, engineering programs establish one or two distinct courses that address sustainability grand challenges in depth. In the module method, engineering programs integrate sustainability grand challenges throughout existing courses. Neither method has been assessed in the literature.
This thesis aimed to develop sustainability modules, to create methods for evaluating the modules’ effectiveness on student cognitive and affective outcomes, to create methods for evaluating students’ cumulative sustainability knowledge, and to evaluate the stand-alone course method to integrate sustainability grand challenges into engineering curricula via active and experiential learning.
Deployment of the Sustainable Metrics Module, designed to teach sustainability concepts and to engage and motivate diverse sets of students, revealed that the activity portion of the module had the greatest impact on learning-outcome retention.
The Game Design Module, which addressed methods for assessing student mastery of course content through student-developed games, indicated that using board game design improved student performance and increased student satisfaction.
Evaluation of senior design capstone projects via a novel comprehensive rubric, developed to assess the sustainability knowledge students accumulate over their curriculum, revealed that student performance is primarily driven by the instructor’s expectations. The rubric provided a universal tool for assessing students’ sustainability knowledge and could also be applied to sustainability-focused projects.
With this in mind, engineering educators should pursue modules that connect sustainability grand challenges to engineering concepts, because student performance improves and students report higher satisfaction. Instructors should utilize pedagogies that engage diverse students and impact concept retention, such as active and experiential learning. When evaluating the impact of sustainability in the curriculum, innovative assessment methods should be employed to understand student mastery and application of course concepts and the impacts that topics and experiences have on student satisfaction.

Many physical phenomena and industrial applications involve multiphase fluid flows, and it is therefore important to be able to simulate various aspects of these flows accurately. Dynamic contact angles (DCA) and the contact lines at wall boundaries are two such important aspects. In the past few decades, many mathematical models have been developed for predicting the contact angle of the interface with a wall boundary under various flow conditions. These models are used to incorporate the physics of the DCA and of contact line motion into numerical simulations based on various interface capturing/tracking techniques. In this thesis, a simple approach to incorporating static and dynamic contact angle boundary conditions using the level set method is developed and implemented in the multiphase CFD codes LIT (Level set Interface Tracking) (Herrmann (2008)) and NGA (flow solver) (Desjardins et al. (2008)). Various DCA models and the associated boundary conditions are reviewed. In addition, numerical aspects such as the occurrence of a stress singularity at the contact lines and the grid convergence of the macroscopic interface shape are addressed in the context of the level set approach.
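
As one concrete example of the class of DCA models reviewed above, the widely used Cox-Voinov hydrodynamic relation gives the apparent dynamic angle as θ_d³ ≈ θ_e³ + 9 Ca ln(L/λ), where Ca = μU/σ is the capillary number and L/λ is a macroscopic-to-microscopic length ratio. The sketch below evaluates it; the choice of Cox-Voinov (rather than, say, Kistler's correlation) and all parameter values are illustrative assumptions, not necessarily the model used in the thesis.

```python
# Cox-Voinov dynamic contact angle sketch:
# theta_d**3 = theta_e**3 + 9 * Ca * ln(L / lam), angles in radians,
# valid for small capillary number. All parameter values are illustrative.
import math

def cox_voinov(theta_e_deg: float, mu: float, U: float, sigma: float,
               L_over_lam: float = 1e4) -> float:
    """Apparent dynamic contact angle (degrees) for an advancing contact line."""
    ca = mu * U / sigma                       # capillary number
    theta_e = math.radians(theta_e_deg)       # equilibrium (static) angle
    theta_d = (theta_e ** 3 + 9.0 * ca * math.log(L_over_lam)) ** (1.0 / 3.0)
    return math.degrees(theta_d)

# water-like fluid: mu = 1e-3 Pa.s, sigma = 0.072 N/m, wall speed 0.05 m/s
print(f"theta_d = {cox_voinov(30.0, 1e-3, 0.05, 0.072):.1f} deg")
```

In a level set implementation, a relation of this kind supplies the target angle that the boundary condition on the level set function must enforce at the wall.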