Teaching is a demanding career that carries many challenges, some of which lie beyond the educator's control yet still influence their ability to teach. Through seminars and discussions at Arizona State University's (ASU) Barrett, The Honors College, centered on collaboration and learning, students were introduced to what it means to "truly" teach from both the student's and the educator's perspective. Teaching is more than an exchange of information; it requires a human connection. While most educators agree that connection is vital, classrooms still face challenges that impact families across generations. Daoism, an ancient Chinese philosophy, engages with concepts such as mindfulness, leadership, and introspection. Educators can use Daoist philosophy as a tool to reflect on and develop their ability to teach with vulnerability, openness, and interconnectedness. From a philosophical standpoint, Lao Tzu, the traditional founder of Daoism, explains the importance of shifting one's perspective toward what the individual can control: themselves. Teachers must create a classroom dynamic that is not only engaging but also gives students a sense of autonomy over their education. Shifting the dynamic from teacher-centered to student-centered places the education in the students' hands and relieves some of the pressure on the teacher. Embedding Daoist philosophy into the classroom can be seamless, as it already appears in Social Emotional Learning, Culturally Relevant Curriculum, and Deep Learning.
This work addresses the following four problems: (i) Will a blockage occur in the near future? (ii) When will this blockage occur? (iii) What is the type of the blockage? (iv) What is the direction of the moving blockage? The proposed solution utilizes deep neural networks (DNNs) as well as non-machine-learning (non-ML) algorithms. At the heart of the proposed method is identifying characteristic patterns in the received signal and sensory data that appear before a blockage occurs (pre-blockage signatures) and using these signatures to infer future blockages. To evaluate the proposed approach, real-world datasets are first built for both in-band mmWave systems and LiDAR-aided mmWave systems, based on the DeepSense 6G structure. In particular, for the in-band mmWave system, two real-world datasets are constructed -- one for an indoor scenario and the other for an outdoor scenario -- and DNN models are developed to proactively predict incoming blockages in both. For LiDAR-aided blockage prediction, a large-scale real-world dataset that includes co-existing LiDAR and mmWave communication measurements is constructed for outdoor scenarios. An efficient LiDAR data denoising (static cluster removal) algorithm is then designed to filter noise from the dataset, and finally a non-ML method and a DNN model are developed to proactively predict dynamic link blockages. Experiments using the in-band mmWave datasets show that the proposed approach can successfully predict the occurrence of future dynamic blockages (up to 5 s ahead) with more than 80% accuracy in the indoor scenario. For the outdoor scenario with highly mobile vehicular blockages, the proposed model can predict the exact time of a future blockage with less than 100 ms error for blockages occurring within the next 600 ms, and it can also predict the size and moving direction of the blockages. For the co-existing LiDAR and mmWave real-world dataset, the LiDAR-aided approach achieves above 95% accuracy in predicting blockages occurring within 100 ms and more than 80% accuracy for blockages occurring within one second. In the same outdoor vehicular setting, it predicts the exact time of a future blockage with less than 150 ms error for blockages occurring within one second, classifies the blockage type with above 92% accuracy, and predicts the blockage's moving direction with above 90% accuracy. The proposed solutions can potentially provide an order-of-magnitude saving in network latency, highlighting a promising approach to the blockage challenges of mmWave/sub-THz networks.
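The abstract leaves the denoising step at a high level; as a rough, hypothetical illustration of what a "static cluster removal" pass could look like, the sketch below masks LiDAR range bins whose readings stay nearly constant across frames, so that only dynamic returns (potential moving blockers) remain. The function name, tolerance, and data layout are assumptions, not the thesis's actual algorithm.

```python
# Hypothetical sketch of static cluster removal for LiDAR scans:
# a range bin whose reading barely changes across frames is treated
# as static background and masked out, leaving only dynamic returns.
import numpy as np

def remove_static_clusters(scans: np.ndarray, tol: float = 0.1) -> np.ndarray:
    """scans: (num_frames, num_angle_bins) range readings in meters.
    Returns a copy with static returns replaced by NaN."""
    static_mask = scans.std(axis=0) < tol      # per-bin variability test
    denoised = scans.astype(float).copy()
    denoised[:, static_mask] = np.nan          # drop static background
    return denoised

# Toy example: 5 frames x 8 angle bins; bin 2 holds an approaching object.
rng = np.random.default_rng(0)
frames = np.full((5, 8), 10.0) + rng.normal(0.0, 0.01, (5, 8))
frames[:, 2] = np.linspace(9.0, 5.0, 5)       # moving blocker closes in
print(remove_static_clusters(frames))
```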
Breast cancer is one of the most common types of cancer worldwide, and early detection and diagnosis are crucial for improving the chances of successful treatment and survival. In this thesis, several machine learning algorithms were evaluated and compared for predicting breast cancer malignancy from diagnostic features extracted from digitized images of breast tissue samples obtained by fine-needle aspiration. Breast cancer diagnosis typically involves a combination of mammography, ultrasound, and biopsy; machine learning algorithms can assist in detection and diagnosis by analyzing large amounts of data and identifying patterns that may not be discernible to the human eye. By using these algorithms, healthcare professionals can potentially detect breast cancer at an earlier stage, leading to more effective treatment and better patient outcomes. The results showed that the gradient boosting classifier performed best, achieving 96% accuracy on the test set, indicating that this algorithm can be a useful tool for healthcare professionals in the early detection and diagnosis of breast cancer.
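The thesis's exact pipeline and hyperparameters are not given in the abstract; for context, here is a minimal sketch of the kind of experiment it describes, using scikit-learn's bundled Wisconsin Diagnostic Breast Cancer dataset (features computed from digitized fine-needle aspirate images). The 96% figure above is the thesis's own result, not a guarantee of what this sketch will produce.

```python
# Minimal sketch: gradient boosting on fine-needle aspirate features.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)          # 569 samples, 30 features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```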
The aim of this project is to understand the basic algorithmic components of the transformer deep learning architecture. At a high level, a transformer is a deep learning model built around a self-attention mechanism, which weighs the most significant parts of sequential input data; this makes it very useful for solving problems in natural language processing and computer vision. Earlier approaches to these problems (e.g., convolutional neural networks and recurrent neural networks) suffer from the vanishing gradient problem when an input becomes too long (which essentially means the network loses its memory and stops learning) and are generally slow to train. The transformer architecture's design enables a much better "memory" and faster training, making it better suited to these problems. Most of this project will be spent producing a survey that captures the current state of research on the transformer, along with the background material needed to understand it. First, I will do a keyword search of the most well-cited and up-to-date peer-reviewed publications on transformers to understand them conceptually. Next, I will investigate the programming frameworks required to implement the architecture, and use them to implement a simplified version of the architecture or follow an accessible guide or tutorial. Once the programming aspect of the architecture is understood, I will implement a transformer based on the academic paper "Attention Is All You Need," then slightly tweak this model using my understanding of the architecture to improve performance. Once finished, the details (i.e., successes, failures, process, and inner workings) of the implementation will be evaluated and reported, along with the fundamental concepts surveyed. The motivation behind this project is to explore the rapidly growing area of AI algorithms; the transformer in particular was chosen because it is a major milestone for engineering with AI and software. Since their introduction, transformers have provided a very effective way of solving natural language processing tasks, allowing related applications to run quickly while maintaining accuracy. The same type of model has since been applied to more cutting-edge applications, such as extracting semantic information from a text description and generating an image that satisfies it.
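The centerpiece of "Attention Is All You Need" is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, which is the operation the planned implementation will build on. Below is a minimal NumPy sketch of that formula (a toy illustration, not the project's eventual implementation):

```python
# Scaled dot-product attention from "Attention Is All You Need".
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)        # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V, weights

# Toy self-attention: 4 tokens, model dimension 8, Q = K = V.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, attn = scaled_dot_product_attention(x, x, x)
print(out.shape, attn.shape)                              # (4, 8) (4, 4)
```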
This research paper explores the effects of data variance on the quality of artificial intelligence image generation models and on viewers' perception of the generated images. The study examines how the quality and accuracy of the images produced by these models are influenced by factors such as the size, labeling, and format of the training data. The findings suggest that reducing the training dataset size leads to a decrease in image coherence: generation quality degrades as the training dataset shrinks. The study also reports unexpected behavior from image generation models trained on highly varied datasets. In addition, the study includes a survey in which participants were asked to rate the subjective realism of the generated images on a scale from 1 to 5 and to sort the images into their respective classes. The findings emphasize the importance of treating dataset variance and size as critical aspects of improving image generation models, as well as the implications of using AI technology in the future.
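The abstract does not detail how the smaller training sets were built; one straightforward way to run the dataset-size ablation it describes is to train the same model on progressively smaller random subsets of the data, as in this hypothetical sketch (the dataset, model, and coherence metric are stand-ins):

```python
# Hypothetical dataset-size ablation: reproducible random subsets.
import random

def make_subsets(dataset, fractions=(1.0, 0.5, 0.25, 0.1), seed=0):
    """Return reproducible random subsets of the training data."""
    rng = random.Random(seed)
    shuffled = dataset[:]
    rng.shuffle(shuffled)
    return {f: shuffled[: max(1, int(f * len(shuffled)))] for f in fractions}

# Stand-in dataset of (image path, class label) pairs.
dataset = [(f"img_{i}.png", i % 5) for i in range(1000)]
for frac, subset in make_subsets(dataset).items():
    print(f"{frac:>4}: {len(subset)} training examples")
    # model = train_generator(subset)       # hypothetical training call
    # score = rate_coherence(model)         # hypothetical quality metric
```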