The pandemic that hit in 2020 boosted the growth of online learning, including a boom in Massive Open Online Courses (MOOCs). In this situation, it is helpful to have tools that can help students choose between courses and help instructors understand what students need. One such tool is an online course rating predictor. Using the predictor, online course instructors can learn which qualities the majority of course takers deem important and adjust their lesson plans to fit those qualities, while students can compare predicted ratings when choosing a course to take. This research aims to find the best way to predict the rating of online courses using machine learning (ML). To create the ML model, different combinations of the following inputs are used: the length of the course, the number of materials it contains, its price, the number of students taking it, its difficulty level, the use of jargon or technical terms in the course description, the instructors' rating, the number of reviews the instructors have received, and the number of classes the instructors have created on the same platform. The output of the model is the average rating of a course. Data from 350 courses are used, with 280 for training, 35 for testing, and the remaining 35 for validation. After trying out different machine learning models, the wide neural network model consistently gives the best training results, while the medium tree model gives the best testing results. However, further research needs to be conducted, as none of the results are sufficiently accurate; the tree model achieves an R-squared of only 0.51 on the test set.
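The setup described in this abstract can be sketched as a standard supervised regression pipeline. The sketch below is a minimal illustration only: the feature names follow the abstract's list of inputs, but the data here are synthetic stand-ins (the paper's actual 350-course dataset and preprocessing are not available), and "medium tree" is interpreted as a depth-limited scikit-learn `DecisionTreeRegressor`, which is an assumption.

```python
# Hypothetical sketch of the course-rating predictor described in the abstract.
# Synthetic data; feature columns mirror the nine inputs listed in the text.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 350  # abstract: 280 train / 35 test / 35 validation

X = np.column_stack([
    rng.uniform(1, 40, n),          # course length (hours)
    rng.integers(5, 200, n),        # number of materials
    rng.uniform(0, 200, n),         # price
    rng.integers(10, 100000, n),    # number of students
    rng.integers(1, 4, n),          # difficulty level
    rng.uniform(0, 1, n),           # jargon/technical terms in description
    rng.uniform(3, 5, n),           # instructors' rating
    rng.integers(0, 10000, n),      # instructors' review count
    rng.integers(1, 50, n),         # instructors' number of classes
])
y = rng.uniform(2.5, 5.0, n)        # output: average course rating

# Same 280/35/35 split as described in the abstract.
X_train, y_train = X[:280], y[:280]
X_test,  y_test  = X[280:315], y[280:315]
X_val,   y_val   = X[315:], y[315:]

# "Medium tree" interpreted here as a depth-limited decision tree (assumption).
model = DecisionTreeRegressor(max_depth=8, random_state=0)
model.fit(X_train, y_train)
test_r2 = r2_score(y_test, model.predict(X_test))
print("test R-squared:", test_r2)
```

On real data, the R-squared on the held-out test set would correspond to the 0.51 figure the abstract reports for the tree model; here it is meaningless because the targets are random.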
Human team members show a remarkable ability to infer the state of their partners and anticipate their needs and actions. Prior research demonstrates that an artificial system can make some predictions accurately concerning artificial agents. This study investigated whether an artificial system could generate a robust Theory of Mind of human teammates. An urban search and rescue (USAR) task environment was developed to elicit human teamwork and evaluate inference and prediction about team members by software agents and humans. The task varied team members' roles and skills, types of task synchronization and interdependence, task risk and reward, completeness of mission planning, and information asymmetry. The task was implemented in Minecraft™ and applied in a study of 64 teams, each with three remotely distributed members. An evaluation of six Artificial Social Intelligence (ASI) agents and several human observers addressed the accuracy with which each predicted team performance, inferred experimentally manipulated knowledge of team members, and predicted member actions. All agents performed above chance; humans slightly outperformed ASI agents on some tasks and significantly outperformed them on others; no single ASI agent reliably outperformed the others; and the accuracy of both ASI agents and human observers improved rapidly though modestly during the brief trials.
This research paper explores the effects of data variance on the quality of artificial intelligence image generation models and the impact on a viewer's perception of the generated images. The study examines how the quality and accuracy of the images produced by these models are influenced by factors such as the size, labeling, and format of the training data. The findings suggest that reducing the training dataset size leads to a decrease in image coherence. The study also reports unexpected behavior from image generation models trained on highly varied datasets. In addition, the study includes a survey in which people were asked to rate the subjective realism of the generated images on a scale from 1 to 5 and to sort the images into their respective classes. The findings emphasize the importance of dataset variance and size as critical factors in improving image generation models, as well as the implications of using AI technology in the future.