The problems addressed by the philosophy of mind arise anew when we consider the possibility of consciousness in artificial and non-biological systems. In this thesis I adapt traditional theories of mind and theories of meaning in natural language to the new problems posed by these non-human systems, attempting answers to the questions: Can a given system think? Can a given system have subjective experiences? Can a given system have intentionality? Together these questions capture most of the typical features of consciousness discussed in the literature. Hence, answers to them have the potential to form the basis for a robust and practical future theory of consciousness in non-human systems, and I argue that the broad classes of functionalist and emergentist theories of mind are the ones most deserving of further attention in the literature. The answers given in this thesis through the lenses of these two classes of theories are not mutually exclusive, and may interact with or support one another. The functionalist account tells us that a system can be thinking, sentient, and intentional just in case it exhibits the right structure, and the emergentist account tells us how that structure might arise from prior systems of sufficient complexity. What these necessary structures or complexities are depends on which functionalist and emergentist accounts we accept, and so this thesis also addresses some of the possibilities allowed for by particular variants of these theories. What we obtain, in the end, are prima facie reasons for believing that certain systems can be conscious in the ways described above.