Creating Automated Virtual Humans
Abstract
Virtual Humans (VHs) are highly efficient and effective task-oriented tools for a variety of social and collaborative environments. VHs can replicate human-like verbal (speech) and non-verbal (gestures, facial expressions) interactive behaviors. They are presently used in training, testing, and communication-skills practice, with the added advantages of fidelity of presentation; the ability to portray a much wider range of personality, appearance, emotions, and behavior; and the potential to be available anywhere, at any time, and at low cost. My research objective is to develop intelligent VHs able to portray emotions and generate behaviors based on their history, education, personal experiences, and cognitive state of mind. Emotion, described as a state of feeling in the sense of an affect, is often intertwined with mood, temperament, personality, disposition, and motivation. Over the years, various agent architectures have been developed that use computational models of emotion to drive the behavior of VHs. A common deficit across these architectures is their inability to incorporate an agent's personal experiences, history, and education into the decision-making process for behavior generation. To this end, an agent architecture called the Culturally Modified Agent Architecture (CMAA) was developed to generate autonomous VHs whose cognitive state of mind is driven by three main factors: 1) the agent's beliefs (past history and personal experiences), 2) the agent's personality, and 3) the agent's mood.
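The three factors above can be pictured as a simple agent-state record. This is a hypothetical sketch only; the field names and representations (trait scores, a PAD mood point) are illustrative assumptions, not the dissertation's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Illustrative container for the three CMAA factors (assumed names)."""
    # 1) Beliefs: past history and personal experiences, here as weighted facts.
    beliefs: dict[str, float] = field(default_factory=dict)
    # 2) Personality: e.g. trait scores on some personality inventory.
    personality: dict[str, float] = field(default_factory=dict)
    # 3) Mood: e.g. a point in Pleasure-Arousal-Dominance (PAD) space.
    mood: tuple[float, float, float] = (0.0, 0.0, 0.0)
```

For example, a Veteran agent might carry a strongly weighted belief such as `beliefs["deployment_stressful"] = 0.9`, which the architecture could then consult when appraising new events.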
The two main components driving the CMAA are the appraisal model (responsible for appraising events based on the agent's beliefs) and the emotion model (responsible for generating a set of emotions based on appraisal variables and PAD (Pleasure-Arousal-Dominance) rules of personality and emotion). To test the feasibility of developing autonomous, intelligent VHs with the proposed CMAA, a VH prototype was implemented in a clinical setting. This prototype, termed the VSP (Virtual Standardized Patient), portrays an OEF (Operation Enduring Freedom)/OIF (Operation Iraqi Freedom)/OND (Operation New Dawn) Veteran exhibiting symptoms associated with mild TBI (Traumatic Brain Injury) and was developed as a screening tool for evaluation purposes. Two versions of the VSP currently exist: Version 1, in which the VSP's behavior and emotions are scripted by experts based on observations of a typical screening of a mild TBI patient, and Version 2, in which the behavior and emotions are automated and driven by the CMAA. Evaluation studies were designed to 1) test the validity and believability of the VSP portraying symptoms of mild TBI and 2) test the effectiveness of the VSP as a training tool for practicing diagnostic evaluation and improving communication between patient and provider in a clinical setting. This dissertation presents an in-depth review of the development and social impact of VHs across various domains. It describes the workings of the CMAA and its structural components for creating personalized, autonomous, intelligent agents. The dissertation then describes the design and development of the VSP as a diagnostic training tool for the evaluation of mild TBI. Lastly, the evaluation studies and their results are described in depth, and the dissertation concludes by highlighting potential directions for future work.
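The appraisal-model-to-emotion-model pipeline can be sketched as follows. This is a minimal illustration, not the dissertation's actual rules: the appraisal variables, their mapping to PAD space, and the octant labels (Mehrabian's eight PAD temperament octants) are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    """Hypothetical appraisal variables produced from an event and the agent's beliefs."""
    desirability: float     # [-1, 1]: how desirable the event is to the agent
    expectedness: float     # [0, 1]:  how expected the event was
    controllability: float  # [0, 1]:  how much control the agent has over it

def appraisal_to_pad(a: Appraisal) -> tuple[float, float, float]:
    """Map appraisal variables to a PAD point (illustrative mapping only)."""
    pleasure = a.desirability          # desirable events feel pleasant
    arousal = 1.0 - a.expectedness     # surprising events are arousing
    dominance = 2.0 * a.controllability - 1.0  # control maps to dominance
    return pleasure, arousal, dominance

# Mehrabian's temperament octants, indexed by the sign of (P, A, D).
OCTANTS = {
    (+1, +1, +1): "exuberant", (-1, -1, -1): "bored",
    (+1, +1, -1): "dependent", (-1, -1, +1): "disdainful",
    (+1, -1, +1): "relaxed",   (-1, +1, -1): "anxious",
    (+1, -1, -1): "docile",    (-1, +1, +1): "hostile",
}

def classify(p: float, a: float, d: float) -> str:
    """Name the PAD octant a point falls into."""
    sign = lambda x: +1 if x >= 0 else -1
    return OCTANTS[(sign(p), sign(a), sign(d))]
```

For instance, a highly desirable, surprising, and controllable event yields a point in the (+P, +A, +D) octant, labeled "exuberant"; an undesirable, fully expected event with little control lands in (-P, +A, -D), labeled "anxious". In the full architecture, such a label would then drive the agent's behavior generation rather than be reported directly.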