It has long been recognized that facial expressions, body posture/gestures and vocal parameters play an important role in human communication and the implicit signalling of emotion. Recent advances in low cost computer vision and behavioral sensing technologies can now be applied to the process of making meaningful inferences as to user state when a person interacts with a computational device. Effective use of this additive information could serve to promote human interaction with virtual human (VH) agents that may enhance diagnostic assessment. The same technology could also be leveraged to improve engagement in teletherapy approaches between remote patients and care providers. This paper will focus on our current research in these areas within the DARPA-funded “Detection and Computational Analysis of Psychological Signals” project, with specific attention to the SimSensei application use case. SimSensei is a virtual human interaction platform that is able to sense and interpret r...
We present one of the first applications of virtual humans in Augmented Reality (AR), giving young adults with Autism Spectrum Disorder (ASD) the opportunity to practice job interviews. It uses the Magic Leap headset's AR hardware sensors to provide users with immediate feedback on six different metrics, including eye gaze, blink rate, and head orientation. The system provides two characters, with three conversational modes each. Ported from an existing desktop application, the main development lessons learned were: 1) provide users with navigation instructions in the user interface, 2) avoid dark colors, as they are rendered transparently, 3) use dynamic gaze so characters maintain eye contact with the user, 4) use hardware sensors like eye gaze to provide user feedback, and 5) use surface detection to place characters dynamically in the world.
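Development lesson 3 above, dynamic gaze, amounts to re-aiming the character's head toward the user's tracked head pose on every frame. The following Python fragment is a minimal sketch of that calculation under stated assumptions: the coordinate convention, the sample positions, and the function names are illustrative, not the Magic Leap SDK or the application's actual code.

```python
import math

def look_at_angles(char_head, user_head):
    """Return (yaw, pitch) in degrees that orient a character's head
    from char_head toward user_head (both (x, y, z) world positions,
    y-up, character initially facing +z)."""
    dx = user_head[0] - char_head[0]
    dy = user_head[1] - char_head[1]
    dz = user_head[2] - char_head[2]
    yaw = math.degrees(math.atan2(dx, dz))                      # left/right rotation
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))    # up/down rotation
    return yaw, pitch

def clamp(value, limit):
    """Keep rotations within a comfortable, human-like range."""
    return max(-limit, min(limit, value))

# Example: each frame, read the headset's reported head pose and
# steer the character's head toward it (values are hypothetical, in meters).
user_head_position = (0.3, 1.6, 2.0)
character_head_position = (0.0, 1.5, 0.0)

yaw, pitch = look_at_angles(character_head_position, user_head_position)
yaw, pitch = clamp(yaw, 60.0), clamp(pitch, 30.0)
print(f"rotate character head: yaw={yaw:.1f} deg, pitch={pitch:.1f} deg")
```

In practice the resulting angles would be smoothed over time and applied to the character's head and eye bones so the gaze shift looks natural rather than instantaneous.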
Virtual humans are computer-generated characters designed to look and behave like real people. Studies have shown that virtual humans can mimic many of the social effects found in human-human interactions, such as creating rapport, and that people respond to virtual humans in ways similar to how they respond to real people. We believe that virtual humans represent a new metaphor for interacting with computers, one in which working with a computer becomes much like interacting with a person; this can bring social elements to the interaction that are not easily supported by conventional interfaces. We present two systems that embody these ideas. The first, the Twins, are virtual docents at the Museum of Science, Boston, designed to engage visitors and raise their awareness and knowledge of science. The second, SimCoach, uses an empathetic virtual human to provide veterans and their families with information about PTSD and depression.
New technologies and assistive devices are continually being introduced to improve the lives of people with disabilities. This project focuses on aiding mute individuals in expressing their views and ideas more efficiently and effectively, thereby helping them find their own place in the world. The proposed system traces gestures into text, which is in turn transformed into speech. Gesture identification and mapping are performed by the Kinect device, which has been found to be cost-effective and reliable. A suitable text-to-speech converter is used to translate the text generated from the Kinect into speech. Owing to hardware constraints, the proposed system cannot be applied to person-to-person conversation, but it can be very useful in addressing environments such as auditoriums and classrooms.
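The abstract describes a three-stage pipeline: gestures are recognized by the Kinect, mapped to text, and the text is handed to a text-to-speech converter. The Python sketch below illustrates that flow only; the gesture labels, phrase table, and recognize_gesture() stub are hypothetical, and pyttsx3 is used merely as one example of an off-the-shelf offline text-to-speech engine, not the converter used in the project.

```python
# Minimal sketch of the gesture -> text -> speech pipeline described above.
import pyttsx3  # one readily available offline text-to-speech engine

# Hypothetical mapping from recognized gesture labels to spoken phrases.
GESTURE_PHRASES = {
    "raise_right_hand": "I have a question.",
    "wave": "Hello, everyone.",
    "point_left": "Please look at the board.",
}

def recognize_gesture(skeleton_frame):
    """Placeholder for a classifier over Kinect joint positions.

    A real system would compare joint angles/positions against trained
    gesture templates; here the frame simply carries its label."""
    return skeleton_frame.get("label")

def speak(text, engine):
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    engine = pyttsx3.init()
    # Simulated stream of recognized frames standing in for live Kinect data.
    frames = [{"label": "wave"}, {"label": "raise_right_hand"}]
    for frame in frames:
        gesture = recognize_gesture(frame)
        phrase = GESTURE_PHRASES.get(gesture)
        if phrase:
            print(f"{gesture} -> {phrase}")
            speak(phrase, engine)
```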
In order for Virtual Environments (VEs) to be efficiently developed in the areas of clinical psychology and neuropsychology, a number of basic theoretical and pragmatic issues need to be considered. The current status of VEs in these fields, while provocative, is limited by the small number of controlled studies that have been reported applying this technology to clinical populations. This is to be expected considering the technology's relatively recent development, its expense, and mainstream researchers' lack of familiarity with it. In spite of this, some work has emerged that can begin to provide a basic foundation of knowledge useful for guiding future research efforts. Although much of this work does not involve fully immersive head-mounted displays (HMDs), studies reporting PC-based flatscreen approaches are providing valuable information on issues necessary for the reasonable and measured development of VE/mental health applications. In light of this, the following review focuses on basic issues that we see as important for the development of both HMD and non-HMD VE applications for clinical psychology, neuropsychological assessment, and cognitive rehabilitation. These issues are discussed in terms of the decisions involved in choosing to develop and apply a VE for a mental health application. The chapter covers the issues involved in choosing a VE approach over existing methods, deciding on the "fit" between a VE approach and the clinical population, level of presence, navigation factors, side effects, generalization, and general methodological and data analysis concerns.
Virtual environment (VE) technology is increasingly being recognized as a useful medium for the study, assessment, and rehabilitation of cognitive processes and functional abilities. The capacity of VE technology to create dynamic three-dimensional (3D) stimulus environments, within which all behavioral responding can be recorded, offers clinical assessment and rehabilitation options that are not available using traditional neuropsychological methods. This work
The objective of this study was to describe the nature of the attention deficits in children with NF1, in comparison with typically developing children, using the Virtual Classroom (VC). Twenty-nine children with NF1 and 25 age- and gender-matched controls, aged 8-16, were assessed in a VC. Parent ratings on the Conners' Parent Rating Scales-Revised: Long (CPRS-R:L) questionnaire were used to
Over the last decade there has been growing recognition of the potential value of virtual reality and game technology for creating a new generation of tools for advancing rehabilitation, training, and exercise activities. However, until recently the only way people could interact with digital games and virtual reality simulations was by using relatively constrained gamepad, joystick, and keyboard interface devices. Rather than promoting physical activity, these modes of interaction encourage a more sedentary approach to playing games, typically while seated on a couch or in front of a desk. More complex and expensive motion tracking systems enable immersive interaction, but they are only available at restricted locations and are not readily available in the home setting. Recent advances in video game technology have fueled a proliferation of low-cost devices that can sense the user's motion. This paper presents and discusses three potential applications of the new depth-sensing camera technology from PrimeSense and the Microsoft Kinect. The paper outlines the technology underlying the sensor and the development of our open-source middleware that allows developers to build applications, and it provides examples of applications that enhance interaction within virtual environments and game-based training/rehabilitation tools. The PrimeSense or Kinect sensors, along with the open-source middleware, provide markerless full-body tracking on a conventional PC using a single plug-and-play USB sensor. This technology provides a fully articulated skeleton that digitizes the user's body pose and directly quantifies their movements in real time, without encumbering the user with tracking devices or markers. We have explored the integration of the depth-sensing technology and middleware within three applications: 1) virtual environments, 2) gesture-controlled PC games, and 3) a game developed to target specific movements for rehabilitation. The benefits of implementing this technology in these three areas demonstrate its potential to provide needed applications for modern-day warfighters.
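As a concrete illustration of how the articulated skeleton stream described above could drive a rehabilitation game, the following Python sketch computes an elbow-flexion angle from three tracked joints and counts exercise repetitions. The joint-frame format, joint names, and angle thresholds are assumptions made for this example and do not reflect the actual middleware API or games referenced in the paper.

```python
# Sketch of how an application might consume the per-frame skeleton that
# depth-sensing middleware exposes, e.g. to drive a rehabilitation exercise.
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    mag = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

def count_arm_curls(frames, flexed=60.0, extended=150.0):
    """Count elbow-flexion repetitions from a stream of skeleton frames."""
    reps, arm_flexed = 0, False
    for joints in frames:
        angle = joint_angle(joints["shoulder_r"], joints["elbow_r"], joints["wrist_r"])
        if angle < flexed:           # arm curled past the flexion threshold
            arm_flexed = True
        elif angle > extended and arm_flexed:  # arm re-extended: one repetition
            reps += 1
            arm_flexed = False
    return reps

# Two simulated frames standing in for live markerless tracking data (meters).
frames = [
    {"shoulder_r": (0.2, 1.4, 0.0), "elbow_r": (0.2, 1.1, 0.0), "wrist_r": (0.2, 1.3, 0.2)},
    {"shoulder_r": (0.2, 1.4, 0.0), "elbow_r": (0.2, 1.1, 0.0), "wrist_r": (0.2, 0.8, 0.05)},
]
print("repetitions counted:", count_arm_curls(frames))
```

The same joint-angle primitive generalizes to other target movements, such as shoulder abduction or knee flexion, by substituting the relevant joints and thresholds.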
Virtual reality (VR) can be considered an embodied technology whose potential is wider than the simple reproduction of real worlds. By designing meaningful embodied activities, VR may be used to facilitate cognitive modelling and change. However, the diffusion of this ...
The rising rates, high prevalence, and adverse consequences of obesity and diabetes call for new approaches to the complex behaviors needed to prevent and manage these conditions. Virtual reality (VR) technologies, which provide controllable, multisensory, interactive three-dimensional (3D) stimulus environments, are a potentially valuable means of engaging patients in interventions that foster more healthful eating and physical activity patterns. Furthermore, the capacity of VR technologies to motivate, record, and measure human performance represents a novel and useful modality for conducting research. This article summarizes background information and discussions for a joint July 2010 National Institutes of Health — Department of Defense workshop entitled Virtual Reality Technologies for Research and Education in Obesity and Diabetes. The workshop explored the research potential of VR technologies as tools for behavioral and neuroscience studies in diabetes and obesity, and the p...
