Storytelling using Virtual Reality
At Café Media we have several installations for experiencing and analysing Virtual Reality. Virtual Reality (VR) is a computer-generated, interactive virtual environment, typically experienced through so-called head-mounted displays (HMDs). VR and HMDs are still in their infancy, and concepts for how to engage users in these environments remain to be identified. Likewise, notions of what constitutes acceptable quality for the user, and which factors it consists of, still need to be researched. One way to investigate this is to ask users explicitly; however, especially while the viewer is wearing an HMD, this can be inconvenient, as they have to take off the device. Physiological measures may therefore come in handy.
Simple, wearable, consumer-grade devices are available and deliver a wide range of data for analysis. These devices can measure, for example, heart rate, skin conductance, or body temperature. All of these measures are indicative of the emotional state of the wearer and can therefore be used to quantify the concepts above. Within our Café Media facilities, we have various VR devices to explore perception and engagement concepts and their parameters; the HMDs include, for example, the HTC Vive and the Oculus Rift. In addition, several physiological sensors are available in our facilities to facilitate this research.
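As a minimal sketch of how such sensor readings could be turned into coarse engagement indicators without interrupting an HMD session, consider the following example. The data, baseline value and threshold are purely illustrative assumptions, not the output or API of any specific device.

```python
# Minimal sketch (hypothetical data and thresholds): deriving simple arousal
# indicators from wearable heart-rate and skin-conductance samples.
import numpy as np

def arousal_indicators(heart_rate_bpm, skin_conductance_us, baseline_hr=65.0):
    """Return coarse arousal indicators from raw sensor samples.

    heart_rate_bpm      -- 1-D array of heart-rate samples (beats per minute)
    skin_conductance_us -- 1-D array of skin-conductance samples (microsiemens)
    baseline_hr         -- resting heart rate of the participant (assumed known)
    """
    hr = np.asarray(heart_rate_bpm, dtype=float)
    sc = np.asarray(skin_conductance_us, dtype=float)

    # Elevation of the mean heart rate over the participant's resting baseline.
    hr_elevation = hr.mean() - baseline_hr

    # Count skin-conductance responses: samples where the signal rises by more
    # than 0.05 microsiemens over the previous sample (illustrative threshold).
    scr_count = int(np.sum(np.diff(sc) > 0.05))

    return {"hr_elevation_bpm": hr_elevation, "scr_count": scr_count}

# Example with synthetic samples recorded during a short VR scene.
hr_samples = [68, 70, 74, 79, 83, 81, 77, 75]
sc_samples = [2.1, 2.1, 2.2, 2.4, 2.5, 2.5, 2.4, 2.4]
print(arousal_indicators(hr_samples, sc_samples))
```

In practice, indicators like these would be aligned with the timeline of the VR content so that changes in arousal can be related to specific scenes or events.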
3D QoE
Quality assessment of emerging stereoscopic 3D media is still at an early stage compared to 2D image/video quality assessment. New characteristics (e.g. specific artifact perception, visual strain, perceived depth) and various requisite factors of stereoscopic systems (e.g. system-introduced crosstalk, screen size, viewing position, scene content, camera baseline) have to be taken into account. However, most subjective quality assessments of stereoscopic 3D have focused on coding artifacts and their influence on the perceived viewing experience. The demand for shared datasets on perceived visual quality under the various requisite factors of stereoscopic systems is therefore increasing. We publish three visual quality datasets: crosstalk stereoscopic (relating crosstalk perception to scene content, crosstalk level and camera baseline), crosstalk auto-stereoscopic (relating crosstalk perception to scene content, crosstalk level and viewing position) and QoE stereoscopic (relating QoE to scene content, camera baseline, screen size and viewing position).
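To make the "crosstalk level" factor concrete, the sketch below shows the commonly used linear crosstalk model, in which a fraction of the unintended view leaks into each eye's image. This is an illustrative model only, not the generation pipeline of the published datasets.

```python
# Minimal sketch (illustrative assumption): linear crosstalk model for a stereo
# pair, where crosstalk level c controls how much of the opposite view leaks
# into each eye's image.
import numpy as np

def apply_crosstalk(left, right, c):
    """Blend a stereo pair with crosstalk level c (0 = none, 1 = full leakage)."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    left_seen = (1.0 - c) * left + c * right    # what the left eye actually sees
    right_seen = (1.0 - c) * right + c * left   # what the right eye actually sees
    return left_seen, right_seen

# Example: a tiny synthetic 2x2 stereo pair with 5 % crosstalk.
L = np.array([[255, 0], [0, 255]])
R = np.array([[0, 255], [255, 0]])
print(apply_crosstalk(L, R, 0.05))
```

Varying c over a set of scenes is one simple way to produce the graded crosstalk conditions that subjective studies of this kind relate to scene content, camera baseline or viewing position.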