Overview video for Kaleidoscope

Role: Technical Artist, Programmer
Tools Used: Unity3D High Definition Render Pipeline, Intel RealSense
Collaborators: Byungju Lee, Weizhang Lee, Vicky Lin, Jehan Sandhu, Amber Zheng
Awards: SXSW 2020 Innovation Award Finalist
Our client for this project was the director of Carnegie Mellon University's Askwith Kenner Global Languages and Cultures Room. The room is inside CMU's new Tepper Quadrangle and is intended to serve as an interactive, immersive on-campus learning space for language and culture. Our team was tasked with creating an experience that would reflect the intention behind the room and encourage students and other members of the CMU community to enter it. One aspect our client stressed that we include was the idea of "cultural competency": the ability to interact effectively and sensitively with people of other cultures.
The Kaleidoscope experience explores how everyone carries some level of implicit cultural bias when encountering people they don't know, and it aims to make guests aware of this fact as a first step toward learning cultural competency. Guests are presented with an obscured point cloud of a stranger and audio recordings of the stranger answering several "get to know you" questions. The guest is then prompted to guess character traits about the stranger, building an assumption-based persona; finally, a visual comparison of the guest's assumptions and the stranger's actual answers shows where those assumptions were wrong and what can be learned from that.
Displaying the "Stranger" - RealSense Point Cloud Rendering
One major technical challenge of this project was developing an effective way to represent the "stranger" in the experience using data from an Intel RealSense. Our team wanted a point cloud effect that balanced being visually pleasing and fitting the overall aesthetic of the experience with enough clarity for the guest to properly make out the stranger's appearance. I started with the Intel RealSense's default renderer, but it offered little flexibility for creating custom effects.

Initial test of the point cloud obscuring effect using the Intel renderer.

Next, I tested Keijiro's RSVFX project to stream the RealSense data points into Unity's VFX Graph. This proved a much more robust way to display the point cloud: it is rendered as a GPU-based particle effect that can also be layered with other post-processing effects.
I created a VFX Graph effect that exposes an "obscuration" factor to the Unity editor and to scripts, letting us scale between a heavily distorted visual and a clear image of the stranger. This let us vary the effect's appearance throughout the experience and keep the visuals dynamic.
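As a rough illustration of this control scheme, the easing of the exposed factor might look like the following Python sketch. The actual implementation drives an exposed float on a Unity VFX Graph from C# (e.g. via VisualEffect.SetFloat); the function and parameter names here are hypothetical.

```python
def obscuration(t, duration, start=1.0, end=0.0):
    """Ease an obscuration factor from `start` (fully obscured)
    to `end` (clear image) over `duration` seconds.

    Uses smoothstep easing so the transition accelerates in and
    out rather than snapping linearly. Hypothetical sketch of the
    logic; the real value is fed to a VFX Graph exposed property.
    """
    if duration <= 0:
        return end
    x = min(max(t / duration, 0.0), 1.0)  # normalized, clamped time
    s = x * x * (3.0 - 2.0 * x)           # smoothstep curve
    return start + (end - start) * s
```

Calling this once per frame with the elapsed time and writing the result to the exposed parameter produces the gradual obscured-to-clear reveal.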

Examples of point clouds rendered with VFX Graph, and a demo of the obscured-to-clear transition effect

We ultimately settled on a cube-like appearance for the point cloud to match the overall spatial, modular look of the rest of the experience. When Kaleidoscope is idle between guests, it cycles through randomly selected point clouds of the experience's strangers while randomly shifting each one's level of obscurity.
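The idle behavior amounts to a simple loop that repeatedly picks a stranger and a target obscurity level. A minimal Python sketch of that logic (names and value ranges are illustrative, not the actual Unity implementation):

```python
import random

def idle_cycle(clouds, steps, rng=None):
    """Yield (cloud, obscuration) pairs for the idle state.

    Each step selects a random recorded point cloud and a random
    obscuration level (0.0 = clear, 1.0 = fully obscured) for the
    display to drift toward. Hypothetical sketch of the idle loop.
    """
    rng = rng or random.Random()
    for _ in range(steps):
        yield rng.choice(clouds), rng.uniform(0.0, 1.0)
```

In the installation, each yielded pair would be eased into over a few seconds rather than applied instantly, so the idle visuals shift continuously.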
During the experience, the point cloud's brightness also flashes slightly in time with the audio of the stranger speaking.
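A common way to achieve this kind of audio-reactive flashing is to map the loudness (RMS) of the most recent audio sample window to a brightness boost. A minimal sketch of that mapping, with hypothetical gain and cap values (in Unity, the sample window could come from AudioSource.GetOutputData):

```python
import math

def rms(samples):
    """Root-mean-square loudness of an audio sample window."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def brightness(samples, base=1.0, gain=2.0, cap=2.0):
    """Boost brightness above `base` in proportion to the window's
    loudness, clamped to `cap`. Gain/cap values are illustrative."""
    return min(base + gain * rms(samples), cap)
```

Evaluating this each frame against the stranger's playing audio makes the point cloud pulse with their speech and stay at base brightness during silence.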
Kaleidoscope is installed on Carnegie Mellon University's campus, in the Tepper Quadrangle building's Askwith Kenner Global Languages and Cultures Room. It runs in the room's immersion area, a space with three short-throw projectors that display on each interior wall and an HTC Vive for interaction.