Exposing the Rest of You


Exploring the underlying emotions within my conversations.

What I Did

For this exploration into the rest of me, I chose to look at a key aspect of my life: conversation, both with the people around me and with myself. A large majority of my day-to-day time, and a huge portion of my job, is spent in conversation. I average major 'in-depth' conversations with around ten people each day, maintain roughly fifteen ongoing text conversations at any given time, and have countless minor interactions with the people around me.

The conversations I have, and the way I approach them, are some of the most important aspects of my daily life, but I suspect that being engaged in so much conversation means there is an emotional undercurrent I am missing. How does my tone differ from what I intend? How do I perform differently for different people? How does my underlying emotional state conflict or coincide with what I am saying?



Audio of Conversations + Voice-Based Emotion Recognition


To explore this, I chose to record my personal audio across a number of daily interactions, and then funnel that data through intonation-based emotion recognition.



How I Approached It

I started by using a Zoom H2n audio recorder to collect a stream of conversational audio throughout my day, while also logging who I spoke to and when. I did this over a number of days until I had a large enough data set to begin exploring.

Last year I began playing with a number of emotion recognition tools and APIs in a minor way, and found a few that worked well for analyzing voice. So I began to run this audio collection through one of them, Beyond Verbal, to analyze my underlying emotional state while talking.

This is where I ran into a major technical problem. Despite testing with a good microphone, much of the audio contained too much noise for Beyond Verbal to produce a reliable analysis, so I had to look for alternatives.

I needed something that could capture my voice clearly, store it, and analyze it emotionally. Luckily, Beyond Verbal has expanded its mobile app, Moodies, into a strong tool for a constant flow of analysis.


With the Moodies app I could record a constant flow of personal audio, which was emotionally analyzed, logged, and time-stamped every 20 seconds, allowing for a huge collection of insights when paired with the log of who I was interacting with and why.
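The pairing step is simple to sketch: each time-stamped emotion reading just needs to be matched against the conversation window it fell inside. A minimal sketch of that idea, with entirely hypothetical timestamps, labels, and names (Moodies' actual export format is not assumed here):

```python
from datetime import datetime

# Hypothetical sample data: one emotion label every 20 seconds,
# in the style of the Moodies log (values are illustrative only).
emotion_log = [
    (datetime(2017, 3, 6, 9, 0, 0), "self-control"),
    (datetime(2017, 3, 6, 9, 0, 20), "self-control"),
    (datetime(2017, 3, 6, 9, 5, 0), "admiration"),
]

# Hypothetical manual log of who I was speaking with and when.
interaction_log = [
    (datetime(2017, 3, 6, 8, 58, 0), datetime(2017, 3, 6, 9, 3, 0), "coworker A"),
    (datetime(2017, 3, 6, 9, 4, 0), datetime(2017, 3, 6, 9, 10, 0), "friend B"),
]

def pair_readings(emotions, interactions):
    """Attach each emotion reading to the conversation it fell inside."""
    paired = []
    for ts, label in emotions:
        person = next(
            (who for start, end, who in interactions if start <= ts <= end),
            None,  # reading fell outside any logged conversation
        )
        paired.append((ts, label, person))
    return paired

for ts, label, person in pair_readings(emotion_log, interaction_log):
    print(ts.time(), label, person)
```

From there, grouping the paired rows by person would show which conversations carried which emotional undercurrents.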


What I Discovered and Moving Forward

While I definitely made the switch to the Moodies app too late in the week to gain the deep level of insight and correlation I wanted, I still began to see some surprising trends. One was that while I may sound pleasant and approachable, in many daily conversations the recognition was picking up high levels of self-control, which it described as seeking to mask deep emotions while moving away from a place of weakness. It also revealed the opposite, in ways that were surprisingly accurate when I reflected on the feedback: in one conversation where I thought I was being very neutral, I was actually showing moments of intense admiration.

Moving forward, I would first and foremost want to gather a larger dataset from Moodies in order to dive deeper into the conflicts between what I think I am feeling and portraying and what is actually underneath. A broader goal is to explore how an emotional signal could serve as an input to technology, just the same as a knob or a button.
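The knob-or-button idea can be sketched as a mapping from a recognized emotion label to a normalized control value, the way a knob position maps to a parameter. The labels and mapping below are hypothetical, just to make the idea concrete:

```python
# Hypothetical mapping from emotion labels to a 0-1 control value,
# e.g. the brightness of a lamp. Labels are illustrative only.
EMOTION_TO_LEVEL = {
    "calm": 0.3,
    "neutral": 0.5,
    "self-control": 0.4,
    "admiration": 0.8,
}

def emotion_input(label, default=0.5):
    """Treat an emotion reading as an input control: label -> value in [0, 1]."""
    return EMOTION_TO_LEVEL.get(label, default)

# The latest reading drives the parameter, just like turning a knob.
print(emotion_input("admiration"))
```

The interesting design question is less the mapping itself than the fact that, unlike a knob, this input changes without the user deliberately operating it.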

Skylar Jessen