At the successful first TEDxFrankfurt event “Crossroad and Crosslinks. Digital Society”, I had the chance to interview three of the numerous exciting speakers.
While all of them talked about the impact that digitalization has on our society and the possibilities it generates, every speaker added a completely surprising and new aspect to this TEDx event.
From voice analytics that can decode our emotional state to eye exams via mobile phones and Big Data that has the power to improve every aspect of our lives… I’m happy to share with you ideas worth spreading!
The first interview is with Dan Emodi, an 18-year technology-marketing veteran of Cisco, Sheer Networks (acquired by Cisco), Orbotech and Comverse. He now leads Beyond Verbal’s marketing operations.
Dan started his career as a sous-chef before moving to financial journalism, investment banking, technology pre-sales, and finally to marketing. Drawing on this diverse background, Dan has developed an alternative point of view on the traditional mix of technology, engineering and marketing.
– Dan Emodi – „Emotion is probably the most important, non-existent interface between humans and machines“
Beyond Verbal, the emotions analytics company, focuses on vocal intonations in order to decode human emotions, feelings and attitudes. What are the goals of Beyond Verbal?
I believe that understanding ourselves is not only one of the biggest frontiers in psychology but in everything we do. It touches almost every aspect of being human, yet we still know very little about this topic. Getting a machine to understand and decode human emotions would have a huge impact on our daily lives. The immense Quantified-Self trend shows people’s interest in measuring their wellbeing, performance and more. Vocal intonation must be a part of these measures, because it is such a significant factor. Our wellbeing and health, and our relationship towards ourselves and others, are only a few examples of the areas which will be transformed once we learn to decode our emotions fast and precisely.
In your TEDx talk in Frankfurt you said that it’s not what we say that matters, but how we say it. Many others have focused on body language, like the former FBI agent Joe Navarro. Why do you focus on the voice?
Psychology studies of the last 40 years have revealed that „more than 90% of the emotional impact“* has nothing to do with the words we choose. Words are a poor indicator of human emotions and attitudes, as they are very easy to manipulate. Nevertheless, corporations still focus on the written word and customer questionnaires to understand what their customers feel towards their products and services and what their needs are.
As for body language analysis vs. vocal intonation analysis: we believe that both methods are perfectly viable but serve very different use cases. For example, when we are talking, it’s hard for facial recognition technology to correctly decode our emotions, because our moving mouth keeps changing the very features these technologies rely on. Vocal analysis also doesn’t need a camera. Our app Moodies (editor’s note: available on the iTunes store and Google Play) needs only 20 seconds to analyze your emotions while you speak. On the other hand, we need voice to do what we do, so if you’re playing the “silent type” we – well – have nothing to analyze. This is where facial recognition and body language analysis step in. Both techniques are more powerful when used together. Most of all, though, vocal intonation is another MAJOR way for our body to express our emotions, and so it is invaluable when analyzing them.
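Beyond Verbal’s actual models are proprietary, but systems like this typically start from prosodic features of the voice such as pitch and energy. As a purely illustrative sketch (not the company’s method), the following minimal Python example extracts a rough per-frame pitch estimate via autocorrelation, plus frame energy, from a 20-second signal; the function name and feature set are my own assumptions:

```python
import numpy as np

def prosodic_features(signal, sr=16000, frame_ms=40):
    """Compute simple prosodic features per frame: energy and a rough
    pitch estimate via autocorrelation. Illustrative only; real emotion
    analytics uses far richer acoustic models."""
    frame = int(sr * frame_ms / 1000)
    pitches, energies = [], []
    for start in range(0, len(signal) - frame, frame):
        x = signal[start:start + frame]
        energies.append(float(np.mean(x ** 2)))
        # Autocorrelation; keep non-negative lags only.
        ac = np.correlate(x, x, mode="full")[frame - 1:]
        # Search lags corresponding to 80-400 Hz (typical speech range).
        lo, hi = sr // 400, sr // 80
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitches.append(sr / lag)
    return {
        "mean_pitch_hz": float(np.mean(pitches)),
        "pitch_var": float(np.var(pitches)),
        "mean_energy": float(np.mean(energies)),
    }

# Stand-in for a 20-second recording: a synthetic 150 Hz "voice".
sr = 16000
t = np.arange(20 * sr) / sr
voice = np.sin(2 * np.pi * 150 * t)
feats = prosodic_features(voice, sr)
```

In a real pipeline, statistics like these (pitch contour, variability, energy) would feed a trained classifier rather than being interpreted directly.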
Did working at and with BeyondVerbal change your perception of emotions, and do you use this knowledge in your daily life? (I’m wondering if you can tell what emotional state I am in right now…?)
I think I can hear that you are sincerely interested in this topic (editor’s note: correct analysis). In general, some people are more in tune with their feelings than others. Most people, though, have a base capability to decode their feelings and those of others. (An exception would be people with certain syndromes, like Asperger’s, of course.) Generally, we humans analyze people all the time by observing their body language, their voice and their words.
The interesting thing is that while we are usually very accurate in decoding the emotions of others, we do a pretty poor job when it comes to our own emotions. When I gave speeches in the past, I wanted to come across as humorous, creative and a bit flirtatious with the audience. In the first two minutes on stage, I was very aware of the emotions I conveyed, but shortly after that my awareness was gone and I started to lose my audience. Later on, when I watched myself on video and got honest feedback from a friend, I learned that I was coming across as dry! The opposite of what I wanted! I hadn’t recognized that.
As for myself, I can say that I have become more aware of my feelings and their different nuances. The same holds true for my awareness of the feelings of others. This involves a lot of training, though. Actors do this sort of training a lot.
How come we are less accurate in making sense of our own emotions? I’ve always thought that decoding others is way more difficult…
We don’t have that much information about ourselves when we communicate with others. Our cognition is already preoccupied with constructing our sentences and choosing words while also trying to understand the person we are communicating with – all at the same time. We (usually) don’t see ourselves and have only an inner image of our appearance during an interaction.
In addition, when we speak, we hear our own voice in two different ways instead of only one. The first way is through sound waves travelling through the air and hitting your ear drum. This is also how you hear the voices of others. The second way is through bone conduction: vibrations travel through your skull directly to the inner ear. Bone conduction emphasizes lower frequencies, which distorts your perception – your own voice sounds deeper and fuller to you than it does to others.
If you listen to your recorded voice, it might sound very different from how you perceive it while speaking.
Did you choose Frankfurt deliberately as a TEDx host and location to present BeyondVerbal and the importance of vocal intonation technology?
We were approached by our contractor and were instantly on board, because we love the concept of „Ideas worth spreading“ and I believe BeyondVerbal is such an idea. Besides, our main customers come from the US, the Eastern Pacific region and Europe. Within Europe, Germany is our major hub and holds many potential partners.
What major challenges do you see lying ahead of the „Digital Society“?
I think the movie „HER“ discloses a very interesting topic and challenge within our digital society – loneliness! For us as socially wired creatures, it’s already a challenge that more and more of our analogue interactions are replaced by digital communication. It becomes ever easier to replace in-person communication with its digital counterpart. Of course, the very same technology that makes us lonelier also helps us to get in touch with people we normally would not have met. It holds many opportunities and benefits, but we should not forget that we are programmed for in-person interactions and relationships. Those positive interactions and relationships keep us healthy.
Can BeyondVerbal and vocal intonation technology help people alleviate feelings of prolonged loneliness or even depression?
In order to tackle a problem, we need to understand it first. I believe the first step would be to identify and measure loneliness, and then find out when we are lonely. Only then can we start to understand the root cause: WHAT is it that makes us lonely? The reasons for someone feeling lonely can be very complex and entail many factors, so a diverse set of analyses needs to be conducted when tackling this issue. I think BeyondVerbal technology can play a MAJOR role in identifying, understanding and ultimately solving loneliness. In the near future, our personal devices will be able to combine a vast amount of data about us, tell us what causes issues like loneliness, and even predict it. BeyondVerbal will be an important component, although certainly not the only one.
Are you planning to partner up with other companies and develop a holistic technology in order to tackle these issues?
It won’t be long before the input from many sources is brought together to create such a holistic technology. I hope we continue to cooperate with other companies to make this possible.
What would you like people to take away from your speech at the TEDxFFM?
That’s a good question…
I would like the audience to think: „Emotions really do matter!“
I would also like them to realize that emotion is probably the most important, non-existent interface between humans and machines. It’s a very critical interface which has the power to enrich our lives significantly.
Machines already understand so much, but not our emotions. Enabling a machine to do that holds many possibilities.
Many thanks for these exciting insights, Dan, and good luck with your endeavors at BeyondVerbal.
*Quote taken from Albert Mehrabian’s „Silent Messages“.
Dan has been interviewed by Marina Zayats, Global Shapers Hub Frankfurt