Week 3 [16.11-22.11] Can machines read our emotions?
Computer systems and machines are becoming foundational in every domain of human life. Until now there has been a strict, practical division between the human domain and the machine domain, and in a way the two stood in contradiction to each other. Human means emotional, irrational, intuitive, social; machine-like, on the contrary, means purely logical. But is it really?
Computer engineers and scientists are looking for ways to incorporate human features into machines, especially in the domain of robotics. I am very curious to know where the limits of this lie. In the Human Centered Multimedia laboratory, a group of designers is simulating human-like behaviour in robotic agents, exploring how these agents can understand social signals and emotions and react to them just as a human would. They have created eight different expressional designs for the emotions of Anger, Sadness, Fear and Joy. Each expression is decomposed into Body Movement, Sound and Eye Color and subsequently incorporated into the robot's design and tested with human participants.
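To make this decomposition concrete, here is a minimal sketch of how one such expressional design might be represented as data. The four emotion names and the three channels (Body Movement, Sound, Eye Color) come from the study; all parameter values below (gesture names, sound files, colours) are hypothetical placeholders, not the designs actually used in the paper.

```python
from dataclasses import dataclass

@dataclass
class ExpressionDesign:
    """One expressional design: an emotion decomposed into three channels."""
    emotion: str
    body_movement: str   # a named gesture or posture animation
    sound: str           # a vocalisation sample to play
    eye_color: str       # LED colour of the robot's eyes

# Hypothetical parameter choices for the four emotions from the study;
# the paper's actual designs differ (and come in two variants per emotion).
DESIGNS = [
    ExpressionDesign("Anger",   "stomp_and_clench", "growl.wav", "red"),
    ExpressionDesign("Sadness", "slump_head_down",  "sigh.wav",  "blue"),
    ExpressionDesign("Fear",    "recoil_backwards", "gasp.wav",  "white"),
    ExpressionDesign("Joy",     "raise_arms",       "cheer.wav", "yellow"),
]

def express(design: ExpressionDesign) -> None:
    """Stand-in for driving a real robot: prints the channel settings."""
    print(f"{design.emotion}: move={design.body_movement}, "
          f"sound={design.sound}, eyes={design.eye_color}")

if __name__ == "__main__":
    for d in DESIGNS:
        express(d)
```

The point of the sketch is the question it raises: once an emotion is reduced to a handful of named parameters like these, is anything essential lost?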
Please read the article:
https://www.aldebaran.com/sites/aldebaran/files/casestudy_hri_augsburg_0.pdf
and answer the following questions:
1. Are human emotions really so simple that they can be decomposed into logical elements and easily learned by a machine?
2. Can we do the same with our intuition? What could the design of intuition look like?
3. If our "human components" are so easy to break down into logical elements, how are we different from machines?