WILL ARTIFICIAL INTELLIGENCE EVER BECOME CONSCIOUS? Artificial consciousness is a matter of time.
The hot topic of artificial intelligence developing consciousness, as a precondition of sentient machines, is the subject of speculative debate and of profiteering by fake AI start-ups.
A layer of EQ, or emotional quotient, on top of AI is promoted by its commercial backers as affective computing. For an AI program to demonstrate emotions (to become an emotional AI program), it would need to see emotions, hear our voices, and feel human anxiety. For this, "emotional AI must be able to extract emotional data from humans through conversational means by eye-tracking, skin response, voice, and written word analysis, brain activity via EEG, facial mapping, etc."
Below we argue that it is a senseless enterprise to create any emotion AI or sentient machines possessing the power of sense perception, experiencing sensation, feelings, affect and emotion.
There is no real AI that can read, see, listen, remember, self-learn, and understand the affective states, emotions, feelings and intentions of humans.
Today's AI/ML/DL senses nothing, feels nothing, learns nothing, knows nothing, and is good for nothing, let alone reading human emotions, such as recognizing moods, feelings or emotions in real life, in camera feeds or in photographs.
Emotion AI is said to detect and interpret human emotional signals through text (natural language processing and sentiment analysis), audio (voice emotion AI), video (facial movement analysis, gait analysis and physiological signals), verbal information (tonality, vocal emphasis and speech rhythm) or combinations thereof.
Affective AI is supposed to measure your reactions and responses, such as voice patterns (pitch, tonality, expressiveness) and facial expressions (attention, disgust, smiling). It looks at characteristics such as eye movement, gesture and body language to measure levels of surprise, confusion and amusement, and whether you are distracted, drowsy, stressed or unfocused.
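To make concrete how shallow the text channel of such "emotion AI" typically is, here is a minimal lexicon-based sketch of sentiment-style emotion scoring. The word list and weights are invented for illustration and do not reflect any vendor's actual model; the point is that the method tallies keyword matches, with no understanding involved.

```python
# Minimal lexicon-based emotion scorer: what text "emotion AI" often
# amounts to under the hood. The word lists and weights below are
# illustrative assumptions, not any vendor's actual model.

EMOTION_LEXICON = {
    "happy": ("joy", 1.0), "great": ("joy", 0.8),
    "sad": ("sadness", 1.0), "terrible": ("sadness", 0.7),
    "angry": ("anger", 1.0), "furious": ("anger", 1.2),
    "scared": ("fear", 1.0), "wow": ("surprise", 0.9),
}

def score_text(text: str) -> dict:
    """Tally weighted emotion-word hits; pattern matching, nothing more."""
    scores = {}
    for token in text.lower().split():
        word = token.strip(".,!?")
        if word in EMOTION_LEXICON:
            emotion, weight = EMOTION_LEXICON[word]
            scores[emotion] = scores.get(emotion, 0.0) + weight
    return scores

print(score_text("I was so happy, then furious, then sad."))
# → {'joy': 1.0, 'anger': 1.2, 'sadness': 1.0}
```

A sentence with no lexicon words scores zero on every emotion, however emotionally charged it may be to a human reader, which illustrates the gap between keyword counting and actually reading emotions.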
First, to read emotions, you need prior knowledge of and experience with universal emotional expressions, such as happiness, sadness, surprise, fear, anger, disgust and contempt, and different combinations thereof.
Second, you need to integrate information from facial expressions, body movement and gestures, and speech, as well as contactlessly detect heart rate and breathing signals.
Third, you need to know how to measure the basic parameters, characteristics, values and metrics of all kinds of emotional variables.
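The second step above, integrating information across channels, is in practice often reduced to simple arithmetic. The sketch below shows a hypothetical multimodal "fusion": a weighted average of per-modality emotion scores for face, voice and text. The channel names, weights and numbers are assumptions for illustration; real systems vary, but the combining step can be this trivial.

```python
# Illustrative multimodal "fusion": a weighted average of per-modality
# emotion scores (face, voice, text). Channels, weights and readings
# are hypothetical, chosen only to show how simple the arithmetic is.

MODALITY_WEIGHTS = {"face": 0.5, "voice": 0.3, "text": 0.2}

def fuse(readings: dict) -> dict:
    """Combine per-modality emotion scores into one weighted estimate."""
    fused = {}
    for modality, scores in readings.items():
        w = MODALITY_WEIGHTS[modality]
        for emotion, value in scores.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * value
    return fused

readings = {
    "face":  {"happiness": 0.7, "surprise": 0.2},
    "voice": {"happiness": 0.4, "anger": 0.3},
    "text":  {"happiness": 0.9},
}
result = fuse(readings)
# happiness: 0.5*0.7 + 0.3*0.4 + 0.2*0.9 = 0.65
print({emotion: round(value, 3) for emotion, value in result.items()})
```

Note that the output is just a weighted sum of the upstream scores: if each channel's numbers are dubious, the fused "emotion estimate" inherits all of that uncertainty while looking more authoritative.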
As of now, a fake emotion AI is commercialized by some fake AI companies, such as Affectiva, which deploys its quasi-emotion AI technology worldwide with clients like Disney, Coca-Cola, Kellogg's, Samsung and Google.
It is said that about 28% of Fortune Global 500 companies use such fake technology to estimate how effective their ads are.
Moreover, one start-up promises to identify pulmonary, neurological, cardiovascular and other diseases and chronic medical conditions in the voice, via big data predictive analytics.
In all, such technology is to be applied to hiring, personality-trait analysis, advertising, marketing, event planning, predicting purchase behavior, finding a dating partner, security, trustworthiness assessment, etc.