Technology is reshaping patterns of human development and social interaction. It matters who is in control of these tools. In this regard, the field of psychology is both more perilous and more needed than ever.
A handful of key technologies, and their convergence, are driving the fourth industrial revolution (4IR). These technologies include blockchain (mostly used in cryptocurrencies), 5G connectivity (the so-called infrastructural backbone of the 4IR), 3D printing (also called additive manufacturing), the internet of things (the ability to connect the physical and cyber worlds through sensors), and artificial intelligence (AI).
AI is the major technology of the 4IR, but it is not new. It is AI’s enhanced capabilities that drive much of the disruption the 4IR is expected to bring to the world of work, production and broader daily life.
What is AI? It is a multipronged discipline premised on making computer programs mimic human intelligence as closely as possible. AI has been around for decades, going through periods of seemingly boundless optimism followed by “winters” (characterised by substantially diminished funding and interest from financiers and broader society).
We are once again experiencing a spring period. But what seems to set this period apart is the combination of unprecedented computational power and breakthroughs in machine learning, the ability of AI programs to learn from data without being explicitly programmed for each task.
In the AI community, there is a broad distinction between those who are scientifically minded and those who are practically minded. The former take inspiration from biology and seek to integrate as many brain-like features into AI as possible (including the ability to “learn” autonomously from growing accumulations of data), whereas those in the latter camp are pragmatists. They are mainly concerned with output, and having AI act according to what it has been trained to do.
Alan Turing, a founding figure of computer science, proposed that a machine could be said to have passed his test — now known as the Turing test — once it could successfully convince a human it was interacting with that it, too, was a fellow human.
Cognitivists vs behaviourists
To psychologists, this echoes debates between behaviourists and cognitivists in the 20th century.
Behaviourists such as JB Watson and BF Skinner argued that our internal lives were essentially irrelevant, and that people’s behaviour (output) could be changed through systems of reward and punishment that reinforce desired behaviours and discourage undesired ones. Cognitivists, on the other hand, argued that the brain is the seat of consciousness and that people are not mere response-driven machines.
AI has always been linked to psychology. The founder of modern AI, and the person who gave it its name, John McCarthy, was trained as a mathematician, but his work drew deeply on questions of human cognition.
It was he who convened the 1956 Dartmouth College working group that developed the main research areas of AI that continue to define the field to this day (that is, natural language processing, machine learning and neural networks).
The goal for some AI developers is to mimic the human brain. Some of AI’s early pioneers were convinced that this could be achieved, and reasonably soon. Famously, Herbert Simon, who also attended the 1956 Dartmouth workshop and went on to win the Nobel prize in economics in 1978 (in recognition of his work introducing psychological concepts to economics), was convinced that AI would be “doing any work that any man can do” within two decades.
Today’s psychology is a multifaceted field that seeks to understand, predict and, ultimately, influence human behaviour. This makes it a powerful field. As a result, psychology’s toolkit is sometimes used less than honestly.
The accumulation of digital data empowers these untoward actors. For example, the firm Cambridge Analytica employed an army of data scientists who used psychological models to build profiles of individuals and nudge their voting behaviour in a particular direction.
In her 2019 memoir, Targeted: My Inside Story of Cambridge Analytica and How Trump, Brexit and Facebook Broke Democracy, Brittany Kaiser details how the firm held a database of between 2 000 and 5 000 data points on every individual in the United States over the age of 18 (about 240-million individuals). Kaiser’s disclosure corroborates how psychological profiling, through data mining and subtle manipulation, can be used to influence behaviour to achieve a particular outcome.
How will AI, and the 4IR more broadly, affect psychology, in both its study and its practice?
One of the key mandates of psychology is to understand humans from the in-utero stage to old age. This subdiscipline is called developmental psychology. It is concerned with changes in behaviour and the internal world of the individual as they navigate the world.
Many of the early founders of this field, including Erik Erikson, took note of the role of the environment as the individual grows and attains new competencies. Since the latter part of the 20th century, the environment has become increasingly defined by technology, especially digital technology since the 1990s. Some of these technologies, notably social media and online niche communities, have taken on a socialising function.
The field of developmental psychology is currently grappling with the effect of technologies on those who grow up in our technologically saturated world. Peer-reviewed research indicates that early use of social media is associated with behavioural differences between the current generation of adolescents and its predecessors in areas such as emotional reaction and opinion formation.
These developments require a rethinking of what constitutes “normal” growth patterns. Social media companies have developed algorithms with young users in mind, some with positive outcomes. For example, in August Facebook announced plans to develop an AI algorithm that can detect when users under the age of 13 have misrepresented their age, in order to restrict their use of the mainstream app.
For its part, social psychology is keen on group dynamics. In other words, it is interested in the influence of social pressures on individuals. The field has to take stock of — and anticipate — a social world in which individuals will be interacting with non-human interlocutors in the 4IR.
With so much of human behaviour dictated by perception, even the expectation that one may be interacting with a highly developed robot may fundamentally reshape our social behaviour. What implications does this carry for phenomena such as trust, cooperation, relationships and effort? Will people retreat into their “tribes” or will a common humanity emerge in the wake of this new “other”?
We cannot know for sure, but the track record so far indicates that it matters greatly who is behind the algorithms and what informs their intentions. Paradoxically, the level and sophistication of data captured about people means that those with the datasets are able to split people apart and further drive them into groups.
News recommendations and page membership recommendations on social media are one such mechanism. In turn, one of the keen insights from social psychology, through experiments such as the Stanford prison experiment and the Milgram experiment, is how easy it is to influence people in group contexts.
The onset of the 4IR also entails the loss (or expectation of loss) of jobs, potentially in the millions. Already, people are working in what is called the gig economy: temporary, short-term jobs with diminished or non-existent prospects of full-time, permanent employment. Having known mainly the gig economy, the millennial generation may have a different conception of the economy altogether.
Nevertheless, the psychological effects of scarcity and economic anxiety are increasing in the wake of large-scale automation. This has been one of the key factors that has pushed people further into these so-called tribes, as they seek to scapegoat other groups (other ethnicities, immigrants, or “the elite”) for their economic misfortunes. Psychology, through its industrial and psychotherapeutic branches, has a mission to repair and enhance the social fabric in the face of these challenges.
On the other hand, as AI based on models of human brains becomes more refined, new vistas for understanding people will open to the field. One of the key hurdles that psychology researchers encounter, and rightly so, is the difficulty of conducting experiments on people. AI can be used to replicate experiments and draw reasonable conclusions without using human subjects.
But this, too, carries some ethical ambiguity, including the use of information sourced from millions of people whose data has, usually without their knowledge, fed such databases. Importantly, however, actors in the political and commercial spheres are already making use of these tools. Psychology associations, civil society and the wider social science community should evaluate the risks and lead in setting standards in this new frontier.