We are lucky enough to be living through the world’s greatest era of innovation – the era of intelligent technologies. That is of course a great privilege. But it is also a huge, and deeply daunting, responsibility. After steam power in the eighteenth century, steel, rail, electrification and chemicals in the nineteenth, and cars and petrochemicals in the twentieth, and following the explosion of information technology into our daily lives, we are now in the modern world’s sixth great wave of innovation.
But the new innovation wave is different to those that have gone before. Farnam Jahanian, President of Carnegie Mellon University in the US, has described how “the scope, scale and ubiquity” of the disruptions we will face across our economy and our society are “truly unprecedented”.
Indeed, it seems clear that this new period of change will not just transform economies and societies more profoundly than past revolutions did; it will transform the very notion of society. It will raise searching, fundamental questions about what it is to be a human being.
The late physicist Stephen Hawking famously declared that artificial intelligence “is likely to be either the best or the worst thing ever to happen to humanity”. At its worst, Hawking said, human beings, limited by slow biological evolution, could quickly be superseded by intelligent machines.
While all of this sounds like dystopian science fiction, we have already seen examples of algorithms amplifying human prejudice and of machine learning controversially linked to the erosion of democracy. Moreover, cutting-edge automation has been implicated in some recent, horrific air disasters. We can only imagine the dreadful potential of autonomous weapons.
As AI by its very nature develops rapidly, Hawking said, there is a real risk that humans could be side-lined, left behind – even subjugated – by thinking machines. AI’s rise, he warned with perhaps a justifiable degree of melodrama, “could spell the end of the human race”.
I much prefer the brighter, more optimistic predictions associated with this new era of intelligent technologies. These emphasise the ways in which such technologies could change the world for the better. As Hawking himself said: “Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely we will aim to finally eradicate disease and poverty.”
The positive possibilities are endless and I remain extremely optimistic. Why? Because of universities.
It is my sincere belief that where we end up at the zenith of this new wave of innovation – the dystopian nightmare or something a little closer to utopia – will, to a very large extent, be determined by universities. More specifically, the outcome will be determined by the brilliant thinkers who power universities – and emphatically not just by those who work in science and technology, but by scholars in the arts, humanities and social sciences.
Perhaps institutionalised by more than twenty years working at Times Higher Education, or THE, the world’s leading source of data and insights on global universities, I have long been convinced that universities are the most important institutions in society. They have the remarkable potential to make the world a better place, from teaching the next generation of society’s leaders to advancing knowledge and powering the economy.
But I think it is fair to suggest that the role of universities in society today – and in the coming decades, as we grapple with the implications of thinking machines, of conscious robots – will be more important than ever.
The technological revolution asks fundamental questions about universities’ role as society’s critic and society’s conscience, and about their timeless mission to seek the truth and to speak the truth. And make no mistake: as technology changes society and challenges our very understanding of what it is to be human, the role of humanities scholars and social scientists will be utterly vital.
Like Stephen Hawking before him, Carnegie Mellon’s President Farnam Jahanian recently declared, in an article for the World Economic Forum: “There is no guarantee that technology will automatically benefit humanity.” Here, he argued, “perhaps, lies the greatest obligation for institutions of higher education in the digital revolution.”
However, he made clear that the obligation falls on multiple fields and across disciplinary boundaries. He went on to state: “It is up to us universities to provide the ethicists, artists, and philosophers who can point the way; the policy experts and economists who can draw the map; and the cognitive scientists and sociologists who help ensure the destination is designed for people as well as machines.”
Kate Devlin, a senior lecturer in social and cultural artificial intelligence at King’s College London, wrote powerfully in Times Higher Education last month about the vital role of the arts, humanities and social sciences in the digital revolution. The “contemplative, thorough and peer-reviewed environment of the university”, she argued, “is a place to draw together the strands that feed into AI: not just computer science but philosophy, law, and design as well as the science and technology studies and the media theory that can all contribute to a more inclusive, thoughtful and ethical AI.” And whereas Silicon Valley has adopted the mantra that tech companies are all about “moving fast and breaking things”, she pointed out, universities “are not about moving fast and breaking things; universities are about the critical analysis, the gathering of evidence and the sound methodology, the bigger picture. There is strength in moving slowly and fixing things”.
The sixth great wave of technological innovation, I hope and predict, will provide an unprecedented opportunity for those who think deeply about how our society works, and about what it is to be human.