In Oceania at the present day, Science, in the old sense, has almost ceased to exist. In Newspeak there is no word for “Science”. The empirical method of thought, on which all the scientific achievements of the past were founded, is opposed to the most fundamental principles of Ingsoc. And even technological progress only happens when its products can in some way be used for the diminution of human liberty. (Orwell, 1977)
Scientific advancement is inevitable. Humankind constantly strives to explore new methods, concepts, and possibilities in order to utilise the goods provided by nature for our own benefit. The immense range of man-made tools, from the most primitive wedge to today’s everyday electronic devices, originates from the human desire to create and invent, commonly summarised under the umbrella term ‘science’. Ever since we mastered the ability to make fire and carve stones into sharp objects, we have lived in constant symbiosis with science – implementing ‘technology’, which in its essence translates to the ‘cunning of hand(craft)’. This relationship with science has fundamentally shaped human life in every sense, from becoming sedentary and governing collective coexistence, to new means of production, labour, communication, medicine, transportation, and warfare, to modifying behaviour, the sentiment of belonging, and social interaction. What started as a symbiosis has become an obsessive dependency, as modern technology exerts a subtle yet all the more domineering power over our lives today.
In George Orwell’s dystopian depiction of ‘1984’, a future full of oppression, authority over technology plays a decisive role in exercising control over the people. By obtaining knowledge of technology and the ability to control it, a superior entity of power arises and, with it, the capability of use, misuse, and abuse. However, the technological tools themselves, from the hand axe to nuclear energy, are neither good nor evil in their nature. It is not the empty core of the object, but the human component of ‘motive’, expressed through the intentional application of instruments and machinery, that shapes a constructive or destructive outcome (Fromm, 2011). Ethical considerations come into play in order to internationally regulate the potential misuse of technologies, ranging from the ban on biological weapons first discussed in 1972 (UNTS, 1972), to the prohibition of anti-personnel mines (UNTS, 1997), and of cluster munitions (UNTC, 2008). These international treaties aim to limit the implementation of destructive incentives and the central power of the dominant political economy in general (Chomsky, 2015), as well as to adjust the moral standards of warfare in particular.
Today, we are faced with innovative advancements, often referred to as the fourth industrial revolution or ‘Industry 4.0’, consisting of developments in the fields of information, network and communication technology, big data analysis, cyber technology, robotics, and so-called ‘artificial intelligence’ (European Parliament, 2015). These trends entail changes in a multitude of socio-economic and political realities.
This paper will examine the future impacts of robotics and ‘artificial’ forms of intelligence on the labour market and workplace, leaving room for personal reflections and remarks. The central question focusses on possible methods of personal augmentation aimed at maintaining a competitive advantage over machines. Whilst the essay presents and scrutinises prominent research on the topic, it is grounded in the personal experiences and contemplations of the author.
Robotics, ‘Artificial Intelligence’, and the Workplace
Robotic technology and ‘artificial intelligence’ have already started to transform workplaces and labour markets and will continue to do so in the future. Research predicts a change in manufacturing strategies that will make current means of production obsolete (Roland Berger, 2016), affecting the periphery and semi-periphery more than the centres – in particular, areas outside the Greater South East region of the United Kingdom (Centre for Cities, 2018). This leaves us with the question of how best to adapt to the new and seemingly aggravated labour circumstances. In answering it, two perspectives shall be presented: a general one, taking the bigger picture into account, and a personal one, offering insight into my own case.
Speaking from an overall point of view, we – as a critically engaged society – must first understand the challenges posed by robotic innovation and ‘artificial’ forms of intelligence in order to adapt to them. Since the idea of ‘artificial intelligence’ was formulated by John McCarthy in 1955, the concept has changed in appearance and applicability, creating today’s distinction between ‘weak artificial intelligence’, meaning the simulation of intelligence by a machine, and ‘strong artificial intelligence’, defined as a computer’s ability to create self-learning processes, optimising its own programming based on experience (IBA, 2017). Hence, it is debatable whether ‘artificial’ and ‘intelligence’ are the appropriate terms to describe the phenomena. The broad field of robotics encompasses all forms of machines and devices that fulfil supporting but also life-saving tasks in the household, entertainment, medical, military, communication, and financial branches, from using a TV remote control during leisure time to detecting and defusing explosive devices (IBA, 2017). However broad or complex the applications and definitions may be, it is crucial to recognise that all forms of automation – through ‘artificial intelligence’, robotics, or otherwise – interfere with the human-based manufacturing, production, and development processes of goods and services. This opens challenges and opportunities for the individual and for society as a whole.
We – as a politically emancipated community – must therefore hold decision makers, political and economic powers, as well as international organisations accountable, so that they act in the best interest of humankind and the environment. Opportunities are to be strengthened, while risks and challenges shall be reduced. The knowledge we have obtained from research on the potential consequences of the increased introduction of robotics and ‘artificial intelligence’ must be implemented to the benefit of the individual as well as the company or government. Regarding the impact on the workplace, policy recommendations to support regions that have suffered significant loss of employment due to automation must be taken under serious consideration and transformed into action (Centre for Cities, 2018). It is essential to identify work sectors that are likely to grow in the future, such as education, healthcare and public administration, and to enhance them through specified training programmes, in particular for employees in shrinking sectors most affected by automation, such as manufacturing labour (Bakhshi, Downing, Osborne, Schneider, 2017). However, it is the responsibility of the community to engage political decision makers and those in positions of power, in order to democratically become part of a positive transition process. In this way, awareness is raised and turned into action, preventing potential exploitation of power and liberating oppressed individuals and classes (Freire, 1992).
We – as a morally obliged part of humankind – must participate in setting new ethical standards, promoting human rights, and appealing against immoral practices. If we want to compete against technology, in the workplace and elsewhere, we shall highlight our humanity. Feeling compassion, expressing warmth, caring selflessly, and loving are what make us human. Programmes and machines might be able to mimic these fundamental human capacities, but they will neither genuinely feel and intend them, nor fully comprehend humanity. Whilst any robotic interaction and communication is programmed, anthropomorphism explains why humans emotionally relate to humanoid robots, even though the robots are unable to feel the same way about us – shedding light on our human core (Duffy, 2002; Breazeal, 2000). Hence, in the process of engaging with the future challenges of technology, let us return to the core of humanity, focus on our similarities rather than our differences, endorse the reconciliation of hostile groups, and formulate a common and unifying human identity (Lederach, 2010). Against this background, it is crucial to keep Fromm’s (2014) warning in mind: humanity will be destroyed by its own scientific achievements if the acquisition of self-awareness is neglected or not applied to the improvement of social coexistence.
My personal view on how best to adapt to robotic innovation is strongly based on my own experiences and beliefs, which were already reflected in the general section to some extent. In striving to combine my professional background in conflict management with my academic focus on international development, I am determined to apply this knowledge in practice, resolving and preventing conflicts through fair and sustainable development initiatives. This includes the provision of political goods and safety to vulnerable or marginalised groups, the socio-economic empowerment and inclusion of the underrepresented, and the safeguarding of peace and stability by working towards consensus rather than fuelling division.
Besides consciously following the proposed general points – avoiding harsh circumstances and, to a certain degree, resisting computers in the first place – I would personally aim to augment myself technically by utilising technology to support and supplement my work. In my field, development programmes and conflict prevention rely on early-warning mechanisms that indicate a potential violent uprising in the near- or mid-term future. Automation could be harnessed by feeding various and extensive indicators into an innovative analytical system. These indicators could consist of inner dynamics, e.g. economic statistics, socio-political developments, military expenditures, arms imports, under- and unemployment rates, and data on education, and outer dynamics, e.g. regional political trends, social struggles in neighbouring countries, demand for natural resources, and climate factors such as droughts. By scrutinising these dynamics according to their respective degree and extent of gravity, the computer-based system could predict future incidents, refine its results based on experience, and alter its predictions depending on suggested countermeasures.
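To make the idea of such an indicator-based early-warning mechanism concrete, the following is a minimal sketch of how a weighted risk score over normalised inner and outer dynamics might be computed. All indicator names, values, weights, and the alert threshold are hypothetical assumptions for illustration only; a real system would calibrate them against historical conflict data and, as argued below, could never replace human judgement.

```python
# Illustrative sketch of an indicator-based early-warning score.
# All indicators, weights, and the threshold are hypothetical
# assumptions, not drawn from any existing system.

def risk_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalised indicators (each in [0, 1])."""
    total_weight = sum(weights.values())
    return sum(indicators[name] * w for name, w in weights.items()) / total_weight

# Hypothetical inner- and outer-dynamic indicators, normalised to [0, 1]
indicators = {
    "unemployment": 0.6,            # inner dynamic: labour-market stress
    "arms_imports": 0.8,            # inner dynamic: militarisation trend
    "neighbour_instability": 0.4,   # outer dynamic: regional spillover risk
    "drought_severity": 0.7,        # outer dynamic: climate pressure
}
# Weights reflect the assumed 'degree and extent of gravity' of each factor
weights = {
    "unemployment": 2.0,
    "arms_imports": 3.0,
    "neighbour_instability": 1.0,
    "drought_severity": 1.0,
}

score = risk_score(indicators, weights)
warning = score >= 0.5  # illustrative alert threshold
```

A learning-based system would go further, adjusting the weights from past outcomes and re-running the score under proposed countermeasures; the sketch only shows the static scoring step.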
However, it is essential to point out that the human component in conflict prevention cannot be replaced. Even though the programme’s results might seem accurate, every conflict is distinct, and no situation can be understood solely by relying on previous examples and factors. There is no panacea for solving or avoiding violent conflicts, nor does a ‘universal template’ for conflict analysis exist (Ramsbotham, Woodhouse, Miall, 2011). Conflicts are man-made; their reasons lie hidden behind human actions, reasoning, and emotions, which require human solutions. In any case, personal dialogue between the parties is indispensable, both in conflict prevention and in development work. Therefore, neither robots nor ‘artificial intelligence’ can render solutions to conflicts or implement mechanisms to enhance sustainable development.
The future of work will change due to technology and automation. In a scenario where the individual worker becomes increasingly obsolete, unemployed, and troubled by poverty, the danger of social dissatisfaction, turbulence, and uprising increases, as those in power are more prone to abuse forms of policing, surveillance, and control by applying technology immorally (West, 2015). Today, enterprises such as Google and Facebook have already started the race to revolutionise ‘artificial intelligence’, buying up specialised companies and initiating expert research task forces, whilst dog-like combat robots are being tested by Boston Dynamics. The more realistic dystopian imaginations seem, the more pressing becomes the question of whether we are ready for future technology.
The potential of ‘artificial intelligence’ and robotics simultaneously constitutes its threat. To ensure the beneficial use of these technologies, a number of profound recommendations have been formulated: to identify short-term research priorities that enhance the general understanding of the field, to introduce related legal and ethical considerations addressing the risks of automated weapon systems, and to safeguard human privacy and data security (Russell, Dewey, Tegmark, 2015). Such proposals are to be developed further, and just as rights for non-combatants in war situations have been proclaimed, new rights regarding the innovations of ‘Industry 4.0’ shall be considered, starting from the already existing suggestion of ‘The Internet of Things Bills of Rights’ (European Parliament, 2016).
May we adhere to the lessons we learned from a destructive and war-torn past, and uphold humanity with dignity for our future and the generations to come.
- Bakhshi H., Downing J., Osborne M., Schneider P., (2017), “The Future of Skills, Employment in 2030”, Pearson and Nesta, London, UK.
- Breazeal C., (2000), “Sociable Machines: Expressive Social Exchange Between Humans and Robots”, Sc.D. dissertation, Department of Electrical Engineering and Computer Science, MIT.
- Centre for Cities, (2018), “Cities Outlook 2018”, Centre for Cities, January 2018, London.
- Chomsky N., (2015), “Masters of Mankind, Essays and Lectures 1969-2013”, Hamish Hamilton, Penguin Random House UK 2015.
- Duffy B. R., (2002), “Anthropomorphism and Robotics”, Media Lab Europe, Sugar House Lane, Bellevue, Dublin 8, Ireland.
- European Parliament, (2015), “Industry 4.0 Digitalisation for productivity and growth”, Briefing September 2015, European Parliament – European Union, 2015.
- European Parliament, (2016), “Industry 4.0 – Study for the ITRE Committee”, Directorate General for Internal Policies, Policy Department A: Economic and Scientific Policy, IP/A/ITRE/2015-02, PE 570.007, European Union, February 2016.
- Freire P., (1992), “Kein Abschied vom Traum einer humaneren Welt”, In: Schmidt L., Schröder S., (2016), “Entwicklungstheorien. Klassiker, Kritik und Alternativen”, Mandelbaum Verlag, pp. 392-410, Vienna, Austria.
- Fromm E., (2011), “Anatomie der menschlichen Destruktivität”, 23. ed, Rowohlt Taschenbuch Verlag, Hamburg.
- Fromm E., (2014), “Die Pathologie der Normalität zur Wissenschaft vom Menschen”, 5. Auflage 2014, Ullstein.
- IBA (International Bar Association), (2017), “Artificial Intelligence and Robotics and Their Impact on the Workplace”, IBA Global Employment Institute, April 2017.
- Lederach J. P., (2010), “Building Peace, Sustainable Reconciliation in Divided Societies”, United States Institute of Peace, Washington DC, USA.
- Orwell G., (1977), “1984”, Signet Classics, Penguin Group, New York, USA.
- Ramsbotham O., Woodhouse T., Miall H., (2011), “Contemporary Conflict Resolution”, 3rd ed., Polity Press, Cambridge UK.
- Roland Berger, (2016), “Think Act, Beyond Mainstream: The Industry 4.0 transition quantified. How the fourth industrial revolution is reshuffling the economic, social and industrial model”, Roland Berger GmbH, April 2016, Munich, Germany.
- Russell S., Dewey D., Tegmark M., (2015), “Research Priorities for Robust and Beneficial Artificial Intelligence”, Winter 2015, Association for the Advancement of Artificial Intelligence.
- UNTS (United Nations Treaty Series), (1972), “Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction”, 1015 UNTS 163; 11 ILM 309 (1972).
- UNTS (United Nations Treaty Series), (1997), “Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction”, 2056 UNTS 241; 36 ILM 1507 (1997).
- UNTC (United Nations Treaty Collection), (2008), “Convention on Cluster Munitions”, United Nations, Treaty Series, vol. 2688, p. 39; depositary notification C.N.776.2008. TREATIES-2 of 10 Nov 2008.
- West D. M., (2015), “What happens if robots take the jobs? The impact of emerging technologies on employment and public policy”, Center for Technology Innovations at Brookings, Governance Studies, The Brookings Institution, Washington D.C.