With growing dependence on artificial rationalization, human reasoning and decision-making are under continuous suppression. While machine learning and deep learning empower machines to carry out functions and break assigned tasks into simpler ones, they nevertheless hasten the route towards a world order that is likely to be under the absolute control of Artificial Intelligence (AI). Does this indicate cutting humans entirely out of the loop?
This deliberate submission of power to machines has some assured repercussions in the realm of strategic stability which rational actors must take into consideration. Artificial intelligence refers to the simulation of human cognition – the capacity of the human mind to learn, interpret and reason – in machines, and it is fast becoming a defining feature of modern societies. Through the enhanced use of algorithms, AI optimizes the collection of a vast range of data, whether numeric or categorical, in the form of big data, measures that information and derives results accordingly. Artificial Intelligence is thus emerging as a vast technological industry for creating intelligent machines. Such machines would be capable of independent decision-making based on the level of subjectivity conceded to AI, and this subjectivity defines the rationale of the decisions machines make. Along with enhanced precision and prompt responses, this suggests that over-reliance on AI could well take the shape of absolute control.
Artificial Intelligence in the international arena acts as a modifier of global affairs and challenges, whether bilateral or multilateral. It is also transforming military strategies with its significant precision and speed by contracting the action-reaction loop. AI is being developed to assess and respond to problems with minimum human supervision, which, in turn, points towards autonomous crisis escalation with minimal or no chance of containment. One such example is the development of lethal autonomous weapon systems (LAWS). Taking a broad view of global affairs under the predominant existence of nuclear weapons, robotic and computational technology has so far effectively assisted states in maintaining the safety and security mechanisms of nuclear and fissile material and data. It is evident from the events of the Cold War era that, apart from human error, technological error within the realm of nuclear strategy could easily escalate towards nuclear war fighting or the accidental use of nuclear weapons, with a catastrophic domino effect. Despite their precision, speed and human-like reasoning, machines are likely to lack the situational judgement needed to weigh the risks of actions and their reactions across varying contexts. Reliance on artificial rationalization therefore means increased unpredictability and competition, which in turn means greater strategic instability around the globe.
Strategic stability demands confidence among nuclear weapon states that their adversaries will not be able to undermine their nuclear deterrence by any means. This assurance is crucial in the case of South Asia. Comprising three nuclear weapon states with inter-state rivalries, South Asia demands a stable strategic environment, which requires a considerable level of risk assessment and management. Machine learning and big data analysis are strategies already adopted in South Asia, as in other parts of the world, to predict and track an adversary's aggressive posturing. Although it is technically challenging for a state to locate and target all of its adversary's dispersed nuclear weapons and delivery systems during a crisis, AI maximizes this detection and tracking ability. Hence, it could provide a decisive strategic advantage to one party over the other. This likelihood convinces states to pursue greater reliance on advanced AI-supported defence technology while greatly increasing the chances of a malfunction or a misinterpretation of command.
The strategic stability of South Asia is already fragile. How that stability will evolve after the induction of AI has long been a bone of contention. China's New Generation Artificial Intelligence Development Plan and its AI advancements within the strategic realm could provoke greater aggression stemming from India's hegemonic designs; as a result, Pakistan's nuclear deterrence would be considerably undermined. This can lead to a mutual fog of war in terms of strategic vulnerabilities and disparities. Moreover, the cyber vulnerabilities and cyber-breach events in South Asia already foretell the emerging uncertainty currently undermining strategic stability in the region.
Furthermore, the prevalence of AI within the nuclear realm elevates the risk of an accidental or unauthorized use of nuclear weapons, which could in turn trigger escalation. Incorporating AI within the command and control mechanisms of nuclear weapon states would likely increase the risk of a misinformed and irrevocable weapons launch. China in pursuit of advanced AI, a bellicose India, and a balancing Pakistan (vis-à-vis India) would all be vulnerable to such misadventures arising from excessive and uncontrolled reliance on AI. In this regard, keeping the strategic stability of South Asia intact is a far more challenging matter than anywhere else on the globe.
Being an alluring domain, Artificial Intelligence has become a necessary evil which, given the risks discussed above, still poses an existential threat to humanity. It presses states around the world, and particularly in South Asia as a technologically nascent yet rapidly advancing region, to compete in ways that may eventually turn into their absolute submission to AI. Another alarming aspect is that human intelligence ultimately adheres to the imperatives of human security, whereas AI, if not programmed correctly, may not recognize or weigh human safety and security sufficiently. Instead of relinquishing total control and intentionally submitting to machines, which would itself be a risk-attracting phenomenon, Artificial Intelligence must be employed to assist and empower human cognition to better respond to collective and individual strategic challenges.