Simon Hutagalung
AI stands for Artificial Intelligence, which refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation (Schroer).
AI is based on the idea that machines can learn from experience, adjust to new inputs, and perform human-like tasks. It is used across industries, including healthcare, finance, education, and transportation, and is becoming increasingly important as technology advances. Years of development and innovation in AI have produced many machine learning models, and ChatGPT (GPT stands for Generative Pre-Trained Transformer) is one of them. ChatGPT is the fastest-rising machine learning model, built on the GPT-3.5 architecture, which uses deep neural networks to generate human-like text from the input it receives from users. Essentially, ChatGPT is an AI chatbot that can answer questions, provide information, and engage in conversations on a wide range of topics.
ChatGPT was launched in November 2022; within a week it attracted a million users, and within two months it had 30 million active users. Remarkably, as of January 2023, ChatGPT had crossed 100 million active users. When asked which industries could be affected by this kind of AI, David Nguyen, the Edson W. Spencer Chair for Innovation and Entrepreneurship at the University of Minnesota, responded, “I don’t know. Possibly all” (Hrapsky).
AI-powered chatbots like ChatGPT are now disrupting traditional search engines by offering greater convenience and accuracy. However, the results are far from flawless: when CNET used a self-designed AI engine to write several financial articles, the articles contained factual errors and plagiarism and required significant human editing.
Moreover, AI and machine learning models like ChatGPT pose serious challenges to data privacy and security, as data available in bulk and through unrestricted access can be manipulated and exploited for harmful purposes. There is a constant threat of cyberattacks, and leaked or altered data can be used for terrorist agendas, information warfare, or the disruption of industries. This concerns not only companies and industries but also state actors, who have been engaged in a war on terror for more than two decades.
AI is also upending geopolitics. According to Russian President Vladimir Putin, “Artificial intelligence is the future, not only for Russia but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.” Although AI has made manufacturing, transportation, agriculture, healthcare, and other sectors far more efficient, it is also forcing a shift in approaches to national security and in the architecture of modern militaries. In the coming years, countries will focus not only on economic growth through AI but also on improving national security.
Moreover, the AI race is not confined to one power. Not only the United States but also China has made trillions of dollars of investment in AI under its “Made in China 2025” plan, which aims to integrate AI into ten key industries, including IT, robotics, eco-friendly automotive, and aerospace equipment, by 2025. At the same time, China, and especially its cyberspace agency, worries that AI might undermine national security or even “split the country”.
China has recently formulated a draft regulation requiring Chinese tech companies to register their AI-generated products with the Cyberspace Administration of China and undergo a security assessment before the products are made available to the public. The aim is to ensure that AI does not pave the way for the “subversion of state power” or encourage violence, terrorism, extremism, and discrimination. Those who do not follow the regulations may face serious charges, fines of between 10,000 and 100,000 yuan, and criminal investigation.
Similarly, the United States has proposed a study on how to regulate AI tools like ChatGPT, given their negative implications for national security and education. U.S. security agencies emphasize that rules must be in place to ensure that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy”. According to President Biden, “Tech companies have a responsibility, in my view, to make sure their products are safe before making them public”. Moreover, the Center for Artificial Intelligence and Digital Policy, a U.S.-based technology ethics organization, has called on the U.S. Federal Trade Commission to prevent OpenAI from releasing any new commercial versions of GPT-4, citing concerns about bias, deception, and risks to privacy and public safety.
In a broader sense, then, the rise of AI and machine learning tools like ChatGPT has enormous benefits, automating small and large industries alike. But strict rules and regulations are needed, ones that neither hamper technological development nor permit terrorism, cyberattacks, or threats to national security.