Symtha Koganti, Enerel Munkhbaatar
July 30th
Artificial Intelligence (AI) and Machine Learning (ML) have revolutionized the performance of modern technology and the way we interact with it. A recent example is ChatGPT, a generative AI system that produces human-like responses to tasks ranging from writing essays to writing code in various programming languages. Another familiar example is AI-generated art. Both of these examples have something in common: they have sparked discussion about the ethics surrounding AI. Though that discussion has grown louder in recent years, it is not particularly novel and has existed for nearly as long as AI itself.
One of the many products of the 20th century was the rise of science fiction, which introduced the general populace and many scientists to the idea of artificial beings and automatons with the capacity to think for themselves. From this familiarity and the invention of the programmable digital computer arose the Turing test, a method for judging whether a machine could seamlessly replicate human behavior. With the Turing test proposed early in the decade and the phrase “artificial intelligence” coined at a Dartmouth conference a few years later, the 1950s easily mark the beginning of AI.
The following decades launched AI into the spotlight, and funding was heaped upon research in the field, brought to a halt only by one thing: the lack of computational power. Ironically, it was only after the spotlight faded that computer capabilities increased and major milestones in the field were achieved. John Hopfield and David Rumelhart brought “deep learning” techniques that let computers learn from experience to new popularity, Waseda University built the music-playing robot WABOT-2, Richard Wallace created the chatbot A.L.I.C.E., and Deep Blue, a chess-playing computer, defeated a reigning world champion. To say the least, AI had grown rapidly over the course of a few decades and entered the 21st century with broad acceptance from large companies.
Up until the late 1970s, machine learning had been treated as synonymous with AI, but it is now considered a subset of AI. It is the idea that machines can learn from data without being explicitly programmed. The concept was first laid down by the pioneering computer scientist Alan Turing, who posed the question “Can machines think?”, which later gave rise to the Turing test. The Turing test holds that if a machine can converse with a human without being detected as a machine, it has demonstrated human-like intelligence.
Machine learning “learns” by recognizing patterns in large amounts of data. What makes ChatGPT and other chatbots possible is the field of natural language processing, in which models are trained on text to pick up patterns in language rather than on purely numerical data. Neural networks have also been put to use across many industries. An artificial neural network is made up of nodes loosely modeled on the neurons of the human brain; such a network can process images and, for example, detect whether a picture contains a boat or a bike. Today, businesses use machine learning to detect fraud, power recommendation systems, analyze images, develop autonomous cars, and more.
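As a rough sketch of the ideas above, the short Python snippet below builds a tiny artificial neural network from scratch: a handful of “nodes” loosely modeled on neurons, trained by gradient descent to separate two classes of points. The dataset, layer sizes, and learning rate are all illustrative choices (only NumPy is assumed); this is not a depiction of how production systems such as ChatGPT are built.

# A minimal neural-network sketch: 2 inputs -> 4 hidden nodes -> 1 output.
# The data is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two noisy clusters of 2-D points, labeled 0 and 1.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights for the two layers of "nodes".
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

lr = 0.5
for epoch in range(2000):
    # Forward pass: each layer applies its weights, then a nonlinearity.
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of class 1

    # Backward pass: gradients of the cross-entropy loss.
    d_out = (p - y) / len(X)
    d_W2, d_b2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    d_W1, d_b1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient-descent update: this step is the network "learning" from data.
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")

In practice, networks like this are built with frameworks such as PyTorch or TensorFlow and contain far more layers and parameters, but the core loop of forward pass, loss, and gradient update is the same pattern of learning from data described above.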
Needless to say, AI will continue to progress. Whether through better algorithms or simply through the steady growth of computing power and storage described by Moore’s Law, artificial intelligence seems to have few limits on its applications in the current age of “big data”. In the short term, language appears to be an area of focus through ChatGPT and other chatbots, as does cybernetics, which meshes biology and technology to relieve symptoms of Parkinson’s disease, among other uses. Goals further in the future, often related to bringing about intelligence and logic on par with or greater than a human’s, are more subject to conversations about ethics and the limits of human supervision. Nevertheless, wherever the field of AI goes, there is no doubt that it will continue to change the way we interact with technology.