2023-12-28
The work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
AI (artificial intelligence) is nothing new. Alan Turing, the father of computer science, devised the Turing Test to assess whether a machine can be considered intelligent. What has changed since Turing's time (the early 1950s)? It is the availability of processing power, storage capacity, and the digitization of human artifacts (writing, drawing, painting, etc.).
Moore’s Law predicts that the number of transistors on a chip doubles roughly every two years (a cadence often quoted as 18 months). This roughly translates to a doubling of computing capability at the same rate. Science and engineering have managed to keep pace with Moore’s Law so far. This means the trajectory of machine intelligence will continue even without any fundamental innovation in approach.
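As a rough back-of-the-envelope illustration (assuming the 18-month doubling cadence quoted above), the compounding effect over a decade is approximately

\[
\text{capability multiplier over } t \text{ years} \approx 2^{t/1.5}, \qquad 2^{10/1.5} \approx 100\times .
\]

With the more conservative two-year cadence, the figure is closer to 2^{10/2} = 32x per decade; either way, the growth is exponential.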
The Age of AI makes information even more available to the individual than the Age of Information (search engines) did. This is because a search engine searches for relevant web pages based on the query and responds with links to those pages. A generative AI, on the other hand, retrieves and processes the information and directly answers the question in the prompt. The inquirer no longer needs to read the content of many web pages to extract the relevant information.
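As a rough sketch of the two interaction patterns (the functions below are hypothetical stubs for illustration, not any real search engine or LLM API):

```python
# Illustrative sketch only: search_web, read_page, and ask_llm are
# hypothetical stand-ins, not any real product's API.

def search_web(query: str) -> list:
    """A search engine responds with links to relevant web pages."""
    return ["https://example.com/page1", "https://example.com/page2"]

def read_page(url: str) -> str:
    return f"(contents of {url})"

def ask_llm(prompt: str) -> str:
    """A generative AI retrieves, processes, and answers directly."""
    return f"(direct answer to: {prompt})"

def age_of_information(query: str) -> str:
    """Search-engine workflow: the inquirer reads the pages and extracts the answer."""
    pages = [read_page(url) for url in search_web(query)]
    return "answer distilled by the human from: " + "; ".join(pages)

def age_of_ai(prompt: str) -> str:
    """Generative-AI workflow: one step, but the answer still needs verification."""
    return ask_llm(prompt)
```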
This attribute alone changes the fundamental interaction between humans and the Internet. Information searches are much more efficient with a generative AI. However, this also means the responses are more susceptible to distortion introduced by the intermediate processing steps. Verification becomes even more crucial in any serious research.
At this point, the most common type of generative AI engine, the LLM (Large Language Model), can mimic reasoning. This is because the AI is trained on text that describes reasoning. In other words, "reasoning" is performed by matching how words are used in the structures of sentences, paragraphs, and so on. For topics with a lot of published text, an LLM can mimic human reasoning reasonably well. However, on sparsely published topics, an LLM lacks the ability to reason, because a machine-learning engine needs many training samples to extract patterns.
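A toy illustration of this idea: the bigram model below is vastly simpler than an LLM, but it rests on the same principle of predicting the next word from observed usage patterns (purely illustrative; not how any production model is implemented).

```python
# Toy bigram "language model": a drastic simplification, shown only to
# illustrate that generation follows word-usage statistics rather than logic.
import random
from collections import defaultdict

corpus = (
    "if it rains the ground gets wet . "
    "the ground is wet so it probably rained . "
    "if it rains we stay inside ."
).split()

# Record which words follow which in the training text.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Continue a phrase by sampling words that followed the previous word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("if"))  # e.g. "if it rains we stay inside ." -- a pattern, not a deduction
```

With plenty of training text the output sounds sensible; with sparse text the patterns are simply not there to match.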
Jobs in STEM fields can potentially be impacted in the Age of AI. This is because many jobs in STEM fields are "intellectual" work, involving critical thinking, analysis, research, and problem-solving. Some of these capabilities overlap with what AI can do.
There are several ways that AI can influence the availability of STEM jobs and the qualifications they require.
The value of knowledge, by itself, has diminished with every invention from paper to the printing press to the Internet, and now generative AI. Knowledge can be looked up and acquired on a mobile device simply by verbally asking an AI assistant.
The application of knowledge is where current LLM AI struggles. In part, this is because the fast-changing nature of STEM subject matter limits the training samples available for LLM training. Even more importantly, applying knowledge in STEM areas requires logical rigor (precision and completeness). This is somewhat ironic, considering that computers are generally regarded as far more precise than humans.
Note that many STEM fields benefit from AI techniques. However, such applications originate from careful human design and engineering.