GPT-4 and the Uncanny Valley – AI at an Inflection Point
On March 14, 2023, OpenAI, the company that gave the world ChatGPT, made a low-key announcement that it had released GPT-4. [1] In a departure from most technology companies' launches, OpenAI's understated technical announcement nevertheless listed a number of noteworthy achievements:
- GPT-4 is a multimodal model that accepts image and text inputs and emits text outputs
- GPT-4 achieved human-level performance on a number of professional and academic tests, such as the bar exam
- OpenAI had been intensively and iteratively testing and training GPT-4
OpenAI said that ChatGPT, powered by GPT-4, would deliver its "best-ever results ... on factuality, steerability, and refusing to go outside of guardrails." [2]
A major win for this version of the GPT 'engine' (GPT-4 is sometimes described as the 'engine' that powers ChatGPT [3]) is the introduction of safety and ethics measures. This remains a work in progress, but GPT-4 has been trained with Reinforcement Learning from Human Feedback (RLHF) to help the engine recognize questions that might elicit responses that are unethical, unsafe, abusive, or fraudulent, or that otherwise violate OpenAI's Usage Policies. [4]
The promise. Companies were quick to test and deploy solutions built on GPT-4: Microsoft launched its Security Copilot assistant on GPT-4 [5] to help cybersecurity professionals identify breaches and analyze data; a HustleGPT challenge was set up to launch businesses; [6] and other companies used the tool to analyze what happened in the Silicon Valley Bank collapse. [7] The possibilities seem endless, with some startups even starting to spend less on human coders because GPT-4 can code. [8]
The pause. However, there is a growing call to slow the development of artificial intelligence (AI) engines as a whole. A New York Times journalist wrote that his interaction with the GPT-4-powered ChatGPT left him feeling "dizzy and vertiginous," [9] and a growing number of significant technology luminaries have signed an open letter from the Future of Life Institute demanding that all AI labs "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." [10]
The posit. The development and announcement of new advanced AI engines seem to have struck a nerve within the technology community; GPT-4 appears to mark an inflection point for the industry. Could it be that this engine has finally reached the point where the Turing Test [11] crosses the Uncanny Valley? Are we at a point where GPT-4 demonstrates an AI program that can converse and communicate with a human being without being detected as a machine (the Turing Test [12])? And have we reached a point where its similarity to human behavior strikes us as realistic yet does not quite convince us fully, leaving us unsettled (the Uncanny Valley [13])? Have we entered an Uncanny Turing Valley?
The pundit. Established in 2021, the Fair Tech Institute (FTI) is Access Partnership's think tank, which develops substantive research into the myriad ways in which technology, business, government, and good governance intersect. Our focus is to provide strong, evidence-based research and insight into questions of technology and governance. Our mission is to offer thought leadership, new ideas, and well-considered approaches to the digital opportunities and challenges our world faces today. We closely monitor tech, regulation, and policy developments across the world.
References
[1] OpenAI Research on ChatGPT
[2] OpenAI Research on ChatGPT
[3] The Guardian: What is GPT4?
[5] Live Mint: How Microsoft uses GPT4 to launch cybersecurity assistant
[6] Fortune: OpenAI's HustleGPT Challenge
[7] The media autopsy of a bank run
[8] Startups Are Already Using GPT-4 to Spend Less on Human Coders
[9] The New York Times: GPT-4 is Exciting and Scary
[10] Future of Life Institute: Pause Giant AI Experiments: An Open Letter