My greatest fear: AI like ChatGPT replacing me eventually.
The rise of unregulated content creation through generative AI, exemplified by ChatGPT, poses a far greater threat to truth than the "fake news" of past elections. The era of "seeing is believing" is fading as we enter this new phase.
Today, the media spotlights generative AI for novelty feats such as crafting a Drake-style song or generating a surprising image of Pope Francis. Intriguing as these stories are, far less attention goes to AI's potential to displace jobs.
In my view, the creators of ChatGPT at OpenAI are effectively conducting gain-of-function research on humanity, a scenario straight out of dystopian fiction. Their argument that someone would have invented it eventually, so better that they control its development, is both absurd and flawed. Here's a preview of the future:
During his State of the Hack talk at the RSA security conference in San Francisco, NSA cybersecurity director Rob Joyce warned that it will become increasingly difficult to discern what is real from what is artificial. Non-English-speaking hackers from Russia will no longer be sending poorly crafted emails to your employees: adversaries, including nation-states and criminals, are already experimenting with ChatGPT-style generation to produce fluent English content that makes sense and passes the "sniff test." These capabilities exist today.
Politics raises another, non-technical concern: once generative AI advances to the point where distinguishing real from fake becomes genuinely difficult, how will the public react to future scandalous revelations, akin to Trump's "grab-'em-by-the-you-know-what" comment? If deepfakes become indistinguishable from reality, sensational claims may be dismissed as too extraordinary to believe. Candidates might even routinely attribute controversial statements to deepfakes, knowing it would be nearly impossible to prove otherwise.