Is ChatGPT the Gen Z CliffsNotes?

Move over, CliffsNotes… there’s a new “cheater” in town. With the launch of ChatGPT, an AI text-generation platform that is peculiarly good at being wildly vague, a sector of educators is sensing the start of something, well, artificial. The question remains: will students become prompt engineers and trade “low-code prompts” as a service to classmates who want to “skip” the notion of higher learning? Look, depending on the data sets, one can come off sounding like a native-tongue juggernaut or a bumbling fool. It’s interesting to read the reviews from professors who are in deep-shook mode over the potential for word fraud. Keep in mind, it helps to have a certain stature with language to take full advantage of ChatGPT, much like the text-to-art generators such as DALL·E 2.

The discourse around ChatGPT has flagged the damaging effect it might have on society, everything from the model encouraging torture and perpetuating sexism to enabling kids to cheat on their homework. You worry about the impact of AI-generated responses finding their way into the data that future chatbot tools are trained on, creating an indistinct, Ready Player One-style mush of references—a bovine slurry, churned up and fed back to us, a virus that drowns out anything new.

Article: ChatGPT’s Fluent BS Is Compelling Because Everything Is Fluent BS

The AI chatbot is trained on text from books, articles, and websites that has been “cleaned” and structured in a process called supervised learning. ChatGPT can write code, make up songs, and compose limericks and haiku. It remembers what it has written and makes careful edits upon request. It takes even the most random prompts in stride, composing stories that neatly tie competing strands together: Details that seem irrelevant in the first paragraph pay off in the last. It can tell jokes and explain why they’re funny. It can write magazine-style ledes, punchy and attention-grabbing, with cogent yet completely fabricated quotes.

All of this makes playing around with ChatGPT incredibly fun, charmingly addictive, and—as someone who writes for a living—really quite worrying. But you soon start to sense a lack of depth beneath ChatGPT’s competent prose. It makes factual errors, conflating events and mixing people up. It relies heavily on tropes and cliché, and it echoes society’s worst stereotypes. Its words are superficially impressive but largely lacking in substance—ChatGPT mostly produces what The Verge has described as “fluent bullshit.”

But that kind of makes sense. ChatGPT was trained on real-world text, and the real world essentially runs on fluent bullshit. Maybe the plausibility of a made-up movie like Oil and Darkness comes not because AI is so good, but because the film industry is so bad at coming up with original ideas. In a way, when you ask an AI to make you a movie, it’s just mimicking the formulaic process by which many Hollywood blockbusters get made: Look around, see what’s been successful, lift elements of it (actors, directors, plot structures) and mash them together into a shape that looks new but actually isn’t.

But to be honest, old-fashioned human-generated fluent bullshit—weaponized by social media—has already been pretty disastrous. In the UK, to pick just one example, a cadre of fluent bullshitters drove the country out of Europe and directly off a cliff. (“ChatGPT, write a speech about why Britain should leave the EU but fill it with arcane vocabulary and Shakespearean references.”) Post-truth, fluency is everything and bullshit is everywhere, so of course ChatGPT’s fluent bullshit feels plausible. It was bound to. It was trained on people.
