
stochastic parrots, LLM, critical thinking, scai

oojamaflip-whatchamacallit:

voyaging-too:

thehungrycity:

Making students proofread an AI essay is a brilliant teaching moment: it gives students a first-hand understanding of what current LLMs can and cannot do, and especially of the things they look like they can do but absolutely cannot do.

start ID

A thread of tweets by C.W. Howell (@cwhowell123), with paragraph breaks added where each tweet cuts off and the next one begins.

So I followed @/GaryMarcus’s suggestion and had my undergrad class use ChatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to “grade” it–look for hallucinated info and critique its analysis. *All 63* essays had

hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterised. Every single assignment. I was stunned–I figured the rate would be high, but not that high.

The biggest takeaway from this was that the students all learned that it isn’t fully reliable. Before doing it, many of them were under the impression it was always right. Their feedback largely focused on how shocked they were that it could mislead them. Probably 50% of them

were unaware it could do this. All of them expressed fears and concerns about mental atrophy and the possibility of misinformation/fake news. One student was worried that their neural pathways formed from critical thinking would start to degrade or weaken. One other student

opined that AI knows more than us but is dumber than we are, since it cannot think critically. She wrote, “I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is.”

end ID