Posts tagged stochastic parrots

Something weird is happening with LLMs and chess

chess, LLM, ai hype, stochastic parrots

There are lots of people on the internet who have tried to get LLMs to play chess. The history seems to go something like this:

  • Before September 2023: Wow, recent LLMs can sort of play chess! They fall apart after the early game, but they can do something! Amazing!
  • September-October 2023: Wow! LLMs can now play chess at an advanced amateur level! Amazing!
  • (Year of silence.)
  • Recently: Wow, recent LLMs can sort of play chess! They fall apart after the early game, but they can do something! Amazing!

I can only assume that lots of other people are experimenting with recent models, getting terrible results, and then mostly not saying anything. I haven’t seen anyone say explicitly that only gpt-3.5-turbo-instruct is good at chess. No other LLM is remotely close.

via https://dynomight.net/chess/

untitled 760937656665374720

stochastic parrots, LLM, critical thinking, scai

oojamaflip-whatchamacallit:

voyaging-too:

thehungrycity:

Making students proofread an AI essay is a brilliant teaching moment: it gives students a first-hand understanding of what current LLMs can and cannot do, and especially of the things they look like they can do but absolutely cannot do.

start ID

A thread of tweets by C.W. Howell (@cwhowell123), with paragraph breaks added where each tweet is cut off and continues in the next.

So I followed @/GaryMarcus’s suggestion and had my undergrad class use ChatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to “grade” it–look for hallucinated info and critique its analysis. *All 63* essays had

hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterised. Every single assignment. I was stunned–I figured the rate would be high, but not that high.

The biggest takeaway from this was that the students all learned that it isn’t fully reliable. Before doing it, many of them were under the impression it was always right. Their feedback largely focused on how shocked they were that it could mislead them. Probably 50% of them

were unaware it could do this. All of them expressed fears and concerns about mental atrophy and the possibility for misinformation/fake news. One student was worried that their neural pathways formed from critical thinking would start to degrade or weaken. One other student

opined that AI both knows more than us but is dumber than we are since it cannot think critically. She wrote, “I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is.”

end ID