Entries tagged ai | Hugonweb Annotated Link Bibliography

Top of the slops

https://www.humprog.org/%7Estephen/blog/highered/top-of-the-slops.html

In higher education, I'd argue, our common goal across all disciplines (we use that word for a reason) is to turn out humans who are less sloppy than they would otherwise be. At present, LLMs stand to drive sloppiness upwards, because they are seen as ways to “relieve” humans of the tasks that deliver the real insights: the writing, programming, and perhaps even proof-writing through which we discover where our previous thinking was sloppy. When people don't go through that process of discovery, they don't acquire the skills that make them less sloppy. To become rigorous, you have to be confronted with your own sloppiness.

Giving university exams in the age of chatbots

https://ploum.net/2026-01-19-exam-with-chatbots.html

Interesting perspective on how software engineering students see chatbots. It seems almost none of them trust chatbots when things get serious, though the students may be biased by the instructor's AI scepticism.

Three Inverse Laws of Robotics

https://susam.net/inverse-laws-of-robotics.html

  • Humans must not anthropomorphise AI systems.
  • Humans must not blindly trust the output of AI systems.
  • Humans must remain fully responsible and accountable for consequences arising from the use of AI systems.

How to use computing power faster: on the weird economics of semiconductors and GenAI

https://gauthierroussilhe.com/en/articles/how-to-use-computing-power-faster

The author posits that selling ever more powerful processors requires a market that actually needs more computing power. During Microsoft's heyday, that need came from each new version of Windows or Office demanding more computing power (mostly because of bloat, the author thinks). The proliferation of smartphones, then crypto, then AI has continued the trend, so the semiconductor industry has become dependent on AI.

Given the supposed unprofitability of AI, and assuming no new fad emerges, the author sees a few possible futures:

  1. AI companies become much more efficient, using much less computing power. This would lead to a bust in the semiconductor industry.
  2. AI companies can't become efficient enough to become profitable, leading to an AI bust that also brings a bust in the semiconductor industry.
  3. AI companies increase efficiency just enough to become profitable while not harming the semiconductor industry too much. This is the "inference inefficiency optimum."

Bag of words, have mercy on us

https://www.experimental-history.com/p/bag-of-words-have-mercy-on-us

I think this approach of thinking of LLMs as "bags of words" or, as I might say, boxes of text, is apt. AIs aren't honest or dishonest, but the box of text may be biased in one way or another. To me, the Internet is biased toward a lot of BS and inaccuracy, and that's what I expect from LLMs.
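
For readers unfamiliar with the term, a literal bag-of-words model throws away word order entirely and keeps only word counts. A minimal Python sketch of the idea (my own illustration, not from the article):

    from collections import Counter

    def bag_of_words(text: str) -> Counter:
        # Keep only word frequencies; order, grammar and context are
        # discarded. This is everything a literal "bag of words" retains.
        return Counter(text.lower().split())

    print(bag_of_words("the box of text is only as good as the text in the box"))
    # Counter({'the': 3, 'box': 2, 'text': 2, 'as': 2, ...})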

Since source texts of varying quality are mixed together in ways that are hard to untangle, it's difficult for the user to evaluate the quality of the LLM's output. People who don't grasp this fall back on an appeal to authority, even though an LLM is not necessarily a good authority on anything.

Anthropomorphising AI is only natural, but it's striking how an AI that seems smart leads people to treat it as an authority, or even as something godlike.

Generative AI runs on gambling addiction

https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/

That is: prompting generative AI hooks users through the same mechanism as gambling addiction. Spin the gacha! Just one more spin, bro, I can feel it!

Large language models work the same way as a carnival psychic. Chatbots look smart thanks to the Barnum Effect: you read what is actually a generic statement about people and take it as being personally about you. The only intelligence there is yours.