fafnir wrote:
Steersman wrote: Mon Jan 24, 2022 1:27 pm
Just re-reading Chomsky's Failed States - more than a bit of cause to think that neither the US nor the UK is quite the unblemished exemplar it tries to portray itself as.
A book I read a while back (I think The Great Delusion: Liberal Dreams and International Realities) made the case that the problem with liberal democracies is that they start to believe their own bullshit about their noble purpose. This provides a continual justification for war, since we can continually paint the enemy of the month as "the new Hitler". Hence we are quite warlike. It also means that we frequently mess up the realpolitik of balancing powers and so on by acting like pacifists when we should be rattling sabres, and antagonizing powers we should be forming alliances with.
Looks interesting - I'll keep an eye out for it. Or you, as the whore with the glass eye once said to a customer as he was leaving ...
But that's sort of the problem with unexamined assumptions and untenable premises - they often entail or produce some very sticky consequences, some quite problematic "perverse incentives". I remember reading a book by historian Barbara Tuchman - The Guns of August, if I'm not mistaken - on the lead-up to the First World War. One thing that stuck out, if I remember correctly, was the description of the mobilizations on both sides: at one point, the penultimate step prior to open hostilities, the Germans realized that if they "stood down" then the Allies would be in an ideal position - fully mobilized - to attack them. So they were pretty much forced to attack.
Pretty much the premise and theme of movies such as Fail Safe, Dr. Strangelove, and even The Terminator.
And, as the last example suggests, offloading the decision making to computers is often a Faustian bargain at best, a "penny-wise and pound-foolish" one. I remember reading of Elon Musk's prognostications on artificial intelligence, with which he seems to have something of a love-hate relationship:
But when it comes to artificial intelligence, [Musk] sounds very different. Speaking at MIT in 2014, he called AI humanity’s “biggest existential threat” and compared it to “summoning the demon.”
He reiterated those fears in an interview published Friday with Recode’s Kara Swisher, though with a little less apocalyptic rhetoric. “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” Musk told Swisher. “I do think we need to be very careful about the advancement of AI.”
https://www.vox.com/future-perfect/2018 ... ind-openai
You and some others here with at least a bit of programming skill under your belts might enjoy this blog post by Stephen Wolfram - creator of the mathematics program Mathematica and many related ones - on Logic, Explainability and the Future of Understanding:
Logic is a foundation for many things. But what are the foundations of logic itself? ....
That’s the same kind of question that’s increasingly being asked about all sorts of computational systems, and all sorts of applications of machine learning and AI. Yes, we can see what happens. But can we understand it?
I think this is ultimately a deep question—that’s actually critical to the future of science and technology, and in fact to the future of our whole intellectual development. ....
The Nature of Explainability:
What does it mean to say that something is explainable? Basically it’s that humans can understand it.
So what does it take for humans to understand something? Well, somehow we have to be able to “wrap our brains around it”.
https://writings.stephenwolfram.com/201 ... rstanding/
Perhaps of some related interest is his description of "a logic puzzle given by Lewis Carroll, that establishes (here with a 100-step proof) that babies cannot manage crocodiles". I'll take his word for it ...
https://blog.wolfram.com/2021/12/13/new ... ebra-logic
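Though for the skeptical, the classic three-premise version of that Carroll puzzle (from his Symbolic Logic: "Babies are illogical; Nobody is despised who can manage a crocodile; Illogical persons are despised") is easy to check by machine. Here's a minimal Python sketch - my own brute-force encoding, nothing to do with Wolfram's 100-step axiomatic proof - that verifies the entailment by exhaustive truth-table search:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

def entails():
    # Propositions: B = is a baby, L = is logical,
    #               D = is despised, C = can manage a crocodile
    for B, L, D, C in product([False, True], repeat=4):
        premises = (implies(B, not L)        # 1. Babies are illogical.
                    and implies(C, not D)    # 2. Crocodile-managers aren't despised.
                    and implies(not L, D))   # 3. Illogical persons are despised.
        # The conclusion must hold in every model where all premises hold.
        if premises and not implies(B, not C):
            return False
    return True

print(entails())  # True: babies cannot manage crocodiles
```

The chain is B → ¬L → D → ¬C, so the 16-row search confirms no counterexample exists.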
But the problem is generally that the logical consequences of various premises are often unfathomable, at least for us lesser mortals. And passing the buck, the steering wheel, to the "techno-geeks" is very often a cure worse than the disease - one is reminded of H.G. Wells' The Time Machine:
A work of future history and speculative evolution, Time Machine is interpreted in modern times as a commentary on the increasing inequality and class divisions of Wells' era, which he projects as giving rise to two separate human species: the fair, childlike Eloi, and the savage, simian Morlocks, distant descendants of the contemporary upper and lower classes respectively.
The Morlocks being, if I'm not mistaken, the ones who tended the machines on which the Eloi crucially depended.
https://en.wikipedia.org/wiki/The_Time_Machine
Some "grim meat-hook realities" there, indeed - as novelist John D. MacDonald once put it.