comhcinc wrote: ↑
I guess my issue is: let's say that happens and there is an AI that becomes very smart and "evil". How does it kill everyone? I mean, let's say the Google search engine (which does use a form of AI) becomes evil. What's it going to do? Give me the wrong porn results?

d4m10n wrote: ↑
John D wrote: ↑ Wed Jun 13, 2018 4:55 am
Harris is being strangely reactionary regarding AI. There is nothing magically risky about AI or any other kind of information-style technology. Harris looks like a Luddite trying to destroy machinery by tossing a wrench in the works. Why is AI so special that it is riskier than any other technology... steam trains... autos... computers... TVs... radios... the interblogs. People are not that good at reliably predicting the future because we don't really know how these new inventions will affect us. In my opinion, worrying about the future, and pretending you can make changes now to combat some future unknowable risks, is a waste of energy.

We have a big problem if we develop any general intelligence which has the ability to incrementally improve upon its own intelligence (or the intelligence of subsequent iterations) at great speed, but which has goals even slightly out of alignment with our own. Suppose, for example, that the first self-improving AI somehow ends up with a set of terminal values drawn from the Quran. We'd all be living in Dar al-Islam within less than a generation.

Keating wrote: ↑
There's some truth here, particularly given that even if the US doesn't develop AI, China almost certainly will. I think AI is much further off than some people worry, but I do think it is something to worry about. It's special because we're playing with what makes humans special. The risk, as I see it, comes from the perception the AI has. I can't remember who said it, but someone pointed out that even if we could communicate with ants, we'd have nothing to discuss; we're just too different. If the AI we develop is more intelligent than us, but it can't understand us, we have a big problem.

Let's say you're on probation and you get caught with a little pot.
Your legal advice is automated, as is the adjudication, though there is still a human jury. Machine learning has become adept at interpreting law, having been trained on thousands of prior cases. There is also a probation database that predicts your likelihood of reoffending against the cost to the state, building up a fairly accurate picture of what you're likely to do and what the "best" punishment would be.
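To make that concrete, here's a toy sketch of the kind of cost/benefit recommendation I mean. Everything in it is hypothetical: the punishment options, the costs, and the risk-reduction numbers are made up purely for illustration.

```python
# A toy sketch (all names and numbers hypothetical) of a cost/benefit
# sentencing recommender: pick the punishment that minimises the
# predicted cost of reoffending plus the cost to the state.
PUNISHMENTS = {
    # name: (cost to the state, assumed reduction in reoffending risk)
    "fine":       (1_000,  0.05),
    "probation":  (5_000,  0.15),
    "prison_1yr": (40_000, 0.30),
}

def recommend(base_risk: float, cost_of_reoffence: float) -> str:
    """Return the punishment with the lowest total expected cost."""
    def total_cost(option: str) -> float:
        cost, risk_reduction = PUNISHMENTS[option]
        residual_risk = max(base_risk - risk_reduction, 0.0)
        return cost + residual_risk * cost_of_reoffence
    return min(PUNISHMENTS, key=total_cost)

# A minor offence, but a high predicted base risk:
print(recommend(base_risk=0.4, cost_of_reoffence=100_000))  # -> "probation"
```

Notice that nothing in the objective cares about proportionality: crank the assumed cost of reoffending high enough and the harshest option always wins, which is exactly where the hard-labour worry below comes from.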
The problem is that the system has already inherited the biases baked into those prior decisions, without anyone even knowing. Worse still, the reach of the system is so great that even your appeal is automated and weighed in the same way. The recommended judgement would be, in other words, perfectly "fair". But what if the system decided that the most cost-effective punishment was a life of hard labour? Or, to take an extreme case, death?
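Here's a minimal sketch of how that inheritance happens, on entirely made-up data. The model never sees the protected attribute at all; it only sees a correlated proxy (a "neighborhood" feature, my hypothetical stand-in), yet it still reproduces the historical disparity in the labels it was trained on.

```python
# A minimal sketch (hypothetical data, no real system) of how a model
# "inherits" bias without ever being shown the protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden group membership -- never included in the model's features.
group = rng.integers(0, 2, n)

# A "neighborhood" proxy feature that merely correlates with group.
neighborhood = group + rng.normal(0, 0.5, n)

# Historical labels: past decisions punished group 1 more often,
# independent of any real difference in behaviour.
y = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

# Train on the proxy alone; 'group' is absent from the inputs.
X = neighborhood.reshape(-1, 1)
model = LogisticRegression().fit(X, y)

risk = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", risk[group == 0].mean())
print("mean predicted risk, group 1:", risk[group == 1].mean())
# The model recovers the historical disparity through the proxy,
# while looking perfectly 'fair' -- it never saw 'group' at all.
```

The disparity in the printed risk scores comes entirely from the biased training labels, leaking back in through the proxy. An automated appeal trained on the same data would simply confirm the original decision.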
We assign objectivity to machines that they don't actually have. That's the danger of AI.