MARCH 10, TUESDAY

ARTIFICIAL INTELLIGENCE

Today I asked Claude, Anthropic’s free artificial intelligence app, three questions that I thought were difficult. The Trump administration’s so-called Department of War has a huge contract with Anthropic, which apparently was used to help target bombing opportunities in Iran, but it wants to use Claude for things that Anthropic believes are immoral, including the autonomous firing of weapons, in which Claude would both choose the targets and fire the shots without any human being involved. Anthropic has code that prevents the military from doing this.

So I thought I would check out Claude to see how smart it is, or maybe to see how smart I am. The first question I asked was what I should do to sort and organize the half million digital photographs I have stored in three places: on about 400 CDs and DVDs, on about twenty portable hard drives, and on the Apple cloud. The answer that I got back in about a minute was a four-page, well-written and well-organized answer that sounded to me like exactly what I would have to do, including costs.

My second question was why so many white Americans want to deport immigrants. I expected this to be difficult for Claude, since half of Americans reject immigrants and half don’t. Which side would Claude come down on? But it turned out that Claude was very moderate, thoughtful, and nuanced in its response, explaining the factors that made some white people reject immigrants and some not. It gave about five reasons people reject immigrants, with caveats about each of them. Again I got a four-page, well-reasoned and thoughtful answer, written and organized very well.

In both of these responses I didn’t detect any errors or hallucinations, or anything that made me suspect that this was a computer and not a very well-educated and thoughtful person giving me a response. I can see why some people come to think of an AI program as a trusted friend.