
Is There a Way to Keep AI from Causing Great Damage?

Last week we ran a series of 20 images and asked readers to determine which were real. Analyzing the responses, we found that the fakes fooled readers 38% of the time, even though everyone was told in advance that some of the images were fake. The consequences of being able to make realistic-looking fake images that fool many people are enormous and could sway the 2024 election. Imagine a realistic photo of Joe Biden or Donald Trump on a stretcher being put into an ambulance with the caption: "Trump/Biden had a heart attack." A lot of people would believe the photo, no matter how many times it was denied later on. Many people would say: "You can't fool me. I saw the photo of it." Actually, you can fool them. Is there anything that can be done about this before it is too late?

Biden got Amazon, Google, Meta, Microsoft, and three other big players in the AI business to agree to some voluntary safeguards to try to make the problem less bad. One of the safeguards is to let other tech companies check out their software products before they are released. Exactly who would do the checking wasn't specified. Theoretically, the software would also be vetted for racial and gender bias. Another safeguard is some kind of watermarking that would allow people to determine if an image is fake. Of course, if the watermark is invisible and you have to feed the test image into some piece of software to get an answer, that is hardly worth much. A rule like: "Every AI-generated image must have a label 'This image was generated by a computer. It is not real.' in 12-point Helvetica bold in red on a white background in the lower right-hand corner of the image" would be a fine start, but that wasn't what was agreed to. Similarly, it would be nice if the byline on AI-generated texts was "This article was generated by a computer," but that wasn't in there either.
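The weakness of an invisible watermark as a consumer protection can be seen in a toy sketch. The marker below is a made-up metadata tag, not any real watermarking standard, and real schemes embed the signal in the pixels rather than appending bytes; but the point holds either way: only someone who runs detection software gets an answer, while a person simply looking at the image learns nothing.

```python
# Toy illustration (not a real standard): the "watermark" is a hypothetical
# byte marker an AI image generator might embed in its output files.
AI_TAG = b"ai-generated:true"

def embed_watermark(image_bytes: bytes) -> bytes:
    """Append the marker to the file (a stand-in for real steganography)."""
    return image_bytes + AI_TAG

def is_ai_generated(image_bytes: bytes) -> bool:
    """Only software that knows where to look can answer the question."""
    return AI_TAG in image_bytes

fake = embed_watermark(b"\x89PNG...pixels...")
print(is_ai_generated(fake))                 # True
print(is_ai_generated(b"\x89PNG...pixels"))  # False
```

Nothing about the image as displayed changes, which is exactly the problem: the viewer sees the same picture whether or not anyone ever runs the check.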

Whether a voluntary scheme like this can succeed remains to be seen, especially since making it easy for people to distinguish fakes from the real McCoy defeats the purpose of generating the fakes. Companies make fakes because they want people to believe them. We doubt that this scheme will provide much protection, and it will probably soon fall apart.

Congress could step up and mandate such visible disclosures. Some companies would no doubt scream "FIRST AMENDMENT," but Congress has passed laws about other kinds of disclosures and they have held up in court. Products made in China have to be labeled "Made in China," even if the manufacturer would prefer not to. Most packaged foods have to list their ingredients, usually in the order from the largest amount on down. If the top two ingredients are water and sugar, they have to list them, like it or not. Packages of cigarettes have to have a warning that smoking isn't so good for you. The power of Congress to force companies to label their products is well established.

A completely different approach is to rely on tort law. If you have a swimming pool in your unfenced back yard and a local child falls into it and drowns, it's your fault. You should have had a big fence around your yard. Similarly, if a depressed teenager asks a chatbot for the three best ways to commit suicide and it suggests eating rat poison, shooting yourself, and jumping out of a 10th-story window, the maker of the chatbot could be held liable. The possibility of being sued could well cause the makers of AI software to be careful. This is no doubt the reason that Adobe blocks many kinds of images, as we mentioned last week.

The problem with relying on tort law is proving that the chatbot's advice caused the act, and that absent the advice, the teenager wouldn't have done it. Also, sometimes the damage is indirect, such as AI generating disinformation about elections and candidates. The threat of lawsuits alone probably isn't enough to keep AI companies honest, but it could have some value alongside legal regulation. (V)
