US: Your AI has to explain its decisions

No more turning a blind eye to algorithmic bias and discrimination if US lawmakers get their way

For years, the tech industry has claimed that AI decisions are very hard to explain, but still pretty darn good. If US lawmakers get their way, that will have to change.

Citing the potential for fraud and for algorithms quietly tuned to deliver the answers big business wants, such as denying loans or narrowing housing choices, lawmakers are partnering with civic organizations to force the issue through the Algorithmic Accountability Act of 2022.

The idea that a black box, super high tech or otherwise, brings a certain digital whimsy to bear on life-altering decisions meted out to the masses seems a step too far. Especially, US senators argue, if it fuels troubling trends toward tech-driven discrimination.

If you’ve ever been denied a loan, your first question is “why?” That gets especially tricky if banks don’t have to answer beyond “it’s very technical; not only would you not understand it, you can’t, and neither do we.”

This kind of non-answer, buried in opaque techno-wizardry, was bound to prompt questions about the machine learning decisions now oozing from every tech pore we confront in our digital lives.

As tech extends into law enforcement initiatives where mass surveillance cameras aim to slurp up facial images and pick out the bad guys, the day of reckoning had to come. Some cities, like San Francisco, Boston and Portland, are taking steps to ban facial recognition, but many others are all too happy to place orders for the tech. But in the realm of public safety, computers picking the wrong person and dispatching cops to scoop them up is problematic at best.

Here at ESET, we’ve long been integrating machine learning (ML; what others market as “AI”) with our malware detection tech. We also maintain that the unfettered, final decisions spouting from those models have to be kept in check by human intelligence, feedback, and lots of experience. We just can’t trust ML alone to do what’s best. It’s an amazing tool, but only a tool.
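
To make that “tool, not oracle” idea concrete, here is a minimal sketch, not ESET’s actual pipeline, of one common way to keep a model’s verdict in check: act automatically only when the model is very confident, and route everything else to a human analyst. The thresholds, class names and fields below are illustrative assumptions.

```python
# Illustrative sketch only, not ESET's detection pipeline. It shows one common
# human-in-the-loop pattern: auto-decide only at high confidence, escalate the rest.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    MALICIOUS = "malicious"
    CLEAN = "clean"
    NEEDS_HUMAN_REVIEW = "needs_human_review"


@dataclass
class Sample:
    sha256: str
    ml_score: float  # model's estimated probability the sample is malicious (0.0-1.0)


def triage(sample: Sample,
           block_threshold: float = 0.95,
           allow_threshold: float = 0.05) -> Verdict:
    """Only act automatically when the model is very sure; otherwise escalate."""
    if sample.ml_score >= block_threshold:
        return Verdict.MALICIOUS
    if sample.ml_score <= allow_threshold:
        return Verdict.CLEAN
    # The gray zone goes to an analyst, whose labels can feed back into training.
    return Verdict.NEEDS_HUMAN_REVIEW


if __name__ == "__main__":
    print(triage(Sample(sha256="d2ab...", ml_score=0.62)))  # Verdict.NEEDS_HUMAN_REVIEW
```

The point of the gray zone is exactly the accountability lawmakers are asking for: somewhere in the loop, a person can explain, and answer for, the decision.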

Early on, we were criticized for not doing a rip-and-replace and letting the machines pick what’s malicious on their own, amid a marketing-driven craze for autonomous robots that just “did security”. But accurate security is hard, harder than the robots can manage unfettered, at least until true AI really does exist.

Now, in the public eye at least, unfettered ML is getting its comeuppance. The robots need overlords who can spot nefarious patterns and be called to account, and lawmakers are under steep pressure to make it so.

While the legal labyrinth defies easy explanation, and nobody can predict what will roll off the far end of the Washington conveyor belt, this sort of initiative spurs future efforts to make tech accountable for its decisions, whether or not machines do the deciding. The “right to an explanation” may seem like a uniquely human demand, but then we all seem to be unique humans, devilishly hard to classify and rank with accuracy. The machines just might be wrong.
