AI algorithms intended to root out welfare fraud often end up punishing the poor instead

President Donald Trump recently suggested there is “tremendous fraud” in government welfare programs.

Although there’s very little evidence to back up his claim, he’s hardly the first politician – conservative or liberal – to vow to crack down on fraud and waste in America’s social safety net.

States – which are charged with distributing and overseeing many federally funded benefits – are taking these fraud accusations seriously. They are increasingly turning to artificial intelligence and other automated systems to determine benefits eligibility and ferret out fraud in a variety of benefits programs, from food stamps and Medicaid to unemployment insurance.

Of course, government agencies should ensure that taxpayer dollars are spent effectively. The problem is these automated decision-making systems are sometimes rife with errors and designed in ways that punish the poor for being poor, leading to tragic results.

As a clinical law professor who has researched safety net programs and has represented low-income clients in public benefits cases for over 20 years, I believe it’s essential these systems are designed in ways that are fair, transparent and accountable to prevent hurting society’s most vulnerable.

First, it’s important to make one thing clear: The evidence suggests incidents of user fraud in government welfare programs are rare.

For instance, the food stamp program, formally called the Supplemental Nutrition Assistance Program, currently serves about 40 million people monthly at an annual cost of US$68 billion. Despite regular denigration of food stamp recipients, less than 1% of benefits go to ineligible households, according to the federal government.

And most of those overpayments result from mistakes made by recipients, state workers or computer programmers as they navigate complex regulatory requirements – not from any intent to defraud the system.

As for Medicaid, which provides health insurance for low-income people, research has shown that the bulk of fraudulent activity is committed by health care providers – not by the 64 million needy people who use the program.

Within unemployment insurance, the “improper payment” rate for 2019 is 10.6%, which includes payments that should not have been made or that were made in an incorrect amount, but intentional fraud estimates are much lower.

Nonetheless, many states seem to be adopting systems that assume criminal intent on the part of the needy.

Many states have begun using “sophisticated data mining” techniques to identify fraud in the food stamp program, according to the Government Accountability Office. Another report identified 20 states using AI tools in unemployment insurance. And the federal government is providing support to state Medicaid programs to upgrade their decades-old technology with more advanced software.

These types of automated decision-making systems rely on algorithms, or mathematical instructions. Some algorithms use machine learning – a form of artificial intelligence – to replace decisions that would otherwise be made by humans. They analyze large sets of data to recognize patterns or make predictions.
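None of the state systems' actual code is public, but the pattern-recognition idea above can be illustrated with a deliberately simplified sketch: flag benefit claims whose amounts deviate sharply from the historical pattern. Real systems use machine-learned models over many variables; the statistical threshold here is a hypothetical stand-in.

```python
from statistics import mean, stdev

def flag_outliers(history, new_claims, k=3.0):
    """Return claims more than k standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [claim for claim in new_claims if abs(claim - mu) > k * sigma]

# Weekly benefit amounts seen historically, then two incoming claims.
history = [190, 200, 210, 195, 205, 198, 202]
print(flag_outliers(history, [201, 900]))  # -> [900]: only the anomaly is flagged
```

Note what even this toy version cannot tell you: whether the $900 claim is fraud, a data-entry error or a legitimately unusual case. The model only detects deviation from a pattern; the intent behind the deviation still requires human judgment.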

But officials should approach these systems with caution. The results for low-income families with little margin for error can be disastrous.

For instance, in Michigan, a $47 million automated fraud detection system adopted in 2013 made roughly 48,000 fraud accusations against unemployment insurance recipients – a five-fold increase from the prior system. Without any human intervention, the state demanded repayments plus interest and civil penalties of four times the alleged amount owed.

To collect the repayments – some as high as $187,000 – the state garnished wages, levied bank accounts and intercepted tax refunds. The financial stress on the accused resulted in evictions, divorces, destroyed credit scores, homelessness, bankruptcies and even suicide.

As it turns out, a state review later determined that 93% of the fraud determinations were wrong.

How could a computer system fail so badly? The computer was programmed to detect fraud when claimants’ information conflicted with other federal, state and employer records. However, it did not distinguish between fraud and innocent mistakes, it was fed incomplete data, and the computer-generated notices were designed to make people inadvertently admit to fraud.
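The core design flaw described above – any record conflict equals fraud – can be sketched in a few lines. This is a hypothetical reconstruction of the failure mode, not Michigan's actual code: a rule that flags a claim whenever the claimant's reported wages disagree with employer records, with no category for innocent discrepancy.

```python
def naive_fraud_flag(claimant_wages, employer_wages):
    """Flag a claim as 'fraud' whenever the two records disagree on any quarter.

    A one-quarter typo by an employer's payroll clerk is indistinguishable,
    under this rule, from deliberate deception by the claimant.
    """
    return any(claimant_wages[q] != employer_wages.get(q) for q in claimant_wages)

claimant = {"2013Q1": 4200, "2013Q2": 4200}
employer = {"2013Q1": 4200, "2013Q2": 4300}  # $100 reporting discrepancy
print(naive_fraud_flag(claimant, employer))  # -> True: flagged with no human review
```

A less harmful design would treat a mismatch as a trigger for human review rather than an automatic fraud determination – which is essentially what the 93% error rate showed was missing.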

Michigan is not an outlier. Program-wide algorithmic errors have similarly plagued Medicaid eligibility determinations in states such as Indiana, Arkansas, Idaho and Oregon.

And the issue isn’t just an American one. Many countries such as Australia and the U.K. are embracing these types of systems and encountering similar problems. The United Nations special rapporteur on extreme poverty and human rights issued a report in October that warned governments across the world to “avoid stumbling zombie-like into a digital welfare dystopia” as they automate their social welfare systems.

In a closely watched decision, a court in the Netherlands recently halted a welfare fraud detection system, ruling that it violates human rights. The decision is likely to bring closer scrutiny to these systems worldwide, although Americans have fewer legal protections than their European counterparts.

AI won’t magically root out what little fraud there is from the welfare rolls.

Mistakes can happen when software developers translate complex regulatory requirements into code and when they make programming errors. The massive sets of data fed into automated systems inevitably will contain some inaccuracies and omissions. And algorithms can also replicate embedded societal biases and end up discriminating against marginalized groups.

Without a human in the decision-making loop, these mistakes become compounded as they flow through multiple data-sharing systems.

To avoid these problems, states and other governments should ensure the systems they install are transparent in how they function, are accountable for mistakes, and don’t give the private contractors hired to design them a financial incentive to kick people off the rolls. States should also make sure representatives of all affected groups are involved in the systems’ creation and monitoring.

In my research and legal work, I have found automated fraud detection is too often built on the assumptions that computers are magic and fraud among the poor is endemic. State officials should flip those assumptions and make computers work for the people rather than against them.

Professor Gilman’s law clinic has represented individuals affected by automated decision making in public benefits programs.
