Welcome to the Pondera FraudCast, a weekly blog where we post information on fraud trends, lessons learned from client engagements, and observations from our investigators in the field. We hope you’ll check back often to stay current with our efforts to combat fraud, waste, and abuse in large government programs.
As a resident of California, I took personal interest in a recent bill introduced in the legislature that would create a drugged-driving task force and authorize the use of oral swabs to help identify drivers under the influence of drugs. Californians, after all, approved recreational marijuana in last November’s election.
This bill follows a recent study showing that drugged driving deaths have now surpassed drunk driving deaths, with a whopping 43% of 2015 traffic fatalities involving a legal or illegal drug. This all makes me wonder how lawmakers, law enforcement, and the courts are going to handle field sobriety tests in the future.
After all, the cheek swab used to detect cannabis cannot detect alcohol and many other drugs. And we’ve written on this blog many times about the dangers of, and rise in, opioid use. With all these “choices,” it appears that officers may need to administer multiple tests (alcohol, opioids, cannabis, etc.) to identify the potential influences affecting a driver.
While it would be nice to think that drivers would act responsibly, history shows us this is not the case. In Colorado, for example, CDOT conducted a study that revealed that 55% of marijuana users believed it was safe to drive under the influence of marijuana. And the number of fatalities involving drivers with active THC increased 250% from 2013 to 2015. While I know this doesn’t necessarily prove causation, to me at least, it certainly provides reason for concern.
A recent Cambridge University study revealed what many of us already know: each time we “like” a Facebook post, we are revealing something about ourselves. The results of the study were pretty jarring, though, as researchers found that Facebook “knows” its users quite well with just a small number of likes:
10 likes: as well as a colleague
70 likes: as well as a close friend
150 likes: as well as your parents
300 likes: as well as your spouse
This data can be used to predict gender, sexual orientation, political affiliations, and other important personal details. In fact, Facebook recently came under considerable criticism for research designed to identify psychological states of teenagers that could potentially be used for targeted advertising.
Analyzing social media data certainly presents opportunities for good, such as predicting and tracking influenza outbreaks. In many ways, it is the digital version of older, anecdotal prediction methods, such as that of a friend of mine who claimed he could predict prison riots by analyzing canteen purchases (inmates would stock up on supplies in anticipation of a lockdown).
Regardless of how you feel about social media, it’s important to know that each time you press the enter key, you are revealing a little bit more about yourself – even to people you will never meet. This may not be a bad thing… but it is a thing.
By this time, just about everyone has watched or read a news report about the WannaCry ransomware attack that hit the world’s computer networks on May 12th. Multiple variants of the program will likely attack computers for the foreseeable future, forcing individuals to pay bitcoin ransom or lose their data and causing serious harm to businesses including hospitals and governments.
Plenty has been written about the source of the attack and how it works. So, while every “connected” person should read about WannaCry to help protect themselves against future attacks, I don’t see any need to cover this ground here. For me, though, two interesting facets of the story really stand out.
First, I find it fascinating and somewhat inspiring that the attack was stopped by a 22-year-old vacationing cyber analyst who goes by the name MalwareTech, with assistance from his colleague Kafeine. These two, and countless others, operate in a world most of us know almost nothing about, keeping our systems safe. It reminds me of the classic Jack Nicholson speech from “A Few Good Men,” where he excoriates Tom Cruise for challenging the way he protects our safety. Of course, in this case there is no evidence of MalwareTech or Kafeine “fragging” any of their tech colleagues.
The second interesting point I took from this attack was that most of us could have protected ourselves simply by updating our operating systems and virus protection software. This is a conversation I’ve had innumerable times with my own family. Of course, this also puts software manufacturers in the difficult position of patching years-old operating systems to accommodate those who won’t or can’t upgrade.
Bottom line for me: this is just another reminder to remain vigilant and to be thankful for the computer techs who have dedicated their careers to protecting us from those who have chosen to attack us. I hope you can “handle that truth”.
One of my favorite websites, paymentaccuracy.gov, has received a number of updates which may provide some insight into the current administration’s priorities. If you haven’t done so already, I encourage you to visit the site as it provides improper payment information on the government’s high-priority programs: those that report over $750 million of improper payments in a year or have not established or reported on their error rates.
The current version of the site includes many of the usual suspects, including Medicaid ($36.3 billion in errors), Medicare fee-for-service ($41.1 billion), and the Earned Income Tax Credit ($16.8 billion with a whopping 24% error rate). SNAP continues to be listed but still does not provide reliable numbers because of inaccurate state reporting, something we have discussed in previous posts.
Other items of note are the inclusion of three Veterans Affairs programs for Disability Compensation, Community Care, and Purchased Long Term Services and Support. While the 0.59% error rate on the $64 billion Disability Compensation plan appears surprisingly low, the 75.86% error rate for the $4.7 billion Community Care program is likely the result of new reporting requirements… at least I genuinely hope so.
Other high error-rate programs include school nutrition services (both breakfast and lunch), student loan programs, and Unemployment Insurance, which ticked up to 11.65% this year.
Regardless of political leanings, I think we can all agree that we want our tax dollars going to those who need them the most. And the transparency provided by paymentaccuracy.gov is a great step toward this goal. My hope is that the government will continue to provide easy access to this information. I am still disappointed each time I visit the expectmore.gov website (which reports on program performance, not just fraud, waste, and abuse) where I see the following message:
“Expect More.gov was an initiative of the George W. Bush administration. This website has been archived and is posted here as an historical resource. It has not been updated since the end of 2008 and links to many external websites and some internal pages will not work.”
Last month CNN published a horrifying report on sexual abuse in America’s nursing homes and assisted living facilities. The report provided details on dozens of assaults, rapes, and other incidents that, quite frankly, were extremely difficult to read. In my opinion, however, this level of detail is probably necessary to shock people into taking action against what CNN rightly labeled “an unchecked epidemic.”
The numbers themselves are devastating. Approximately one million senior citizens are currently residing in 15,000 government-regulated long term care facilities. Since 2000, it appears that over 16,000 cases of sexual abuse have been reported, but the number is probably higher because of complex reporting systems and processes. And it’s impossible to determine the number of unreported cases.
Between 2013 and 2016, CNN found that 1,000 government-regulated facilities had been cited for mishandling or failing to prevent sexual assaults, and 100 of those facilities had been cited numerous times. Despite this, only 226 facilities were fined, for a combined total of just $9 million. Only 16 of the facilities were cut off from Medicaid and Medicare!
Just as disturbing as the actual cases of abuse is the blatant disregard of safeguards and even the intentional impeding of investigations. Consider a case here in California where the employer allowed a nurse to continue working for weeks after reports of him kissing and fondling a female resident. This crime, by the way, resulted in only a $27,000 fine.
At Pondera, we often say that fraud and abuse is most prevalent at the intersection of large amounts of money and vulnerable populations. This makes nursing homes “ground zero” for abuse because it is here that the escalating costs of long term care combine with dementia and other health issues that can make senior citizens problematic witnesses.
Among several recommendations made by CNN was a call for improved reporting systems. We agree that this is an important piece of the solution. It will provide greater transparency and help regulators identify trends and clusters of abuse. But clearly, stricter oversight and enforcement are needed. So too is the type of no-nonsense reporting that CNN did for this report.
It’s April, which every year brings more news about tax fraud scandals. The news this year, however, is even more disturbing than expected. IBM’s X-Force threat intelligence group released a report last week that showed a 6,000% increase in spam emails designed to steal information from W-2s and other tax documents. Last year, these criminals “earned” over $3 billion through similar scams. And if you were one of the victims, then you are already familiar with the hassles of having your refund stolen or a completely false return filed using your identity.
The continuing use of the Dark Web is a major factor behind the acceleration in this form of cybercrime. Stolen identities that include tax information are currently fetching around $40 on illicit marketplaces. While this may not seem like much, it is extremely lucrative when a phishing scam succeeds at stealing thousands of identities. So lucrative, in fact, that would-be scammers can even visit the Dark Web to buy online tutorials on how to perpetrate tax fraud.
Popular scams this year include sending emails that appear to come from TurboTax and other tax preparation companies, in the hope that you respond because you use that tax service. So-called spear-phishing scams are also targeting corporate human resources departments. They will often send an email to an HR manager, seemingly from a member of the company’s executive staff, requesting W-2 and other tax information on the company’s employees.
Cybercriminals will continue to hone their skills resulting in more convincing emails and websites. They will continue to take advantage of technologies that allow them to increase the number of outbound messages. And they will continue to learn and share new techniques on the Dark Web. This means that all of us, as businesses and as private citizens, need to step up our efforts to protect data. These days, it’s no longer just “a fool and his money” who are soon parted.
At Pondera, we are often asked whether fraud detection algorithms will ever completely replace human investigators. And while I can’t address the “ever” part of the question, I can confidently state that it will not happen in the foreseeable future. One of the major reasons for this? Prediction models, like many people, struggle to distinguish between cause and effect.
A Stanford University professor recently shared her research on this topic, which supports many of our own findings. She noted that while prediction algorithms are excellent at finding patterns in large data sets, their effectiveness is limited because they struggle with determining causation. One example she used: algorithms have been shown to help identify patients who should not receive hip surgery because they would likely die of other causes. However, the algorithms are unable to prioritize which patients should receive the surgery.
In several cases, the professor notes that correlation can be as low as 50%. And she properly notes that while this may be fine in certain situations, governments simply cannot conduct such high-risk experiments with social welfare, economic policies, and other important matters. And unlike controlled environments, such as those that use placebos to test medications, the real world is simply too messy and unpredictable to control all factors.
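The confounding problem the professor describes can be seen in a tiny simulation. This is a hedged, made-up illustration (all numbers and variable names are invented, not from her study): an underlying severity drives both the treatment decision and the outcome, so a naive model finds a strong association that would evaporate if anyone acted on it.

```python
import random

random.seed(0)

# Simulated confounder: underlying illness severity drives BOTH the decision
# to skip surgery AND the risk of death. All numbers are illustrative.
n = 10_000
severity = [random.random() for _ in range(n)]
skipped = [s > 0.7 for s in severity]           # sicker patients skip surgery
died = [random.random() < s for s in severity]  # sicker patients die more often

# A naive model sees a strong association: skipping surgery "predicts" death.
deaths_skipped = sum(d for d, sk in zip(died, skipped) if sk)
deaths_surgery = sum(d for d, sk in zip(died, skipped) if not sk)
p_skipped = deaths_skipped / sum(skipped)
p_surgery = deaths_surgery / (n - sum(skipped))
print(f"P(death | skipped surgery) = {p_skipped:.2f}")
print(f"P(death | had surgery)     = {p_surgery:.2f}")

# But severity, not the surgery decision, drives the outcome. Forcing surgery
# on the sickest patients would change nothing, which is why a model like this
# can rank risk but cannot prescribe the intervention.
```

The model ranks risk correctly, and that ranking is genuinely useful; it simply cannot tell you what to do about it.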
This problem of causation identifies an important intersection between human reasoning and prediction algorithms. We believe that in complex, rapidly changing environments like fraud detection, effective detection systems combine the power of modern detection algorithms with experienced human reasoning.
By leveraging the individual strengths of both machine and human learning, we can analyze massive data sets and make sense of the findings. We regularly use the system to find a problem and then ask human experts to help explain it. This makes the results actionable, which ultimately is what our government partners require.
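The division of labor described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the claims, threshold, and z-score heuristic are mine, not Pondera's actual system): the machine step narrows millions of records down to statistical outliers, and the human step reviews only that short list.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Claim:
    claim_id: str
    provider: str
    amount: float

# Hypothetical claims feed; in practice this would come from program data.
claims = [
    Claim("C1", "P100", 120.0), Claim("C2", "P100", 130.0),
    Claim("C3", "P100", 125.0), Claim("C4", "P200", 118.0),
    Claim("C5", "P200", 122.0), Claim("C6", "P300", 940.0),
]

amounts = [c.amount for c in claims]
mu, sigma = mean(amounts), stdev(amounts)

# Machine step: flag statistical outliers (simple z-score heuristic).
flagged = [c for c in claims if abs(c.amount - mu) / sigma > 2]

# Human step: investigators review the short list, not the full feed,
# and decide whether the outlier is fraud or a legitimate anomaly.
for c in flagged:
    print(f"Review queue: {c.claim_id} ({c.provider}) ${c.amount:,.2f}")
```

The machine never decides fraud here; it only decides what is worth a person's time, which is the point of the partnership.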
A recent arrest in New York City illustrates a common fraud method that Pondera has been talking about for years: falsifying an identity (of an individual or business) and using it across multiple states, or in this particular case, across multiple subsidy programs within a state.
In February of this year, the New York State Attorney General announced the arrest of several individuals allegedly involved with a fraudulent medical supply company. The company’s owner operated under a false Social Security number and billed the state Medicaid system for an expensive nutritional formula required by patients with feeding tubes. In actuality, when they delivered the service at all, they dispensed lower-priced Pediasure to dramatically increase their profits—apparently ignoring the health consequences to the patient.
But, as is often the case with bad actors, they didn’t stop there. In addition to their fraudulently obtained Medicaid profits, the fraudsters also used their fake Social Security numbers to claim income of less than $800 per month in order to qualify for welfare payments. This despite the fact that their medical “business” income was over $180,000 per year. It would not surprise me to learn that these same people were operating in other subsidy programs or in neighboring states.
This is a disturbing, but somewhat logical, pattern that we see again and again. When someone goes to the trouble of creating a fake identity or business, they use it to generate as much income as possible. They “fly below the radar” of each individual program (or state) to avoid detection, but the fraud can be very lucrative in aggregate.
The obvious solution to this is increased cooperation and data sharing across programs within a state and across states. The federal government has made significant efforts to support data sharing including the List of Excluded Individuals and Entities (LEIE), the Death Master File, and the Prisoner Update Processing System (PUPS) which can help identify claims that are fraudulently made by ineligible, deceased, or incarcerated identities.
Our hope is that these efforts expand, including at the state level, where multiple agencies cooperate to identify cross-program fraud schemes. It is not enough to detect and then stop individual incidents of fraud. Many of these incidents are too small, when viewed as discrete occurrences, to warrant prosecution. Knowing this, enterprising fraudsters “sprinkle” their claims across multiple jurisdictions to avoid attention.
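The "sprinkling" pattern above is exactly what shared data makes visible. Here is a deliberately simplified sketch (identities, programs, amounts, and thresholds are all hypothetical): each payment stays under its own program's review threshold, but aggregating across programs by identity exposes the scheme.

```python
from collections import defaultdict

# Hypothetical per-program payments keyed by identity; all values illustrative.
payments = [
    ("123-45-6789", "Medicaid", 9_500),
    ("123-45-6789", "Welfare",  8_200),
    ("123-45-6789", "SNAP",     7_900),
    ("999-99-9999", "Medicaid", 4_000),
]

PER_PROGRAM_LIMIT = 10_000    # each program's own review threshold
CROSS_PROGRAM_LIMIT = 20_000  # aggregate threshold across shared data

totals = defaultdict(int)
for identity, program, amount in payments:
    # Each claim stays "below the radar" of its own program...
    assert amount < PER_PROGRAM_LIMIT
    totals[identity] += amount

# ...but summing across programs exposes the aggregate scheme.
suspects = {i: t for i, t in totals.items() if t > CROSS_PROGRAM_LIMIT}
print(suspects)  # {'123-45-6789': 25600}
```

No single program in this sketch would flag the identity; only the shared view does, which is the argument for cross-program and cross-state data sharing.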
Unfortunately, as was the case in New York, even these smaller, distributed fraud efforts can have an impact on patient health. The good news is that New York detected and put an end to this incident. But we all know there are thousands of similar cases each year.
A few months ago, I wrote an article offering our support to the USDA Food and Nutrition Service (FNS) as it rolls out a new program offering online access to groceries for Supplemental Nutrition Assistance Program (SNAP) recipients. My main concern with the new initiative was that FNS cannot provide an accurate SNAP fraud rate because of unreliable data coming in from the states. And we all know that offering goods and services online presents even more opportunities for fraud.
Now Congress is asking FNS additional questions in a letter sent to the agency on February 8th. Outlining the lawmakers’ concerns, the letter points out that as many as 10% of retailers who accept SNAP EBT cards participate in illegal trafficking schemes. These schemes pay recipients a discounted amount of cash or unapproved grocery items in exchange for their cards. The lawmakers go on to point out that total annual fraud in the program is over $858 million.
The massive size of the SNAP program is one of the major reasons it has historically been so difficult to detect fraud. In 2016, the program distributed $67 billion in benefits to 44 million Americans through 260,000 authorized retailers. Interestingly, though, as much as 85% of the retailer fraud is committed by small grocery and convenience stores, or even flea markets like the one in Opa-Locka, FL that we recently wrote about.
With the advent of cloud computing and advanced analytics solutions, FNS now has access to the tools required to make a real difference in their fight against fraud. And by addressing the retailer side of the equation, they will also find, through association, many of the fraudulent individuals in the system as well. It would certainly make sense for FNS to leverage modern fraud detection technologies at the same time that they offer online access to groceries.
It is also important to note that the number of SNAP program retailers and recipients, while large, is very manageable. Consider that at Pondera we’ve performed equally complex fraud analytics on Medicaid programs with as many as 200,000 providers and Unemployment Insurance systems with over 1,000,000 employers. And when one considers that the overwhelming majority of SNAP trafficking fraud occurs in a concentrated subsection of small and medium retailers, the problem becomes even more manageable.
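One simple heuristic analysts commonly apply to retailer trafficking is the share of round, even-dollar transactions: cash-for-benefits swaps tend to use round numbers, while real grocery baskets rarely do. The sketch below is illustrative only (store names, amounts, and the threshold are invented, and a real analysis would also weigh store type, transaction velocity, and geography).

```python
# Hypothetical per-store EBT transaction amounts.
transactions = {
    "CornerMart": [12.37, 45.12, 8.91, 23.44, 31.06],
    "QuickStop":  [100.00, 50.00, 200.00, 75.00, 150.00],
}

def even_dollar_ratio(amounts):
    """Share of transactions ending in .00, a classic trafficking signal:
    cash swaps use round numbers; real grocery totals almost never do."""
    return sum(1 for a in amounts if a == int(a)) / len(amounts)

# Flag stores where most transactions are suspiciously round
# (the 0.8 threshold is illustrative, not an FNS standard).
flagged = [store for store, amts in transactions.items()
           if even_dollar_ratio(amts) > 0.8]
print(flagged)  # ['QuickStop']
```

A screen like this is cheap to run across all 260,000 retailers and concentrates investigator attention on the small subset where trafficking is most likely.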
I read with great interest the story this month about a woman who cheated her way to a second-place finish in the Fort Lauderdale half marathon. After she posted a time of 1 hour and 21 minutes, the website www.marathoninvestigation.com revealed several problems with her results: the race statistics she posted to a website were manually entered (rather than calculated by her GPS), a second set of results she posted seemed more consistent with a bike ride, and a zoomed photo of her post-race wristwatch revealed that she ran only 11.65 miles of the 13.1-mile race. This evidence led to an admission and apology from the runner.
What I find interesting about this incident is how indicative it is of the ever-increasing power of data. While runners collect data to help them train and perform better, it can also be used to uncover cheating and fraud. This is no different in government subsidy programs, like Medicaid and welfare systems. Governments collect data to help them improve service delivery to their constituents, and with modern technologies, the data can also reveal fraudulent anomalies and patterns.
Of course, bad actors who want to defraud programs are aware of the increased use of data to catch them. Gone are the days when they could blatantly abuse government systems, knowing that the size and complexity of the programs made it nearly impossible to catch the cheats. In running, who would dare to repeat Rosie Ruiz’s 1980 Boston Marathon “victory,” where she was spotted riding the subway with her runner’s bib?
Instead, bad actors often “fly under the radar” – stealing smaller amounts over longer periods of time to avoid being noticed. Second place in the Fort Lauderdale Marathon is certainly “under the radar” compared to a victory in the Boston Marathon.
So, now that our fraud detection capabilities can catch bad actors who boldly fly above the radar and those who strategically fly below the radar, one would hope that it would lead to decreases in fraud attempts. But I also know that making fraud harder to commit rarely turns fraudsters into honest and contributing members of society. It just makes them work harder. This simple fact provides us with the incentive to continually improve on our technologies and approaches. This is one war we fully intend to win.