Ben Sargent
To see more political cartoons from Ben Sargent, visit our Loon Star State section, or find Observer political reporting here.
Amid the U.S.’s 20-year war against the Taliban, Afghans who feared retribution for aiding U.S. troops were promised refuge in America, as long as they met certain conditions. A deal recently struck in Congress issuing 12,000 more visas for these brave allies is a welcome down payment on that obligation. But the U.S.’s responsibility doesn’t end there.
Afghans who worked with the U.S. military go through a lengthy vetting process in order to receive special immigrant visas granting them and specified family members permanent residence in the U.S. More than 80,000 candidates were in the visa pipeline as of March 1, a quarter of whom had been cleared for final vetting. Before the recent compromise, the program was set to run out of visas by August.
While officials could theoretically have continued judging applications after that and used other tools, such as humanitarian parole, to bring Afghans to America, there would inevitably have been pressure to scale back efforts. Thousands of Afghans could have been left stranded and subject to harassment by the Taliban, or worse.
Such an abdication of responsibility would not only endanger Afghans but also tarnish America’s global standing. The network of allies and partners the U.S. has built up across the globe is a key advantage against rivals such as Russia and China. Yet concerns about U.S. reliability persist — and they will only worsen if Washington reneges on its promises to Afghans. Few armies will be eager to stand alongside the U.S. if they, too, fear abandonment once the fighting stops.
The extra visas offer some relief, and the SIV program has also been extended for two more years. But they still fall far short of the need; last year, the Senate overwhelmingly supported language that would have authorized an additional 20,000 visas and kept the program going until the end of 2029.
Meanwhile, thousands of Afghans evacuated from Kabul after the U.S.’s 2021 withdrawal are living in the U.S. but don’t qualify for SIVs. Congress has repeatedly punted on a deal to create a pathway to legal residency for them.
Both parties in Congress should work to extend the SIV program further and at a minimum authorize additional visas for all those who qualify, as well as to offer a path to green cards for evacuees. The U.S. betrayed its Afghan partners once by leaving their country so ineptly. The least Congress can do is ensure they’re not forgotten.
— The Bloomberg Opinion editorial board
Lawrence K. Zelvin: Fraudsters have artificial intelligence too
Soon, personal artificial intelligence agents will streamline and automate processes that range from buying your groceries to selling your home. You’ll tell it what you want, and it will do the research and legwork, log into your personal accounts and execute transactions in milliseconds.
It is a technology with extraordinary potential, but also significant new dangers, including financial fraud. As Gail Ennis, the Social Security Administration’s inspector general, recently wrote: “Criminals will use AI to make fraudulent schemes easier and faster to execute, the deceptions more credible and realistic, and the fraud more profitable.”
The story of cyberfraud has always been a technological arms race between criminals and those they're trying to rob, each side racing to innovate faster than the other. In banking, AI's advent both supercharges that competition and raises its stakes.
When scammers used an AI-powered audio deepfake to convince the CEO of a British utility to transfer $243,000 to a Hungarian bank account in 2019, it was called “an unusual case” because it involved AI. Such scams are no longer unusual.
Earlier this year, criminals made headlines when they used deepfake technology to pose as a multinational company’s chief financial officer and tricked one of the company’s employees in Hong Kong into paying the scammers $25 million.
Globally, 37% of businesses have experienced deepfake-audio fraud attempts, according to a 2022 survey by identity verification firm Regula, while 29% have encountered video deepfakes. And that doesn’t include individuals who receive realistic-sounding calls purportedly from hospitalized or otherwise endangered family members seeking money.
As these AI-enabled fraud threats proliferate, financial institutions such as BMO, where I lead the financial crimes unit, are working to continually innovate and adapt to outpace and outsmart the criminals.
Fraud, with an estimated annual tab of $8.8 billion in 2022, was a festering problem even before the COVID-19 pandemic, which sparked a dramatic increase in online financial activity. According to TransUnion, instances of digital financial fraud increased by 80% globally from 2019 to 2022, and by 122% for U.S.-originating transactions. LexisNexis Risk Solutions calculated in 2022 that every dollar lost to fraud costs $4.36 in total as a result of associated expenses such as legal fees and the cost of recovering the stolen money.
Generative AI, by its very nature, doesn’t require high-tech skills to use — a fact criminals are leveraging to find and exploit software and hardware vulnerabilities. They also use AI to tailor their phishing attacks more precisely, drawing on enhanced searches of social media and other publicly available information.
Then there’s synthetic fraud, one of the fastest-growing categories of cyberfraud, in which criminals use AI to fabricate identities from real and invented details and use them to open new credit accounts. In one instance, criminals created roughly 700 synthetic accounts to defraud a San Antonio bank of up to $25 million in COVID-19 relief funds. TransUnion last year estimated that synthetic account balances reached $4.6 billion in 2022, while an earlier Socure report projected the cost of this fraud would reach $5 billion this year.
When it comes to rolling out new technology before security controls are well in place, we’ve been down this road before. When businesses rushed headlong to embrace the transformative power of cloud computing, security was a bolt-on to which they paid attention only after suffering the sorts of massive data breaches that have become all too frequent, such as those suffered by Yahoo in 2013, in which the personal data of 3 billion people was exposed; Equifax in 2017, 147 million; and Marriott in 2018, 500 million.
As the international affairs think tank Carnegie Endowment noted in 2020, “Despite various efforts to contain these risks over the past 25 years, the costs of cyber-attacks continue to increase, not decrease, and most organizations — governments and companies — struggle to effectively protect themselves.”
The good news is that financial institutions are moving to combat AI fraud with the best tool available: AI. Nearly three-quarters of respondents to a 2022 Bank of England survey said that they were developing machine-learning models to fight financial fraud. Other next-generation defenses are also in the works: Passkeys are replacing passwords, and quantum key distribution is becoming more widespread.
It’s a good start, but it’s just that: a start.
Along with more and better technological and AI advances to protect information and funds, we need to lean back into the human element. Companies, financial institutions, regulators and consumers must collaborate to produce and adopt secure, resilient and robust controls for handling this threat. This means education — between institutions and consumers, and among families and friends. It means following protective online practices to keep access information secure. It means pulling together all the tools available, both online and off, at the government, organizational and individual levels, to shore up our defenses like a shield.
The alternative — a patchwork series of solutions — will have exploitable seams. And the problem is going to roll downhill, hitting medium- and small-sized businesses and individuals the hardest as they won’t have multinational corporations’ ability to afford sophisticated defenses.
Artificial intelligence is speeding everything up. We cannot afford to let this accelerated clock tick too long without developing a global, industrywide security standard to harden us against the coming fraud storm.
If we don’t act, the money we already have lost to fraud will seem like small change.
Lawrence K. Zelvin is the head of the financial crimes unit at BMO. He wrote this column for the Chicago Tribune.
Stefanos Geroulanos: Why would anyone want a paleo diet? We’re desperate for half-truths about human origins
Can anyone offer a compelling, accurate story that explains the nature of humanity from its earliest origins? Scientist-intellectuals have tried to do so at least since the sci-fi author H.G. Wells struck gold in 1919 with “The Outline of History,” his influential attempt at telling “the whole story of man.” This effort endures as a trend. Jared Diamond, E.O. Wilson, Yuval Noah Harari, and David Graeber and David Wengrow are just some of the more recent authors to attempt grand accounts that veer from their professional training — and presume to explain human nature itself.
Offering as much entertainment as scientific truth, these books are often hits, met with breathless claims that they radically transform all we knew about humanity. Harari’s “Sapiens: A Brief History of Humankind” has sold some 25 million copies, counting Barack Obama and prominent Silicon Valley bros among his apostles. It argues that cognition and language produce beliefs that are shared across large groups to enable cooperation. The book imagines humanity as simply conceiving a picture and acting to make it true — like a developer designing and releasing an app — and it ignores individuals’ divergent problems, societies, thoughts and lives.
More recently, Graeber and Wengrow sought in “The Dawn of Everything” to take down the myths of endless progress (exemplified in Steven Pinker’s unshakeable belief in it). Their book frames the anarchic lives of early human groups, who survived by moving among different kinds of social organization, as a celebration of biodiversity and flexibility against social hierarchies.
The question isn’t whether such tales are correct. Perhaps the clearest picture they produce is that our origins are complex enough that we can pick just about any guiding idea and find enough evidence to sustain it across the “human journey.” The real question is why we have such a gluttonous appetite for these stories. What makes us needy for half-truths in this genre?
Some of it has to do with the difficulty we face in understanding our world and answering basic questions about our existence: When was humanity born? What in particular made humans human? Were the early millennia a gruesome hellhole we were fortunate to escape, or a warm-hearted, halcyon era that we should strive to re-create? For a long time, our answers to these questions have used the past to guide our present and further contemporary agendas.
European and American archaeologists significantly shaped these efforts in the decade around Darwin’s “On the Origin of Species,” published in 1859. They got famous from the tools and skulls they dug out, and from the way they compared societies distant in space or time, concluding that humanity’s ancestors resembled today’s Indigenous peoples. By comparison, they thought themselves to be the pinnacle of human achievement. Then, after World War I, much of Europe went into a racial panic that colonial populations would rise up and overwhelm civilization. So they pushed a historical narrative that defined Indigenous peoples as “primitives” — vestiges of the oldest years of humanity.
But the real obsession with prehistory began after World War II, when the collapse of Nazi Germany breathed life into three conflicting social goals that influenced public intellectuals. The first, in the wake of the Holocaust, was the desire to end racism. Organizations like UNESCO, relying on anthropologists, sociologists and geneticists, published histories explicitly aimed at emphasizing human unity and equality. Intellectuals across Europe, the U.S. and the Soviet bloc agreed: Science had to be wielded against the racism that had dominated recent decades and lived on. By 1970, prehistory was being taught around the world, in Poland, Kenya and Texas. Time-Life published lushly illustrated books telling a common human story, while the BBC and other channels beamed shows such as “The Ascent of Man” into living rooms.
The second goal was to explain violence. When Africa was recognized as the site of humanity’s origin, this came at a cost. Some scientists treated australopithecine ancestors — the evolutionary links between apes and humans — as savage creatures with an unslakable blood thirst. Robert Ardrey and other popular writers drew a straight line from that brutality to make the atom bomb almost inevitable. Humans were wired for destruction — violence supposedly hid inside everyone, just beneath a thin veneer of civilization. Neurologists and animal ethologists sought the bodily headwaters from which our natural aggression sprang forth.
Racism, supposedly banished, reared its head as the story circulated that humans’ innate propensity for violence survived in faraway peoples — far away from the West, that is. Explanations to justify existing social power structures multiplied: the “Man the Hunter” theory, for example, which proposed that early men led testosterone-filled hunts and war while the women (surprise!) did the gathering and hearth-keeping. Well-intentioned ethnographic films about “primitive warfare” made things worse. Audiences across the Global North could celebrate anew how far they had come from that life and view people of the Global South fighting for independence from colonialism as murderous, just like our human ancestors.
The third goal of the era’s fixation on prehistory was the most selfish. Every new science since the mid-century, including sociobiology and neuroscience, has claimed to answer the big questions about humanity’s meaning. No matter how controversial the research that supports them, such answers raise a field’s profile in the public sphere dramatically. And even after philosophers had discarded concepts like “human nature” as ill-defined and passé, the public had not.
Paleontologists usurped the place other intellectuals had left empty and became the go-to speakers for these ideas. The more religious conservatives fought against Darwin, the more TV stations gave airtime to scientists who offered answers for the fundamental questions of human existence. Audiences learned to link modern art to prehistory and inquire about Neanderthal DNA. The harder it was to come up with a strict definition of humanity, the more narratives about origins filled the void.
How much more do we know today about our ancient past? In terms of details, a lot more, such as the actual Neanderthal genome. But grand theories respond less to details than to other theories of the human experience that their proponents dislike.
Feminist archaeologists in the 1970s, for example, shaped their scientific findings to battle the sexist “Man the Hunter” story. By the 1990s, the notion that prehistoric life was harmonious was criticized as condescending. Diamond responded to these pressures in “The World Until Yesterday” by reveling in tales of Native brutality and discarding specific ethnographic explanations of that violence. Graeber and Wengrow, like political scientist James Scott, instead blamed early state formations (and modern European biases) for violence. Paleozoologists still wrestle with outdated conceptions of brutish Neanderthals. And so on: The deep past becomes a plaything for us to endlessly debate our own political ideas.
The steady stream of scientific discoveries reaching our screens makes us yearn ever more for a complete explanation. Yet the explanations we get will have little to do with the emergence of humanity itself. Meanwhile, we ignore the question we should be asking: Do we really share that much with cave-and-savannah-dwelling hominids from long ago? Or have we destroyed the Earth so irrevocably that we are desperate for our ancestors to tell us how to live?
Stefanos Geroulanos, the director of the Remarque Institute and a professor of history at New York University, is the author of the forthcoming “The Invention of Prehistory.” He wrote this column for the Los Angeles Times.