St. Paul man killed Maplewood woman, shot self, then live-streamed an apology, charges say

A St. Paul man who authorities say shot and killed a Maplewood woman and then shot himself in the head later went live on social media to apologize, according to charges filed Friday.

The Ramsey County Attorney’s Office charged Joseph Raymond Wiggins, 57, with one count of second-degree murder in connection with the shooting death Wednesday of Amy Doverspike, 55, at her home in Maplewood.

The criminal complaint gave the following details:

At around 10:45 a.m. Wednesday, police responded to reports of a shooting at an apartment complex at 2565 Ivy Ave. E. in Maplewood. Staff at the building told authorities they heard two gunshots and then, after a pause, a third shot.

When officers arrived, they smelled gunpowder, saw smoke in the air and found Doverspike lying in a pool of blood outside an apartment door, next to two spent shell casings. She had no pulse and had gunshot wounds to her upper right thigh, her left shoulder and her back.

Police attempted lifesaving efforts at the scene. She was taken to Regions Hospital, where she was later pronounced dead.

According to Maplewood police, authorities were told that Wiggins was inside the apartment and armed. Officers evacuated nearby apartments and issued a shelter-in-place order. Numerous law enforcement agencies responded including the Ramsey County Sheriff’s Office SWAT team, which made entry into the apartment and found Wiggins wounded from an apparent self-inflicted gunshot wound.

According to the criminal complaint, staff at the apartment building said Wiggins and Doverspike had a child who was in prison for murder and that Wiggins wanted to get back together with her. They said Doverspike did not want to be with him and that Wiggins had threatened to end his own life the day before the shooting.

Wiggins’ current wife had called police to say she believed he had shot someone else and himself. She later provided authorities with texts he had sent her asking God to forgive him and saying he was scared and dying.

“I killed her. I’m dying. I can’t get up,” he texted her, according to the criminal complaint. He also texted her two pictures of his face with the gunshot wound.

The criminal complaint said he also made a live Facebook video after he shot himself: “It appeared Wiggins’ jaw was broken and half his teeth were missing,” the complaint said. “(His) injury made it difficult to understand what he said in the video besides that he was sorry and his injury hurt.”

Wiggins was given first aid and taken to Regions Hospital, where he remained as of Friday in critical condition. He was expected to survive with an anticipated lengthy hospital stay.

Navy investigation finds Osprey safety issues were allowed to grow for years

By KONSTANTIN TOROPIN

WASHINGTON (AP) — After a spate of deadly accidents that have claimed the lives of 20 service members in the past four years, a Navy report acknowledges that the military failed to address a growing series of issues with the V-22 Osprey aircraft since it took flight almost 20 years ago.

“The cumulative risk posture of the V-22 platform has been growing since initial fielding,” according to the report by Naval Air Systems Command released Friday. It added that the office in charge of the aircraft “has not promptly implemented … fixes to mitigate existing risks.”

“As a result, risks continue to accumulate,” the report said.

The Associated Press reported last year that the most serious types of accidents for the Osprey, the only aircraft that flies like a plane and converts to land like a helicopter, spiked between 2019 and 2023 and that, unlike with other aircraft, the problems did not level off as the years passed.

“As the first and only military tiltrotor aircraft, it remains the most aero-mechanically complex aircraft in service and continues to face unresolved legacy material, safety, and technical challenges,” the report said.

Commissioned in 2023 by NAVAIR, the Navy command responsible for the purchase and maintenance of aircraft, the investigation reveals that the Osprey not only has the “second highest number of catastrophic risks across all Naval Aviation platforms” but that those risks have gone unresolved for an average of more than 10 years.

By contrast, the average across other aircraft in the Navy’s inventory is six years.

The Navy’s response

Vice Adm. John Dougherty, commander of NAVAIR, said the service is “committed to improving the V-22’s performance and safeguarding the warfighters who rely on this platform.” He offered no details on any actions taken in response to the years of failure to address the Osprey’s risks.

The command did not respond to questions about what, if any, accountability measures were taken in response to the findings.

The lack of details on accountability for missteps also came up when the Navy recently released investigations into four accidents during a U.S.-led campaign against Yemen’s Houthi rebels. A senior Navy official, who spoke to reporters on the condition of anonymity to offer more candid details, said that he didn’t believe the service had an obligation to make accountability actions public.

Risks were allowed to build up, the report says

The investigation lays much of the responsibility for the problems on the Osprey’s Joint Program Office. Part of the mission for this office, which operates within NAVAIR, is making sure the aircraft can be safely flown by the Marine Corps, the Navy and the Air Force, all of which use different versions of the aircraft for different missions.

The report found that this office “did not effectively manage or address identified risks in a timely manner, allowing them to accumulate,” and it faced “challenges” in implementing safety fixes across all three services.

Two major issues involve the Osprey’s complicated transmission. The aircraft has a host of gearboxes and clutches that, like a car’s transmission, are crucial to powering each propeller behind the Osprey’s unique tilting capability. The system also helps connect the two sides of the aircraft to keep it flying in the event of engine failure.

One is a failure mode in which the transmission system essentially shreds itself from the inside because of a power imbalance in the engines. That failure brought down a Marine Corps Osprey in California in 2022, killing five Marines.

The other issue is a manufacturing defect in the gears within the transmission that renders them more brittle and prone to failure. That was behind the crash of an Air Force Osprey off the coast of Japan in November 2023 that killed eight service members.

The report reveals that this manufacturing issue dated back to 2006, but the Osprey’s Joint Program Office did not formally assess or accept the risk until March 2024.

Besides these mechanical issues, the report found that the program office failed to ensure uniform maintenance standards for the aircraft, while determining that 81% of all the accidents that the Ospreys have had on the ground were due to human error.

Recommendations for the issues revealed

The report offers a series of recommendations for each of the issues it uncovered. They range from rudimentary suggestions like consolidating best maintenance practices across all the services to more systemic fixes like developing a new, midlife upgrade program for the Osprey.

While fixes for both mechanical issues are also in the report, the military is not expected to fully resolve them until 2034 and 2033, respectively.

Naval Air Systems Command did not reply when asked if it had a message for troops who will fly in the aircraft in the meantime.

Watchdog also releases Osprey report

The Government Accountability Office, an independent watchdog serving Congress, reached similar conclusions and made similar recommendations in a separate report released Friday.

The GAO blamed most Osprey accidents on part failures and human error while service members flew or maintained the aircraft. It determined that the military hasn’t fully “identified, analyzed, or responded” to all of the Osprey’s safety risks.

The GAO said the Pentagon should improve its process for addressing those risks, while adding more oversight to ensure they are resolved. Another recommendation is for the Navy, Air Force and Marines to routinely share information on hazards and accidents to help prevent mishaps.

Associated Press writer Ben Finley contributed to this report.

Instacart is charging different prices to different customers in a dangerous AI experiment, report says

By Caroline Petrow-Cohen, Los Angeles Times

The grocery delivery service Instacart is using artificial intelligence to experiment with prices and charge some shoppers more than others for the same items, a new study found.

The study from nonprofits Groundwork Collaborative and Consumer Reports followed more than 400 shoppers in four cities and found that Instacart sometimes offered as many as five different sales prices for the exact same item, at the same store and on the same day.

The average difference between the highest price and lowest price on the same item was 13%, but some participants in the study saw prices that were 23% higher than those offered to other shoppers.

The varying prices are unfair to consumers and exacerbate a grocery affordability crisis that regular Americans are already struggling to cope with, said Lindsey Owens, executive director of Groundwork Collaborative.

“In my own view, Instacart should close the lab,” Owens said. “American grocery shoppers aren’t guinea pigs, and they should be able to expect a fair price when they’re shopping.”

The study found that an individual shopper on Instacart could theoretically spend as much as $1,200 more on groceries in one year if they had to deal with the kind of price differences observed in the pricing experiments.

At a Safeway supermarket in Washington, D.C., a dozen Lucerne eggs sold for $3.99, $4.28, $4.59, $4.69, and $4.79 on Instacart, depending on the shopper, the study showed.

At a Safeway in Seattle, a box of 10 Clif Chocolate Chip Energy bars sold for $19.43, $19.99, and $21.99 on Instacart.

Instacart likely began experimenting with prices in 2022, when the platform acquired the artificial intelligence company Eversight. Instacart now advertises Eversight’s pricing software to its retail partners, claiming that the price experimentation is negligible to consumers but could increase store revenue by up to 3%.

“These limited, short-term, and randomized tests help retail partners learn what matters most to consumers and how to keep essential items affordable,” an Instacart spokesperson said in a statement to The Los Angeles Times. “The tests are never based on personal or behavioral characteristics.”

Instacart said the price changes are not the result of dynamic pricing, like that used for airline tickets and ride-hailing, because the prices never change in real time.

But the Groundwork Collaborative study found that nearly three-quarters of grocery items bought at the same time and from the same store had varying price tags.

The artificial intelligence software helps Instacart and grocers “determine exactly how much you’re willing to pay, adding up to a lot more profits for them and a much higher annual grocery bill for you,” Owens said.

The study focused on 437 shoppers in-store and online in North Canton, Ohio; Saint Paul, Minnesota; Washington, D.C., and Seattle.

©2025 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.

What to know about Trump’s draft proposal to curtail state AI regulations

By JESSE BEDAYN

President Donald Trump is considering pressuring states to stop regulating artificial intelligence in a draft executive order obtained Thursday by The Associated Press, as some in Congress also consider whether to temporarily block states from regulating AI.

Trump and some Republicans argue that the limited regulations already enacted by states, and others that might follow, will dampen innovation and growth for the technology.

Critics from both political parties — as well as civil liberties and consumer rights groups — worry that banning state regulation would amount to a favor for big AI companies who enjoy little to no oversight.

While the draft executive order could change, here’s what to know about states’ AI regulations and what Trump is proposing.

What state-level regulations exist and why

Four states — Colorado, California, Utah and Texas — have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.

Those laws include limiting the collection of certain personal information and requiring more transparency from companies.

The laws are in response to AI that already pervades everyday life. The technology helps make consequential decisions for Americans, including who gets a job interview, an apartment lease, a home loan and even certain medical care. But research has shown that it can make mistakes in those decisions, including by prioritizing a particular gender or race.

“It’s not a matter of AI makes mistakes and humans never do,” said Calli Schroeder, director of the AI & Human Rights Program at the public interest group EPIC.

“With a human, I can say, ‘Hey, explain, how did you come to that conclusion, what factors did you consider?’” she continued. “With an AI, I can’t ask any of that, and I can’t find that out. And frankly, half the time the programmers of the AI couldn’t answer that question.”

States’ more ambitious AI regulation proposals require private companies to provide transparency and assess the possible risks of discrimination from their AI programs.

Beyond those more sweeping rules, many states have regulated parts of AI: barring the use of deepfakes in elections or to create nonconsensual pornography, for example, or putting rules in place around the government’s own use of AI.

What Trump and some Republicans want to do

The draft executive order would direct federal agencies to identify burdensome state AI regulations and pressure states to not enact them, including by withholding federal funding or challenging the state laws in court.

It would also begin a process to develop a lighter-touch regulatory framework for the whole country that would override state AI laws.

Trump’s argument is that the patchwork of regulations across 50 states impedes AI companies’ growth, and allows China to catch up to the U.S. in the AI race. The president has also said state regulations are producing “Woke AI.”

The draft executive order that was leaked could change and should not be taken as final, said a senior Trump administration official who requested anonymity to describe internal White House discussions.

The official said the tentative plan is for Trump to sign the order Friday.

Separately, House Republican leadership is already discussing a proposal to temporarily block states from regulating AI, the chamber’s majority leader, Steve Scalise, told Punchbowl News this week.

It’s yet unclear what that proposal would look like, or which AI regulations it would override.

TechNet, which advocates for tech companies including Google and Amazon, has previously argued that pausing state regulations would benefit smaller AI companies still getting on their feet and allow time for lawmakers to develop a countrywide regulatory framework that “balances innovation with accountability.”

Why attempts at federal regulation have failed

Some Republicans in Congress have previously tried and failed to ban states from regulating AI.

Part of the challenge is that opposition is coming from their party’s own ranks.

Florida’s Republican governor, Ron DeSantis, said a federal law barring state regulation of AI was “Not acceptable” in a post on X this week.

DeSantis argued that the move would be a “subsidy to Big Tech” and would stop states from protecting against a list of things, including “predatory applications that target children” and “online censorship of political speech.”

A federal ban on states regulating AI is also unpopular, said Cody Venzke, senior policy counsel at the ACLU’s National Political Advocacy Department.

“The American people do not want AI to be discriminatory, to be unsafe, to be hallucinatory,” he said. “So I don’t think anyone is interested in winning the AI race if it means AI that is not trustworthy.”