Instacart is charging different prices to different customers in a dangerous AI experiment, report says


By Caroline Petrow-Cohen, Los Angeles Times

The grocery delivery service Instacart is using artificial intelligence to experiment with prices and charge some shoppers more than others for the same items, a new study found.

The study from nonprofits Groundwork Collaborative and Consumer Reports followed more than 400 shoppers in four cities and found that Instacart sometimes offered as many as five different sales prices for the exact same item, at the same store and on the same day.

The average difference between the highest price and lowest price on the same item was 13%, but some participants in the study saw prices that were 23% higher than those offered to other shoppers.

The varying prices are unfair to consumers and exacerbate a grocery affordability crisis that regular Americans are already struggling to cope with, said Lindsey Owens, executive director of Groundwork Collaborative.

“In my own view, Instacart should close the lab,” Owens said. “American grocery shoppers aren’t guinea pigs, and they should be able to expect a fair price when they’re shopping.”

The study found that an individual shopper on Instacart could theoretically spend as much as $1,200 more on groceries in one year if they had to deal with the kind of price differences observed in the pricing experiments.

At a Safeway supermarket in Washington, D.C., a dozen Lucerne eggs sold for $3.99, $4.28, $4.59, $4.69, and $4.79 on Instacart, depending on the shopper, the study showed.

At a Safeway in Seattle, a box of 10 Clif Chocolate Chip Energy bars sold for $19.43, $19.99, and $21.99 on Instacart.
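The percentage gaps the study reports can be checked from these listed prices: the spread is the difference between the highest and lowest price, expressed as a share of the lowest. A minimal sketch using the two Safeway examples above:

```python
# Price spreads from the two Safeway examples cited in the study.
def spread_pct(prices):
    """Gap between the highest and lowest price, as a percent of the lowest."""
    return (max(prices) - min(prices)) / min(prices) * 100

eggs = [3.99, 4.28, 4.59, 4.69, 4.79]   # dozen Lucerne eggs, Washington, D.C.
bars = [19.43, 19.99, 21.99]            # box of Clif energy bars, Seattle

print(f"Eggs spread: {spread_pct(eggs):.1f}%")  # roughly 20%
print(f"Bars spread: {spread_pct(bars):.1f}%")  # roughly 13%
```

The egg example shows a spread of about 20 percent and the energy bars about 13 percent, consistent with the study's reported average difference of 13% and extremes above 20%.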

Instacart likely began experimenting with prices in 2022, when the platform acquired the artificial intelligence company Eversight. Instacart now markets Eversight’s pricing software to its retail partners, claiming the price experimentation has a negligible effect on consumers while increasing store revenue by up to 3%.

“These limited, short-term, and randomized tests help retail partners learn what matters most to consumers and how to keep essential items affordable,” an Instacart spokesperson said in a statement to The Los Angeles Times. “The tests are never based on personal or behavioral characteristics.”

Instacart said the price changes are not the result of dynamic pricing, like that used for airline tickets and ride-hailing, because the prices never change in real time.

But the Groundwork Collaborative study found that nearly three-quarters of grocery items bought at the same time and from the same store had varying price tags.

The artificial intelligence software helps Instacart and grocers “determine exactly how much you’re willing to pay, adding up to a lot more profits for them and a much higher annual grocery bill for you,” Owens said.

The study focused on 437 shoppers in-store and online in North Canton, Ohio; Saint Paul, Minnesota; Washington, D.C.; and Seattle.

©2025 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.

What to know about Trump’s draft proposal to curtail state AI regulations


By JESSE BEDAYN

President Donald Trump is weighing a draft executive order, obtained Thursday by The Associated Press, that would pressure states to stop regulating artificial intelligence, as some in Congress also consider whether to temporarily block states from regulating AI.


Trump and some Republicans argue that the limited regulations already enacted by states, and others that might follow, will dampen innovation and growth for the technology.

Critics from both political parties — as well as civil liberties and consumer rights groups — worry that banning state regulation would amount to a favor for big AI companies that enjoy little to no oversight.

While the draft executive order could change, here’s what to know about states’ AI regulations and what Trump is proposing.

What state-level regulations exist and why

Four states — Colorado, California, Utah and Texas — have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.

Those laws include limiting the collection of certain personal information and requiring more transparency from companies.

The laws are in response to AI that already pervades everyday life. The technology helps make consequential decisions for Americans, including who gets a job interview, an apartment lease, a home loan and even certain medical care. But research has shown that it can make mistakes in those decisions, including by prioritizing a particular gender or race.

“It’s not a matter of AI makes mistakes and humans never do,” said Calli Schroeder, director of the AI & Human Rights Program at the public interest group EPIC.

“With a human, I can say, ‘Hey, explain, how did you come to that conclusion, what factors did you consider?’” she continued. “With an AI, I can’t ask any of that, and I can’t find that out. And frankly, half the time the programmers of the AI couldn’t answer that question.”

States’ more ambitious AI regulation proposals require private companies to provide transparency and assess the possible risks of discrimination from their AI programs.

Beyond those more sweeping rules, many states have regulated parts of AI: barring the use of deepfakes in elections and to create nonconsensual porn, for example, or putting rules in place around the government’s own use of AI.

What Trump and some Republicans want to do

The draft executive order would direct federal agencies to identify burdensome state AI regulations and pressure states to not enact them, including by withholding federal funding or challenging the state laws in court.

It would also begin a process to develop a lighter-touch regulatory framework for the whole country that would override state AI laws.

Trump’s argument is that the patchwork of regulations across 50 states impedes AI companies’ growth, and allows China to catch up to the U.S. in the AI race. The president has also said state regulations are producing “Woke AI.”

The draft executive order that was leaked could change and should not be taken as final, said a senior Trump administration official who requested anonymity to describe internal White House discussions.

The official said the tentative plan is for Trump to sign the order Friday.

Separately, House Republican leadership is already discussing a proposal to temporarily block states from regulating AI, the chamber’s majority leader, Steve Scalise, told Punchbowl News this week.

It’s not yet clear what that proposal would look like, or which AI regulations it would override.

TechNet, which advocates for tech companies including Google and Amazon, has previously argued that pausing state regulations would benefit smaller AI companies still getting on their feet and allow time for lawmakers to develop a country-wide regulatory framework that “balances innovation with accountability.”

Why attempts at federal regulation have failed

Some Republicans in Congress have previously tried and failed to ban states from regulating AI.

Part of the challenge is that opposition is coming from their party’s own ranks.

Florida’s Republican governor, Ron DeSantis, said a federal law barring state regulation of AI was “Not acceptable” in a post on X this week.

DeSantis argued that the move would be a “subsidy to Big Tech” and would stop states from protecting against a list of things, including “predatory applications that target children” and “online censorship of political speech.”

A federal ban on states regulating AI is also unpopular, said Cody Venzke, senior policy counsel at the ACLU’s National Political Advocacy Department.

“The American people do not want AI to be discriminatory, to be unsafe, to be hallucinatory,” he said. “So I don’t think anyone is interested in winning the AI race if it means AI that is not trustworthy.”

Humanoid robots take center stage at Silicon Valley summit, but skepticism remains


By MATT O’BRIEN

MOUNTAIN VIEW, Calif. (AP) — Robots have long been seen as a bad bet for Silicon Valley investors — too complicated, capital-intensive and “boring, honestly,” says venture capitalist Modar Alaoui.

But the commercial boom in artificial intelligence has lit a spark under long-simmering visions to build humanoid robots that can move their mechanical bodies like humans and do things that people do.

Alaoui, founder of the Humanoids Summit, gathered more than 2,000 people this week, including top robotics engineers from Disney, Google and dozens of startups, to showcase their technology and debate what it will take to accelerate a nascent industry.

Alaoui says many researchers now believe humanoids or some other kind of physical embodiment of AI are “going to become the norm.”

“The question is really just how long it will take,” he said.

Disney’s contribution to the field, a walking robotic version of “Frozen” character Olaf, will be roaming on its own through Disneyland theme parks in Hong Kong and Paris early next year. Entertaining and highly complex robots that resemble a human — or a snowman — are already here, but the timeline for “general purpose” robots that can be productive members of a workplace or household is farther off.

Even at a conference designed to build enthusiasm for the technology, held at the Computer History Museum, a temple to Silicon Valley’s previous breakthroughs, skepticism remained high that truly humanlike robots will take root anytime soon.

“The humanoid space has a very, very big hill to climb,” said Cosima du Pasquier, founder and CEO of Haptica Robotics, which works to give robots a sense of touch. “There’s a lot of research that still needs to be solved.”

The Stanford University postdoctoral researcher came to the conference in Mountain View, California, just a week after incorporating her startup.

“The first customers are really the people here,” she said.

Researchers at the consultancy McKinsey & Company have counted about 50 companies around the world that have raised at least $100 million to develop humanoids, led by about 20 in China and 15 in North America.

China is leading in part due to government incentives for component production and robot adoption and a mandate last year “to have a humanoid ecosystem established by 2025,” said McKinsey partner Ani Kelkar. Displays by Chinese firms dominated the expo section of this week’s summit, held Thursday and Friday. The conference’s most prevalent humanoids were those made by China’s Unitree, in part because researchers in the U.S. buy the relatively cheap model to test their own software.

In the U.S., the advent of generative AI chatbots like OpenAI’s ChatGPT and Google’s Gemini has jolted the decades-old robotics industry in different ways. Excited investors have poured money into ambitious startups aiming to build hardware that will bring a physical presence to the latest AI.

But it’s not just crossover hype — the same technical advances that made AI chatbots so good at language have played a role in teaching robots how to get better at performing tasks. Paired with computer vision, robots powered by “visual-language” models are trained to learn about their surroundings.

One of the most prominent skeptics is robotics pioneer Rodney Brooks, a co-founder of Roomba vacuum maker iRobot who wrote in September that “today’s humanoid robots will not learn how to be dexterous despite the hundreds of millions, or perhaps many billions of dollars, being donated by VCs and major tech companies to pay for their training.” Brooks didn’t attend but his essay was frequently mentioned.

Also missing was anyone speaking for Tesla CEO Elon Musk’s development of a humanoid called Optimus, a project that the billionaire is designing to be “extremely capable” and sold in high volumes. Musk said three years ago that people can probably buy an Optimus “within three to five years.”

The conference’s organizer, Alaoui, founder and general partner of ALM Ventures, previously worked on driver attention systems for the automotive industry and sees parallels between humanoids and the early years of self-driving cars.

Near the entrance to the summit venue, just blocks from Google’s headquarters, is a museum exhibit showing Google’s bubble-shaped 2014 prototype of a self-driving car. Eleven years later, robotaxis operated by Google affiliate Waymo are constantly plying the streets nearby.

Some robots with human elements are already being tested in workplaces. Oregon-based Agility Robotics announced shortly before the conference that it is bringing its tote-carrying warehouse robot Digit to a Texas distribution facility run by Mercado Libre, the Latin American e-commerce giant. Much like the Olaf robot, it has inverted legs that are more birdlike than human.

Industrial robots performing single tasks are already commonplace in car assembly and other manufacturing. They work with a level of speed and precision that’s difficult for today’s humanoids — or humans themselves — to match.

The head of a robotics trade group founded in 1974 is now lobbying the U.S. government to develop a stronger national strategy to advance the development of homegrown robots, be they humanoids or otherwise.

“We have a lot of strong technology, we have the AI expertise here in the U.S.,” said Jeff Burnstein, president of the Association for Advancing Automation, after touring the expo. “So I think it remains to be seen who is the ultimate leader in this. But right now, China has certainly a lot more momentum on humanoids.”

Associated Press journalist Terry Chea contributed to this report.

Mahtomedi woman dies after being struck by vehicle on I-94


A Mahtomedi woman was struck and killed by a vehicle on Monday night as she was walking along Interstate 94 near Black River Falls, Wis., authorities said.

Kara Ann Meslow, 30, was struck around 5:50 p.m. Monday, according to the Wisconsin State Patrol.

Authorities did not say why Meslow was walking along the highway. She was pronounced dead at the scene.

The crash remains under investigation, authorities said Friday.
