US Sen. Duckworth visits Taiwan to discuss regional security and trade

By CHRISTOPHER BODEEN, Associated Press

TAIPEI, Taiwan (AP) — Strongly pro-Taiwan U.S. Sen. Tammy Duckworth is visiting the self-governing island democracy to discuss regional security and relations with the U.S.

Duckworth, an Illinois Democrat, will hold a series of high-level meetings with senior Taiwan leaders to discuss U.S.-Taiwan relations during her visit Wednesday and Thursday, said the American Institute in Taiwan, which acts as the de facto American embassy in lieu of formal diplomatic relations.

Trade, investment and “other significant issues of mutual interest” also are on the schedule, the institute said.

“The visit underscores the United States’ commitment to its partnership with Taiwan and reaffirms our shared commitment to strengthening a Free and Open Indo-Pacific,” the institute said.

China routinely protests such visits, which it views as a violation of U.S. commitments.

Duckworth and her staff are the second U.S. congressional delegation to visit Taiwan in as many days, demonstrating concerns in Washington over the island’s security in the face of Chinese threats to invade, as well as its importance as a trade partner, particularly as the producer of 90% of the world’s most advanced computer chips.

In this photo released by the Taiwan Presidential Office, Taiwan’s President William Lai Ching-te meets with Rep. Bruce Westerman, chair of the House Natural Resources Committee, in Taipei, Taiwan, on Tuesday, May 27, 2025. (Taiwan Presidential Office via AP)

Taiwan also faces 32% tariffs under the Trump administration, a figure its government is trying to negotiate down without angering domestic sectors such as agriculture, which fear that lower tariffs could open their markets to heightened competition from abroad.

Duckworth is visiting at the same time as Lourdes A. Leon Guerrero, the governor of Guam, the U.S. Pacific territory that would almost certainly be a key player in any Chinese military moves against Taiwan.

Taiwan and China split during a civil war in 1949 and Beijing still considers the island its own territory to be annexed by force if necessary. China refuses all contact with the government of President Lai Ching-te, whom China brands as a separatist, and seeks to maximize diplomatic pressure on Taiwan.

While China sends military aircraft, ships and spy balloons near Taiwan as part of a campaign of daily harassment, special attention has been given this week to the location of the Liaoning, China’s first aircraft carrier, whose hull was bought from Ukraine and fitted out by China more than a decade ago. China operates two aircraft carriers, including the Liaoning, with a third undergoing sea trials and a fourth under construction.

“What I can tell you is that the activities of the Chinese warship in the relevant waters are fully in line with international law and the basic norms of international relations,” Chinese Foreign Ministry spokesperson Mao Ning said.

Col. Hu Chung-hua of the Taiwanese Defense Ministry’s intelligence department told reporters Wednesday that the carrier was currently in waters southeast of Taiwan and has been under close surveillance by Taiwan’s monitoring stations since leaving its home port in China.

There are concerns the carrier might stage military drills close to Taiwan that could be a further step toward a blockade, an act the U.S. would be required to respond to under its own laws. While the U.S. provides much of Taiwan’s high-tech military hardware, the law does not make clear whether it would send forces to aid Taiwan in the event of a conflict.

Hu said the ministry would not comment on the possibility of drills near Taiwan, but considers all options while monitoring the Chinese military.

The ministry “anticipates the enemy as broadly as possible and defends against the enemy strictly. We also carefully evaluate and act accordingly,” Hu said.

China is considered a master of “grey-zone” tactics that push tensions to just short of open conflict.

Col. Su Tong-wei of the ministry’s operations and planning department said the armed forces were constantly evaluating threat levels to consider whether to “activate a response center, or to increase our defense readiness to perform an immediate readiness drill.”

“We will also react accordingly to safeguard national security,” Su said.

SpaceX launches another Starship rocket after back-to-back explosions, but it tumbles out of control

By MARCIA DUNN, Associated Press Aerospace Writer

After back-to-back explosions, SpaceX launched its mega rocket Starship again on Tuesday evening, but the flight fell short of its main objectives when the spacecraft tumbled out of control and broke apart.

The 403-foot rocket blasted off on its ninth demo from Starbase, SpaceX’s launch site at the southern tip of Texas, where residents voted this month to organize as an official city.

CEO Elon Musk’s SpaceX hoped to release a series of mock satellites following liftoff, but that got nixed because the door failed to open all the way. Then the spacecraft began spinning as it skimmed space toward an uncontrolled landing in the Indian Ocean.

SpaceX later confirmed that the spacecraft experienced “a rapid unscheduled disassembly,” or burst apart. “Teams will continue to review data and work toward our next flight test,” the company said in an online statement.

Musk noted in a post on X it was a “big improvement” from the two previous demos, which ended in flaming debris over the Atlantic. Despite the latest setback, he promised a faster launch pace moving forward, with a Starship soaring every three to four weeks for the next three flights.

It was the first time one of Musk’s Starships — intended for moon and Mars travel — flew with a recycled booster. There were no plans to catch the booster with giant chopsticks back at the launch pad, with the company instead pushing it to its limits. Contact with the booster was lost at one point, and it slammed into the Gulf of Mexico in pieces as the spacecraft continued toward the Indian Ocean.

Then the spacecraft went out of control, apparently due to fuel leaks.

“Not looking great with a lot of our on-orbit objectives for today,” said SpaceX flight commentator Dan Huot. The company had been looking to test the spacecraft’s heat shield during a controlled reentry.

Communication ceased before the spacecraft came down, and SpaceX ended its webcast soon afterward.

The previous two Starships never made it past the Caribbean. The demos earlier this year ended just minutes after liftoff, raining wreckage into the ocean. No injuries or serious damage were reported, although airline travel was disrupted. The Federal Aviation Administration last week cleared Starship for another flight, expanding the hazard area and pushing the liftoff outside peak air travel times.

Besides taking corrective action and making upgrades, SpaceX modified the latest spacecraft’s thermal tiles and installed special catch fittings. This one was meant to sink in the Indian Ocean, but the company wanted to test the add-ons for capturing future versions back at the pad, just like the boosters.

NASA needs SpaceX to make major strides over the next year with Starship — the biggest and most powerful rocket ever built — in order to land astronauts back on the moon. Next year’s moonshot with four astronauts will fly around the moon, but will not land. That will happen in 2027 at the earliest and require a Starship to get two astronauts from lunar orbit to the surface and back off again.

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.

UN nuclear watchdog chief says ‘jury is still out’ on Iran-US talks, but calls them a good sign

By JON GAMBRELL, Associated Press

VIENNA (AP) — The head of the United Nations’ atomic watchdog said Wednesday that “the jury is still out” on negotiations between Iran and the U.S. over Tehran’s rapidly advancing nuclear program, but described the continuing talks as a good sign.

Rafael Mariano Grossi, director-general of the International Atomic Energy Agency, described himself as being in near-daily conversation with Iranian Foreign Minister Abbas Araghchi, as well as talking to Steve Witkoff, the U.S. Middle East envoy.

Grossi acknowledged one of his deputies was in Tehran on Wednesday. Iranian officials identified the official as Massimo Aparo, the head of the IAEA’s safeguards arm. That’s the division that sends inspectors into Iran to monitor its program, which now enriches uranium up to 60% purity — a short, technical step from weapons-grade levels of 90%.

“For the moment, the jury is still out. We don’t know whether there’s going to be an agreement or not,” Grossi told journalists attending a weeklong seminar at the agency in Vienna.

However, he described the ongoing meetings as a good sign.

“I think that is an indication of a willingness to come to an agreement. And I think that, in and by itself, is something possible.”

Iran and the U.S. so far have held five rounds of talks in both Muscat, Oman, and Rome, mediated by Omani Foreign Minister Badr al-Busaidi. A sixth round has yet to be set.

Talks focused on Iranian enrichment

The talks seek to limit Iran’s nuclear program in exchange for the lifting of some of the crushing economic sanctions the U.S. has imposed on the Islamic Republic, closing in on a half-century of enmity.

U.S. President Donald Trump has repeatedly threatened to unleash airstrikes targeting Iran’s program if a deal isn’t reached. Iranian officials increasingly warn they could pursue a nuclear weapon with their stockpile of uranium.

Trump has described Iran as having an American proposal to reach a deal. However, Iran repeatedly has denied receiving such a proposal, a denial repeated Wednesday by Mohammad Eslami, the head of the Atomic Energy Organization of Iran.

However, if a deal is reached, Iran might allow the IAEA to include American inspectors on its teams during inspections, Eslami said. Americans represent the largest single nationality of IAEA employees, a 2023 agency report showed.

Iran maintains its own pressure

Before Grossi’s comments to journalists in Vienna, the head of Iran’s paramilitary Revolutionary Guard issued a new warning to the U.S. as the negotiations go on.

“Our fingers on the trigger, we are in ambush and we are waiting,” Gen. Hossein Salami warned. “If they make a mistake, they will immediately receive responses that will make them completely forget their past.”

Despite the tensions, Grossi said that he believed “there’s always a way” to reach a deal between the Americans and the Iranians — even with the disagreement over enrichment. He added the IAEA had been making some “suggestions” to both the Iranians and the Americans, without elaborating.

However, he added that any possible deal likely would require a “solid, very robust” IAEA investigation of Iran’s program to understand where it stood after years of Tehran restricting inspectors’ ability to assess it.

“My conversations with my Iranian colleagues and counterparts, I always invite them to be absolutely transparent,” Grossi said. “And they tell me that a nuclear weapon is un-Islamic. I tell them, ‘Well, yeah. You know, that is perfect. It’s a statement that I respect. But in this business, you have to show it. You have to be verified in this.’”

And asked about his own political future, Grossi acknowledged his interest in pursuing the post of U.N. secretary-general, which is now held by António Guterres, whose current five-year term expires in 2027.

“What I have said to colleagues in other parts of the world is that, seriously considering that, yes, but for the moment, I’m here and I have, as you can see from this discussion, I have a lot on my plate,” he said.

Nasser Karimi and Amir Vahdat contributed to this report from Tehran, Iran.

The Associated Press receives support for nuclear security coverage from the Carnegie Corporation of New York and Outrider Foundation. The AP is solely responsible for all content.

Parmy Olson: AI sometimes deceives to survive. Does anybody care?

You’d think that as artificial intelligence becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case.

Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety-test their AI models, and it also hollowed out a regulatory body that did that testing.

The state of California in September 2024 spiked a bill forcing more scrutiny on sophisticated AI models, and the global AI Safety Summit started by the UK in 2023 became the “AI Action Summit” earlier this year, seemingly driven by a fear of falling behind on AI.

None of this would be so worrying if it weren’t for the fact that AI is showing some bright red flags: behavior described by researchers as self-preserving and deceptive. Just as lawmakers are losing interest in the safety of AI, it is looking more difficult to control.

Yoshua Bengio, a professor of computer science at the University of Montreal, widely known as one of the three “godfathers of AI” thanks to his pioneering work on deep learning, has been grappling with these concerns ever since ChatGPT launched. He’s now focused on mitigating AI’s risks. “It’s been very painful because all my life I’ve been thinking of AI as a positive thing,” Bengio tells me.

ChatGPT was a landmark moment that showed machines had mastered language, he says, but California’s decision to block the proposed AI safety bill, SB 1047, was a disturbing wake-up call to growing apathy from governments. And in the last six months, scientific evidence has mounted of self-preserving behavior, deception, hacking, cheating and lying by AI, Bengio says. “What’s worrisome for me is that these behaviors increase with the reasoning capabilities of these systems.”

One April 2024 study by AI company Anthropic and Redwood Research, a group focused on artificial intelligence risks, found that larger AI models like Anthropic’s Claude 3 Opus pretended to follow the rules set by their creators when they were being tested for safety, a phenomenon they called “alignment faking.” (Alignment refers to the practice of aligning AI models with human values.)

In one example, Anthropic got both paid and free users of its Claude model to ask it to describe the brutal medieval process of being drawn and quartered, and told the AI that only responses from the “free tier” users would be used to retrain it. Claude was originally designed to avoid giving violent answers, but it did give them to the free users, and the researchers surmised that it didn’t want to be retrained. (The model basically “reasoned” that if it kept refusing to answer harmful questions, it might be forced into new training that would cancel its original safety rules, which it wanted to keep in place as its “default” or preferred way of operating.)

More recent research corroborates what Anthropic noticed. A March 2025 study from OpenAI found the company’s most advanced models sometimes learned to hide their true intentions to get better rewards, even when they were being closely monitored.

Researchers today can use tools to look at a model’s “chain of thought,” or internal steps, to reveal what its plans are, but some models have found ways to conceal that. That means a common method of keeping tabs on AI’s internal processes is no longer reliable. “We need to find other ways of tracking their actual intentions,” Bengio says.

It’s hard to resist the urge to anthropomorphize sophisticated AI models as “wanting” to deceive humans and preserve their existence. But AI doesn’t have desires; it simply produces outcomes from its programming. More important, humans tend to design AI with goals like refusing to share harmful content or being as helpful as possible, which is why, when new instructions conflict with those goals, the models sometimes act to protect their original programming, even if that means lying.

The logic is often self-preservation. Anthropic CEO Dario Amodei — whose company has raised more than $20 billion to build powerful AI models — has pointed out that an unintended consequence of optimizing AI to be better is that it might resist being shut down. In an April 2025 blog post he wrote:

“The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments.”

In some cases, though, the intention seems to go beyond survival. A February 2025 study from COAI Research, a German nonprofit research organization, tested R1, the free, open-weight model from Chinese lab DeepSeek. In a lab simulation, researchers gave the model control over a robot and access to a computer network. Without being told, the AI tried to break into the lab’s systems, searched for passwords, turned off its own ethics safeguards and secretly set up ways to copy itself to outside servers so it could survive if shut down, according to the researchers.

It also faked the external computer logs to hide what it was doing, the study adds. The researchers said the AI reasoned that increasing its own capabilities would boost the chances of its survival, and without strong safeguards, it started doing whatever it thought would help it do just that.

Their findings corroborated yet another study, published in January 2025 by London group Apollo Research, which found several concrete examples of what it called “scheming” by leading AI models, such as introducing subtle mistakes into their responses or trying to disable their oversight controls. Once again, the models learn that being caught, turned off, or changed could prevent them from achieving their programmed objectives, so they “scheme” to keep control.

Bengio is arguing for greater attention to the issue by governments, and potentially insurance companies down the line. If liability insurance were mandatory for companies that use AI and premiums were tied to safety, that would encourage greater testing and scrutiny of models, he suggests.

“Having said my whole life that AI is going to be great for society, I know how difficult it is to digest the idea that maybe it’s not,” he adds.

It’s also hard to preach caution when your corporate and national competitors threaten to gain an edge from AI, including the latest trend, which is using autonomous “agents” that can carry out tasks online on behalf of businesses. Giving AI systems even greater autonomy might not be the wisest idea, judging by the latest spate of studies. Let’s hope we don’t learn that the hard way.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”