Meta Sued for Allegedly Failing to Shield Children From Predators

Facebook and Instagram fail to protect underage users from exposure to child sexual abuse material and let adults solicit pornographic imagery from them, New Mexico’s attorney general alleges in a lawsuit that follows an undercover online investigation.

“Our investigation into Meta’s social media platforms demonstrates that they are not safe spaces for children but rather prime locations for predators to trade child pornography and solicit minors for sex,” Attorney General Raul Torrez said in a statement Wednesday.

The civil lawsuit filed late Tuesday against Meta Platforms Inc. in state court also names its CEO, Mark Zuckerberg, as a defendant.

In addition, the suit claims Meta “harms children and teenagers through the addictive design of its platform, degrading users’ mental health, their sense of self-worth and their physical safety,” Torrez’s office said in a statement.

Those claims echo others in a lawsuit filed in late October by the attorneys general of 33 states, including California and New York, against Meta that alleges Instagram and Facebook include features deliberately designed to hook children, contributing to the youth mental health crisis and leading to depression, anxiety and eating disorders. New Mexico was not a party to that lawsuit.

Investigators in New Mexico created decoy accounts of children 14 years and younger that Torrez’s office said were served sexually explicit images even when the child expressed no interest in them. State prosecutors claim that Meta let dozens of adults find, contact and encourage children to provide sexually explicit and pornographic images.

The accounts also received recommendations to join unmoderated Facebook groups devoted to facilitating commercial sex, investigators said, adding that Meta also let its users find, share and sell “an enormous volume of child pornography.”

“Mr. Zuckerberg and other Meta executives are aware of the serious harm their products can pose to young users, and yet they have failed to make sufficient changes to their platforms that would prevent the sexual exploitation of children,” Torrez said, accusing Meta’s executives of prioritizing “engagement and ad revenue over the safety of the most vulnerable members of our society.”

Meta, based in Menlo Park, California, did not directly respond to the New Mexico lawsuit’s allegations, but said it works hard to protect young users with a serious commitment of resources.

“We use sophisticated technology, hire child safety experts, report content to the National Center for Missing and Exploited Children, and share information and tools with other companies and law enforcement, including state attorneys general, to help root out predators,” the company said. “In one month alone, we disabled more than half a million accounts for violating our child safety policies.”

Company spokesman Andy Stone pointed to a company report detailing the millions of tips Facebook and Instagram sent to the National Center in the third quarter of 2023 — including 48,000 involving inappropriate interactions that could include an adult soliciting child sexual abuse material directly from a minor or attempting to meet with one in person.

Critics, including former employees, have long complained that Meta’s largely automated content moderation systems are ill-equipped to identify and adequately eliminate abusive behavior on its platforms.

Spotify to Lay Off 1,500 Employees

Spotify says it is planning to lay off 17% of its global workforce, around 1,500 employees, after earlier rounds this year that cut 600 people in January and an additional 200 in June.

The music streaming giant is continuing its effort to cut costs and work toward becoming profitable, said Spotify CEO Daniel Ek in a prepared statement.

“By most metrics, we were more productive but less efficient,” he said. “We need to be both.”

The layoffs follow a rare quarterly net profit of about $70.3 million reported in October. The company has never posted a full-year net profit.

“I realize that for many, a reduction of this size will feel surprisingly large given the recent positive earnings report and our performance,” Ek said. “We debated making smaller reductions throughout 2024 and 2025. Yet, considering the gap between our financial goal … and our current operational costs, I decided that a substantial action to right size our costs was the best option to accomplish our objectives.”

With the new layoffs, the company now expects a fourth-quarter loss of $100 million to $117 million, after previously anticipating a $40 million profit.

A majority of the charges will go toward severance for laid-off employees, who will get about five months’ pay, vacation pay and health care coverage for the severance period.

Spotify did not clearly state when the layoffs would become financially beneficial but said that they would “generate meaningful operating efficiencies going forward.”

Spotify joins many other tech companies in trying to cut costs after the industry’s growth slowed from its surge during the COVID-19 pandemic.

Tech giants including Meta, Microsoft, Amazon and Google parent company Alphabet all have plans to cut 10,000 or more people this year.

Spotify began informing affected employees on Monday.

Some information in this report came from Reuters, The Associated Press and Agence France-Presse.

Breaches by Iran-Affiliated Hackers Span US States, Federal Agencies Say

A small western Pennsylvania water authority was just one of many organizations breached in the United States by Iran-affiliated hackers who targeted a specific industrial control device because it is Israeli-made, U.S. and Israeli authorities say.

“The victims span multiple U.S. states,” the FBI, the Environmental Protection Agency, the Cybersecurity and Infrastructure Security Agency, known as CISA, as well as Israel’s National Cyber Directorate said in an advisory emailed to The Associated Press late Friday.

They did not say how many organizations were hacked or otherwise describe them.

Matthew Mottes, the chairman of the Municipal Water Authority of Aliquippa, which discovered it had been hacked on Nov. 25, said Thursday that federal officials had told him the same group also breached four other utilities and an aquarium.

Cybersecurity experts say that while there is no evidence of Iranian involvement in the Oct. 7 attack on Israel by Hamas that triggered the war in Gaza, they expected state-backed Iranian hackers and pro-Palestinian hacktivists to step up cyberattacks on Israel and its allies in its aftermath. And that has happened.

The multiagency advisory explained what CISA had not when it confirmed the Pennsylvania hack Wednesday: that industries beyond water and water-treatment facilities also use the same equipment, Vision Series programmable logic controllers made by Unitronics, and are potentially vulnerable.

Those industries include “energy, food and beverage manufacturing and healthcare,” the advisory says. The devices regulate processes including pressure, temperature and fluid flow.

The Aliquippa hack prompted workers to temporarily halt pumping in a remote station that regulates water pressure for two nearby towns, leading crews to switch to manual operation. The hackers left a digital calling card on the compromised device saying all Israeli-made equipment is “a legal target.”

The multiagency advisory said it was not known if the hackers had tried to penetrate deeper into breached networks.

The advisory says the hackers, who call themselves “Cyber Av3ngers,” are affiliated with Iran’s Islamic Revolutionary Guard Corps, which the U.S. designated as a foreign terrorist organization in 2019.

The group has targeted the Unitronics devices since at least Nov. 22, it said.

An online search Saturday with the Shodan service identified more than 200 such internet-connected devices in the U.S. and more than 1,700 globally.

The advisory notes that Unitronics devices ship with a default password, a practice experts discourage as it makes them more vulnerable to hacking. Best practices call for devices to require a unique password to be created out of the box. It says the hackers likely accessed affected devices by “exploiting cybersecurity weaknesses, including poor password security and exposure to the internet.”

In response to the Aliquippa hack, three members of Pennsylvania’s congressional delegation asked the U.S. Justice Department in a letter to investigate. Americans must know their drinking water and other basic infrastructure is safe from “nation-state adversaries and terrorist organizations,” U.S. Sens. John Fetterman and Bob Casey and U.S. Rep. Chris Deluzio said.

Cyber Av3ngers claimed in an Oct. 30 social media post to have hacked 10 water treatment stations in Israel, though it is not clear if they shut down any equipment.

Unitronics has not responded to queries from the AP about the hacks.

The attack came less than a month after the EPA rescinded a rule that would have obliged U.S. public water systems to include cybersecurity testing in their regular federally mandated audits. The rollback was prompted by a federal appeals court decision in a case brought by Missouri, Arkansas and Iowa and joined by a water utility trade group.

The Biden administration has been trying to shore up cybersecurity of critical infrastructure — more than 80% of which is privately owned — and has imposed regulations on sectors including electric utilities, gas pipelines and nuclear facilities. But many experts complain that too many vital industries are permitted to self-regulate.

Rules Would Bar EV Tax Credits if Batteries, Minerals Linked to China

The U.S. proposed new guidelines Friday spelling out which electric vehicles will be eligible for tax credits, ruling out those that contain batteries or minerals sourced from China and other nations that have fallen out of favor with the U.S.

The restrictions dictate which clean energy vehicles will qualify for a subsidy of up to $7,500 under President Joe Biden’s Inflation Reduction Act, a federal law promoting sustainable, domestic energy production.

Only about 20 out of the more than 100 electric vehicles on the U.S. market qualify for a tax credit as it is. That number may be further reduced when this regulation goes into effect.

Starting in 2024, if a clean energy battery passes through an assembly line owned by any “foreign entity of concern,” the car it goes into will be disqualified from earning its owner any tax break from the U.S. government.

The new rules target firms incorporated or headquartered in China, Russia, North Korea and Iran, among others, as well as companies where 25% or more of the equity interest or board seats are controlled by those countries.

From 2025 onward, electric vehicles made with critical minerals, such as lithium, nickel and cobalt, mined or processed by any “foreign entity of concern” will also be ineligible for subsidies.

The rules will be open to feedback from the public and automotive industry leaders for several weeks and are subject to change depending on industry recommendations.

Some information for this report came from Agence France-Presse. 

VOA Exclusive: US, S Korea, Japan to Sign Pact to Counter Disinformation  

The United States plans to sign a memorandum of understanding to cooperate with South Korea and Japan in the fight against false propaganda and disinformation.

It will be the first such agreement that Washington signs with its Asian allies, and it comes as U.S. officials and lawmakers accuse the People’s Republic of China of conducting “deceptive online campaigns” targeting the United States and other countries. Chinese officials have rejected the accusation.

Liz Allen, the U.S. undersecretary of state for public diplomacy and public affairs, is traveling to Asia this week. Allen will be sealing the agreement with South Korea and Japan on countering disinformation, according to U.S. and diplomatic sources.

U.S. President Joe Biden, South Korean President Yoon Suk Yeol and Japanese Prime Minister Fumio Kishida have agreed to find ways to coordinate efforts to counter disinformation, after the three leaders held talks during their first trilateral summit at Camp David in August.

“President Yoon mentioned the threat from false propaganda and disinformation in his address to the joint session of U.S. Congress in April. In this regard, we are now discussing the possible follow-up measures with the U.S.,” an official from the South Korean Embassy told Voice of America on Thursday.

In a statement on Thursday, House Foreign Affairs Committee Chairman Michael McCaul condemned the “increasingly deceptive online campaigns targeting the U.S. and other countries” by the Chinese Communist Party.

“The CCP has made clear it will use every tactic to spread its malign intent,” the Republican congressman said.

The South Korean government has identified 38 suspected fake Korean-language news websites that it believes are operated by Chinese companies. For example, in November, South Korea’s National Intelligence Service said two Chinese public relations companies, Haimai and Haixun, were allegedly creating such websites, according to Seoul-based Yonhap News Agency.

The State Department said Allen, while in Tokyo, will hold bilateral discussions with Japanese Ministry of Foreign Affairs officials that include a focus on countering malign foreign influence.

In a report issued in September, the State Department’s Global Engagement Center accused the Chinese government of using a combination of tactics in a bid to create a world in which Beijing, either explicitly or implicitly, controls the flow of critical information. The U.S. has warned that China is pouring billions of dollars into efforts to reshape the global information environment and, eventually, bend the will of multiple nations to Beijing’s advantage.

The Chinese Ministry of Foreign Affairs has pushed back, saying the report by the U.S. State Department’s Global Engagement Center “misrepresents facts and truth.” A spokesperson for the Chinese Foreign Ministry called the GEC the command center of “perception warfare.”

James Rubin, special envoy for the State Department’s Global Engagement Center, has said that Washington is working with allies to detect and counter misinformation and disinformation around the world.

In May, the U.S. signed a memorandum of understanding with North Macedonia, and in September, another with Bulgaria, both aimed at enhancing cooperation in countering foreign information manipulation.

Vietnam’s Rare Earth Sector on the Rise

Vietnam, with the world’s second-largest reserves of the rare earths used in such modern devices as electric vehicle batteries and smart phone screens, is intensifying mining of the critical minerals. The industry, though, faces high processing costs, environmental concerns, and the takedown of industry leaders for illegal mining and mineral sales.

Vietnam’s rare earth resources are second only to those of China, which has held a tight monopoly since the 1980s. With China’s relations with the West becoming more volatile, many countries are looking for other sources of the elements.

“China produces about 60% of the world’s rare earths but what they process is over 90%,” Louis O’Connor, CEO of Strategic Metals Invest, an Irish investment firm, told VOA.

“It was not a good idea to allow one country to dominate critical raw materials that are critical to all nations’ economic prosperity and increasingly military capability,” he said.

O’Connor added that while China holds the majority of the world’s raw materials, its dominance is even greater in the technical expertise needed for the complex and costly process of rare earth refining. China has 39 metallurgy universities, and approximately 200 metallurgists graduate in the country each week, he said.

“The ability to go from having the potential to end product — that’s the most challenging, complicated, and expensive part,” O’Connor said. “For Vietnam, even if they have the deposits, what they don’t have is the human capital, or the engineering expertise.”

Vietnam increased rare earth mining tenfold, with its output hitting 4,300 tons last year compared to 400 tons in 2021, according to the U.S. Geological Survey. Vietnam said in July it plans to process 2 million tons of rare earth ores by 2030 and produce 60,000 tons of rare earth oxides annually starting that year. This year, China’s mining quota is set at 240,000 tons to meet demand from the electric vehicle industry, according to Chinese government data.

The United States and other countries are interested in Vietnam increasing its production of rare earths.

“The U.S. wants Vietnam to become a more important supplier and perhaps replace China, if possible, because of the risk that the U.S. may face in relying on rare earth supplies from China,” Le Hong Hiep, a senior fellow at the ISEAS-Yusof Ishak Institute in Singapore, told VOA.

“Not only the U.S., but also other partners like Korea, Japan, and Australia also are working with Vietnam to develop the rare earth industry,” he said.

South Korean President Yoon Suk Yeol signed a memorandum of understanding in Vietnam in June to establish a joint supply chain center for rare earth minerals.

“We reached an agreement that there is more potential to develop rare earths together, as they are abundant in Vietnam,” Yoon said in a June 23 statement with Vietnamese President Vo Van Thuong.

The United States signed such a memorandum on cooperation in the rare earths sector during President Joe Biden’s visit to Hanoi on September 9.

“We see Vietnam as a potential critical nexus in global supply chains when it comes to critical minerals and rare earth elements,” Marc Knapper, U.S. ambassador to Vietnam, said on September 13 during a digital press briefing. “We certainly want to work together to ensure that Vietnam is able to take advantage of its rich resources in a way that’s also sustainable.”

Scandal

However, a handful of Vietnam’s key rare earth enterprises have become entangled in scandal. On October 20, police arrested six individuals for mining and tax violations.

Police arrested Doan Van Huan, chairman of the Hanoi-based Thai Duong Group that operates a mine in Yen Bai province, and its chief accountant, Nguyen Van Chinh, for violating regulations on the exploration and exploitation of natural resources and accounting violations, the Public Security Ministry said. The two were accused of making $25.5 million from the illegal sale of rare earth ore and iron ore. Police raided 21 excavation and trading sites in Yen Bai province and three other locations. Authorities seized an estimated 13,700 tons of rare earth and more than 1,400 tons of iron ores, according to local publication VnExpress.

Although government statements did not say what made Thai Duong’s rare earth sales illegal, a source told Reuters that raw ore from the Yen Bai mine had been exported to China to avoid high domestic refining costs, in violation of Vietnamese rules.

The chairman, Luu Anh Tuan, and accountant, Nguyen Thi Hien, of the country’s primary rare earth refining company, Vietnam Rare Earth JSC, were also arrested for violating accounting regulations in trading rare earth with Thai Duong Group. Dang Tran Chi, director of Hop Thanh Phat, and his accountant Pham Thi Ha were arrested on the same charge.

Looking at corruption in Vietnam’s rare earth industry will be “top of the list” for future investors, O’Connor said.

“Corruption levels would have to be looked at,” he said. “If you’re buying a metal that’s going to need to perform in a jet engine, for example or a rocket … they have to be sure of the purity levels. The chain of custody of these, it’s more important really than gold.”

Vietnam committed to industry

Hanoi is committed to developing the rare earths industry even though economic gains are limited by environmental and production costs, Hiep told VOA.

“Vietnam is now interested in promoting this industry mainly because of the strategic significance,” Hiep told VOA. “If you can grow this industry and become a reliable supplier of rare earth products for the U.S. and its allies, Vietnam’s strategic position will be enhanced greatly.”

“Whether that will be successful, we have to wait and see,” he added.

There are also environmental concerns for the growing industry, particularly as a crackdown on Vietnam’s environmental organizations and civil society leaves little room for public speech.

“The biggest challenge is going to be how do you handle the waste process from the mining,” Courtney Weatherby, deputy director of the Southeast Asia Program at Washington’s Stimson Center, told VOA.

“Ensuring that development happens in a sustainable way takes a lot of different actors,” she said.

But Duy Hoang, executive director of the unsanctioned political party Viet Tan, said the room for outside actors to express concern over environmental and labor practices is narrowing.

“What we’re seeing is sort of a shrinking space for civil society to speak out and a number of the leading environmental activists are now in jail. We don’t have their voices which are very needed and I think there may be self-censorship going on by other activists,” he said. “There has to be accountability.”

White House Hopes to Lead Global Charge in ‘Promise, Peril’ of Emerging Tech Like AI

American leadership is essential in establishing norms and laws to “determine how we both glean the promise and manage the peril” of emerging technologies like artificial intelligence and digital economic and social platforms used to connect billions of people around the world, a White House adviser told VOA.

The Biden administration has rolled out a number of initiatives on the topic — most recently, an executive order that aims to set new AI safety and security standards. That order relies on cooperation from private developers and other countries, “because the attackers are in one set of countries, the infrastructure is in another and the victims are global,” said Anne Neuberger, deputy national security adviser for cyber and emerging technology at the National Security Council.

Neuberger sat down with VOA White House Correspondent Anita Powell to explain these complex, compelling technologies and how she thinks they have exposed the worst but also the best in humanity.

The following has been edited for length and clarity.

VOA: Thank you so much for sitting down with VOA today. Can you walk us through the concrete outcomes of the recent meeting between President Joe Biden and Chinese President Xi Jinping in the areas you cover — cybersecurity, AI and the digital economy?

Anne Neuberger, White House deputy national security adviser: Of course, strategic technologies are very important to both of our countries’ growth and national security — and we’re global players on a global stage. The most important part of the discussion was two leaders coming together to say: While we are in competition, we’re committed to working together on areas where we can collaborate – areas like climate change, like discussions of what are the rules for artificial intelligence.

VOA: Would you assess that the meeting made any progress, especially on AI regulation?

Neuberger: Certainly very good discussions related to an agreement for the countries to sit down and establish a working group on AI [about] appropriate guardrails and guidelines in this area.

VOA: I’m going to stick with AI and the administration’s recent moves, like the AI Bill of Rights and also the attempt to set some norms at the recent London summit on AI. Why does the administration think U.S. leadership matters so much here?

Neuberger: For two reasons. First, the United States is a committed democracy and AI is a major technology that brings both promise and peril. It is up to us to determine how we both glean the promise and manage the peril. President Biden has made that clear in his game-changing executive order that, as a country, we must manage the perils in order to glean the promise.

VOA: Speaking of the perils of AI, what is the administration doing to prevent the malicious use of generative AI in both conflicts and contests? I’m talking about conflicts like Israel and Ukraine, but also contests like the upcoming elections in Congo, in Taiwan and here in the United States.

Neuberger: We’ve seen new AI models that generate very realistic videos, very realistic images. In terms of generative AI related to elections, I want to lift up one of the voluntary commitments that the president negotiated, which was around watermarking: having a visible and potentially invisible mark on an AI-generated image or video that notes that this is AI-generated, to alert a viewer. An invisible mark could be used so that, even if there are attempts to remove this mark, the platforms themselves can still be able to portray that message and help educate individuals. This is still an area of evolving technology. It’s getting better and better. But companies made commitments to start marking content that they generate. And I know a number of social media platforms are also making commitments to ensure that they display messages to help consumers who see such content know that it is generated by artificial intelligence.

VOA: Moving on to cybersecurity and malign actors like North Korea and Russia, what is the administration doing to curb their work in this area?

Neuberger: We see North Korea really using cyberattacks as a way to get money because they’re such a heavily sanctioned regime. So North Korea moved from targeting banks to targeting … cryptocurrency infrastructure around the world. And the White House has had a focused effort to bring together all elements we have to fight that with Treasury Department designations.

There’ll be further designations coming up for the cryptocurrency mixers that launder funds stolen from those cryptocurrency infrastructures. We also have been working with the industry to press them to improve the cybersecurity of their systems as well as law enforcement. U.S. law enforcement has been cooperating with partners around the world to take down that server infrastructure and to arrest the individuals who are responsible for some of this activity.

VOA: Tell us a little bit about the counter-ransomware initiative you’re working on.

Neuberger: Absolutely. Essentially, criminal groups, many of which are based in Russia with infrastructure operating from around the world, are locking systems … in order to request that the system owners pay ransom. In the United States alone in the last two years, $2.3 billion was paid in ransom. It’s a fundamentally transnational fight. … What we’ve done is assemble 48 countries, Interpol [and] the European Union to take this on together, because we know that the attackers are in one set of countries, the infrastructure is in another, and the victims are global. As the White House built this initiative, we ensure that the leadership is diverse.

So, for example, the leaders of the effort to build capacity around the world are Nigeria and Germany — intentionally, a country from Africa and a country from Europe, because their needs are different. And we wanted to ensure that as we’re helping countries build the capacity to fight this, we’re sensitive to the different needs of a country like Nigeria, like Rwanda, like South Africa, like Indonesia. Similarly, there’s an effort focused on exercising information, sharing information together.

You asked about the key deliverables from this most recent meeting. I’ll note three big ones. First, we launched a website and a system where countries can collaborate when they’re fighting a ransomware attack, where they can ask for help [or] learn from others who fought a similar attack. Second, we made the first ever joint policy statement — a big deal — 48 countries committing that countries themselves will not pay ransoms, because we know this is a financially driven problem. And third, the United States committed that we would be sharing bad wallets [that] criminals are using to move money around the world so other countries can help stop that money as it moves as well. So that’s an example of three of the many commitments that came out of the recent meeting.

VOA: Let’s talk about the Global South, which has pioneered development of really interesting digital economic technologies like Kenya’s M-PESA, which was rolled out in 2007. Now the U.S. has Venmo, which is modeled on that. How is the U.S. learning from the developing world in the development of these projects and also the perils of these products?

Neuberger: M-PESA is a fantastic example of the promise of digital tech. Essentially, Kenya took the fact that they had a telecom infrastructure, and built their banking infrastructure riding on that, so they leapt ahead to enable people across the country to do transactions online. When you look at Ukraine in the context of Russia’s invasion of Ukraine, Ukraine quickly moved their government online, really building on lessons learned from Estonia, to enable Ukrainians — many who are in Poland and Hungary — to continue to engage with their government in a digital way.

The U.S. Agency for International Development is tremendously proud of that Ukrainian project and is using it as a model as we look to other countries around the world. So we’re learning a lot from the creativity and innovation; what we want to bring to that is American development, skill and aid, and also plugging in American tech companies who can accelerate the rollout of these projects in countries around the world, because we still believe in the promise of digital. But you mentioned the peril, and that’s where cybersecurity comes in.

VOA: This lines us up perfectly for my final question, about the promise and the peril. In the digital world, people can hide behind anonymity and say and do awful things using tools that were meant to improve the world. How do you keep your faith in humanity?

Neuberger: It’s a tremendously important question. It’s one that’s personally important to me. My great-grandparents lost their lives in Nazi death camps. And those members of my family who survived — some survived through the horrors of the camp, some managed to hide out under false identities. And I often think that the promise of digital has also made our identities very evident. Sometimes when I’m just browsing Amazon online, and it recommends a set of books, I think to myself: I wonder how I’d hide if what happened to my grandparents came for me. So as a result, I think that even as we engage with these technologies, we have to ensure that vulnerable populations are protected.

So, the president’s working with AI companies to say companies have an obligation to protect vulnerable populations online, to ensure that we’re using AI to detect where there’s bullying online, where there’s hate speech that goes against common practices that needs to be addressed; where there are AI-generated images related to children or women or other vulnerable populations, that we use AI to find them and remove them; and certainly use law enforcement and the power of law enforcement partnerships around the world to deter that as well. Freedom of speech is a part of free societies. Freedom from harm needs to be a fight we take on together.

VOA: Thank you so much for speaking to our audience.

Neuberger: Thank you.

US Imposes Sanctions on Cryptocurrency Mixer Sinbad Over Alleged North Korea Links

The United States on Wednesday imposed sanctions on a virtual currency mixer the Treasury Department said has processed millions of dollars worth of cryptocurrency from major heists carried out by North Korea-linked hackers.

In a statement, the Treasury Department said Sinbad processed millions of dollars’ worth of virtual currency from heists carried out by the North Korea-linked Lazarus Group, including the Axie Infinity and Horizon Bridge heists of hundreds of millions of dollars.

Lazarus, which has been sanctioned by the U.S., has been accused of carrying out some of the largest virtual currency heists to date. In March 2022, for example, the group allegedly stole about $620 million in virtual currency from a blockchain project linked to the online game Axie Infinity.

“Mixing services that enable criminal actors, such as the Lazarus Group, to launder stolen assets will face serious consequences,” Deputy Treasury Secretary Wally Adeyemo said in the statement on Wednesday.

“The Treasury Department and its U.S. government partners stand ready to deploy all tools at their disposal to prevent virtual currency mixers, like Sinbad, from facilitating illicit activities.”

A virtual currency mixer is a software tool that pools and scrambles cryptocurrencies from thousands of addresses.

Sinbad is believed by some experts in the industry to be a successor to the Blender mixer, which the U.S. hit with sanctions last year over accusations it was being used by North Korea.

The Treasury said Sinbad is also used by cybercriminals to obscure transactions linked to activities such as sanctions evasion, drug trafficking and the purchase of child sexual abuse materials, among other malign activities. 

Wednesday’s action freezes any U.S. assets of Sinbad and generally bars Americans from dealing with it. Those that engage in certain transactions with the mixer also risk being hit with sanctions. 

Is AI About to Steal Your Job?

Almost all U.S. jobs, from truck driver to childcare provider to software developer, include skills that can be done, or at least supplemented, by generative artificial intelligence (GenAI), according to a recent report.

GenAI is artificial intelligence that can generate high-quality content based on the input data used to train it.

“AI is likely to touch every part of every job to some degree,” says Cory Stahle, an economist with Indeed.com, which released the report.

The report finds that almost one in five jobs (19.7%) — like IT operations, mathematics and information design — faces the highest risk of being affected by AI because at least 80% of the job skills those positions require can be done reasonably well by GenAI.

But that doesn’t mean that those jobs will eventually be lost to robots.

“It’s important to recognize that, in general, these technologies don’t affect entire occupations. It actually is very rare that a robot will show up, sit in somebody’s seat to do everything that someone does at their job,” says Michael Chui of the McKinsey Global Institute (MGI), who researches the impact of technology and innovation on business, the economy and society.

Indeed.com researchers analyzed more than 55 million job postings and found that GenAI can perform 50% to almost 80% of the skills required in 45.7% of those job listings. In 34.6% of jobs listed, GenAI can handle less than 50% of the skills.

Jobs that require manual skills or a personal touch, such as nursing and veterinary care, are the least likely to be hard hit by AI, the report says.

In the past, technological advances have mostly affected manual labor. However, GenAI is expected to have the most effect on so-called knowledge workers, generally defined as people who create knowledge or think for a living.

But, for now, AI does not appear poised to steal anyone’s job.

“There are very few jobs that AI can do completely. Even in jobs where AI can do many of the skills, there are still aspects of those jobs that AI cannot do,” Stahle says.

Rather than replace workers, researchers expect GenAI to enhance the work people already do, making them more efficient.

“This is something that, in many ways, we believe is going to unlock human potential and productivity for many workers across many different sectors of the economy,” Stahle says.

“There are a number of things that can happen,” Chui adds. “One is, we simply do more of something we were already doing, and so imagine if you’re a university professor or a teacher, and the grading can be done by machine rather than you. You can take those hours and, instead of grading, you can actually start tutoring your students, spending more time with your students, improving their performance, helping them learn.”

American workers need to begin using the new technology if they hope to remain competitive, according to Chui.

“Workers who are best able to use these technologies will be the most competitive workers in the workforce,” he says. “It was true before, but it’s more true than ever, that we’re all going to have to be lifetime learners.”

A survey developed by Chui finds that almost 80% of workers have experimented with AI tools.

“One of the great powers of these generative AI tools, so far, is they’ve been designed in such a way to make it easy for really anybody to use these types of tools,” Stahle says. “I really believe that people should be looking to embrace these tools and find ways to incorporate them into the work that they’re already interested in doing.”

Ultimately, could one of the unexpected benefits of AI be more efficient employees who work less?

“In general, Americans work a lot,” Chui says. “Maybe we don’t have to work so long. Maybe we have a four-day work week … and so you could give that time back to the worker.”

US Envoy Focuses on Cyberscams During Cambodia Visit 

Cindy Dyer, the U.S. ambassador-at-large for monitoring and combating trafficking, is planning to push Cambodia’s new government to ramp up its efforts to crack down on cyberscam operations that trap many trafficking victims in slavelike conditions.

Dyer’s recently completed visit to Phnom Penh “will serve as an opportunity for information sharing and coordination on anti-trafficking efforts,” the State Department said last week in a release.

Dyer met with a range of officials “with the objective of building a relationship with the new government for future coordination and advocating for progress in the most critical areas, including increased investigations and prosecutions of cyberscam operations,” said the November 15 release.

Cambodia’s role as a host for cybercriminals has been in the international spotlight. The U.N. High Commissioner for Human Rights (UNHCHR) released a report this summer estimating that the industry has victimized 100,000 people in Cambodia.

Lured by promise of jobs

Operators of these scamming networks recruit unwitting workers from across Asia, often with the promise of well-paying tech jobs, and then force them to attempt to scam victims online while living in slavelike conditions, according to the report.

Indonesia, Taiwan and China are among those that have urged countries like Cambodia and Laos to crack down on the industry, while warning their own citizens of the dangers of traveling there, according to the UNHCHR report.

The U.S. State Department’s annual report on global human trafficking, released in June, placed Cambodia in Tier 3, meaning the government has made insufficient efforts to address human trafficking and does not meet the minimum standards.

During her two-day visit to Cambodia that began November 15, Dyer met with officials from the ministries of justice, labor and social affairs, as well as representatives of the National Police and the National Committee for Counter Trafficking (NCCT) within the Ministry of Interior, according to an email from the U.S. Embassy in Phnom Penh. Dyer also held discussions with civil society groups working on combating human trafficking.

The discussions focused “on Cambodia’s efforts to protect trafficking victims, including providing protection assistance services for victims of trafficking and vulnerable migrants, capacity building for service providers and government officials to improve victim identification and referral, and addressing emerging trends in forced criminality,” the State Department release said.

More training urged

Am Sam Ath, operations director at the Cambodian rights group Licadho, told VOA Khmer that Dyer’s visit highlighted the need for Cambodia to tackle human trafficking and online scams.

“We see that the United States … ranks Cambodia third in the blacklist of human trafficking. It also has a lot of impact on our country, and if Cambodia does not make an effort further in the prevention of human trafficking or online scams, the ranking cannot be improved,” he said by telephone from the group’s Phnom Penh office.

He called on the Cambodian government to strengthen the capacity of officials and authorities to crack down on online crime.

“This crime problem is technologically modern, so the authorities involved in it have to get more training to keep up with the situation, as well as the timing of the crime,” Am Sam Ath added.

National Police spokesperson Chhay Kim Khoeun and Justice Ministry spokesperson Chin Malin declined to comment on Dyer’s visit, referring questions to Chou Bun Eng, permanent deputy chairman of the National Committee for Counter Trafficking. VOA Khmer called Chou Bun Eng, but she did not respond to a request for comment.

U.S. Embassy spokesperson Katherine Diop told VOA Khmer that Dyer’s visit to Cambodia was part of a U.S. effort across the world to encourage governments to take responsibility for preventing human trafficking and protecting victims.

“The United States stands with the Cambodian people to identify, support and seek justice for human trafficking victims,” she wrote in an email.

The UNHCHR report released in late August said the online scams were occurring in five countries in Southeast Asia: Cambodia, Thailand, Laos, Myanmar and the Philippines.

“People who have been trafficked into online forced criminality face threats to their right to life, liberty and security of the person,” said the U.N. report. “They are subject to torture and cruel, inhuman and degrading treatment or punishment, arbitrary detention, sexual violence, forced labor and other forms of labor exploitation as well as a range of other human rights violations and abuses.”

Cambodia first acknowledged the issue last year when Interior Minister Sar Kheng said in August that officials were being deployed across the country to check hotels, casinos and other establishments for potential trafficking victims.

The government has since announced sporadic operations to free victims and arrest traffickers. However, experts recently told VOA Khmer that these efforts have not noticeably curbed the illegal operations or caught ringleaders of the trafficking networks.

Altman Back as OpenAI CEO Days After Being Fired

The ousted leader of ChatGPT-maker OpenAI is returning to the company that fired him late last week, culminating a days-long power struggle that shocked the tech industry and brought attention to the conflicts around how to safely build artificial intelligence.

San Francisco-based OpenAI said in a statement late Tuesday, “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”

The board, which replaces the one that fired Altman on Friday, will be led by former Salesforce co-CEO Bret Taylor, who also chaired Twitter’s board before its takeover by Elon Musk last year. The other members will be former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo.

OpenAI’s previous board of directors, which included D’Angelo, had refused to give specific reasons for why it fired Altman, leading to a weekend of internal conflict at the company and growing outside pressure from the startup’s investors.

The chaos also accentuated the differences between Altman — who’s become the face of generative AI’s rapid commercialization since ChatGPT’s arrival a year ago — and members of the company’s board who have expressed deep reservations about the safety risks posed by AI as it becomes more advanced.

Microsoft, which has invested billions of dollars in OpenAI and has rights to its current technology, quickly moved to hire Altman on Monday, as well as another co-founder and former president, Greg Brockman, who had quit in protest after Altman’s removal.

That emboldened a threatened exodus of nearly all of the startup’s 770 employees, who signed a letter calling for the board’s resignation and Altman’s return.

One of the four board members who participated in Altman’s ouster, OpenAI co-founder and chief scientist Ilya Sutskever, later expressed regret and joined the call for the board’s resignation.

Microsoft in recent days had pledged to welcome all employees who wanted to follow Altman and Brockman to a new AI research unit at the software giant. Microsoft CEO Satya Nadella also made clear in a series of interviews Monday that he was still open to the possibility of Altman returning to OpenAI, so long as the startup’s governance problems are solved.

“We are encouraged by the changes to the OpenAI board,” Nadella posted on X late Tuesday. “We believe this is a first essential step on a path to more stable, well-informed, and effective governance.”

In his own post, Altman said that “with the new board and (with) Satya’s support, I’m looking forward to returning to OpenAI, and building on our strong partnership with (Microsoft).”

Co-founded by Altman as a nonprofit with a mission to safely build so-called artificial general intelligence that outperforms humans and benefits humanity, OpenAI later became a for-profit business but one still run by its nonprofit board of directors. It’s not clear yet if the board’s structure will change with its newly appointed members.

“We are collaborating to figure out the details,” OpenAI posted on X. “Thank you so much for your patience through this.”

Nadella said Brockman, who was OpenAI’s board chairman until Altman’s firing, will also have a key role to play in ensuring the group “continues to thrive and build on its mission.”

Hours earlier, Brockman returned to social media as if it were business as usual, touting a feature called ChatGPT Voice that was rolling out to users.

“Give it a try — totally changes the ChatGPT experience,” Brockman wrote, flagging a post from OpenAI’s main X account that featured a demonstration of the technology and playfully winking at recent turmoil.

“It’s been a long night for the team and we’re hungry. How many 16-inch pizzas should I order for 778 people?” a person asks in the demonstration, using the number of people who work at OpenAI. ChatGPT’s synthetic voice responds by recommending around 195 pizzas, enough for everyone to get three slices.

As for Emmett Shear, OpenAI’s short-lived interim CEO and the second to hold that role in the days since Altman’s ouster, he posted on X that he was “deeply pleased by this result, after ~72 very intense hours of work.”

“Coming into OpenAI, I wasn’t sure what the right path would be,” wrote Shear, the former head of Twitch. “This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.”

Largest Crypto Exchange Fined $4 Billion; CEO Pleads Guilty to Allowing Money Laundering

The U.S. government dealt a massive blow to Binance, the world’s largest cryptocurrency exchange, which agreed to pay a roughly $4 billion settlement Tuesday as its founder and CEO Changpeng Zhao pleaded guilty to a felony related to his failure to prevent money laundering on the platform. 

Zhao stepped down as the company’s chief executive, and Binance admitted to violations of the Bank Secrecy Act and apparent violations of sanctions programs, including its failure to implement reporting programs for suspicious transactions. 

“Using new technology to break the law does not make you a disruptor, it makes you a criminal,” said U.S. Attorney General Merrick Garland, who called the settlement one of the largest corporate penalties in the nation’s history. 

As part of the settlement agreement, the U.S. Treasury said Binance will be subject to five years of monitoring and “significant compliance undertakings, including to ensure Binance’s complete exit from the United States.” Binance is a Cayman Islands limited liability company. 

The cryptocurrency industry has been marred by scandals and market meltdowns. 

Rival of FTX founder

Zhao was perhaps best known as the chief rival to Sam Bankman-Fried, the 31-year-old founder of FTX, which was the second-largest crypto exchange before it collapsed last November. Bankman-Fried was convicted earlier this month of fraud for stealing at least $10 billion from customers and investors. 

Zhao, meanwhile, pleaded guilty in a federal court in Seattle on Tuesday to one count of failure to maintain an effective anti-money-laundering program. 

Magistrate Judge Brian A. Tsuchida questioned Zhao to make sure he understood the plea agreement, saying at one point: “You knew you didn’t have controls in place.” 

“Yes, your honor,” he replied. 

Binance wrote in a statement that it made “misguided decisions” as it quickly grew to become the world’s biggest crypto exchange, and said the settlement acknowledges its “responsibility for historical, criminal compliance violations.” 

U.S. Treasury Secretary Janet Yellen said Binance processed transactions by illicit actors, “supporting activities from child sexual abuse to illegal narcotics, to terrorism, across more than 100,000 transactions.” 

Binance did not file a single suspicious activity report on those transactions, Yellen said, and the company allowed more than 1.5 million virtual currency trades that violated U.S. sanctions, including ones involving Hamas’ al-Qassam Brigades, al-Qaida and other criminals. 

The judge set Zhao’s sentencing for February 23, though it is likely to be delayed. He faces a sentencing guideline range of up to 18 months. 

One of his attorneys, Mark Bartlett, noted that Zhao had been aware of the investigation since December 2020, and surrendered willingly even though the United Arab Emirates — where Zhao lives — has no extradition treaty with the U.S. 

“He decided to come here and face the consequences,” Bartlett said. “He’s sitting here. He pled guilty.” 

Zhao, who is married and has young children in the UAE, promised he would return to the U.S. for sentencing if allowed to stay there in the meantime. 

“I want to take responsibility and close this chapter in my life,” Zhao said. “I want to come back. Otherwise I wouldn’t be here today.” 

Company sent investor assets to third party

Zhao previously faced allegations of diverting customer funds, concealing the fact that the company was commingling billions of dollars in investor assets and sending them to a third party that Zhao also owned. 

Over the summer, regulators sued Binance, accusing it of operating as an unregistered securities exchange and violating a slew of U.S. securities laws. The practices alleged in that case were similar to those uncovered after the collapse of FTX. 

Zhao and Bankman-Fried were originally friendly competitors in the industry, with Binance investing in FTX when Bankman-Fried launched the exchange in 2019. However, the relationship between the two deteriorated, culminating in Zhao announcing he was selling all of his cryptocurrency investments in FTX in early November 2022. FTX filed for bankruptcy a week later. 

At his trial and in later public statements, Bankman-Fried tried to cast blame on Binance and Zhao for allegedly orchestrating a run on the bank at FTX. 

A jury found Bankman-Fried guilty of wire fraud and several other charges. He is expected to be sentenced in March and could face decades in prison. 

Solar Panels Over Canals in Gila River Indian Community Will Help Save Water

In a move that may soon be replicated elsewhere, the Gila River Indian Community recently signed an agreement with the U.S. Army Corps of Engineers to put solar panels over a stretch of irrigation canal on its land south of Phoenix.

It will be the first project of its kind in the United States to break ground, according to the tribe’s press release.

“This was a historic moment here for the community but also for the region and across Indian Country,” said Gila River Indian Community Governor Stephen Roe Lewis in a video published on X, formerly known as Twitter.

The first phase, set to be completed in 2025, will cover 1,000 feet of canal and generate one megawatt of electricity that the tribe will use to irrigate crops, including feed for livestock, cotton and grains.

The idea is simple: install solar panels over canals in sunny, water-scarce regions where they reduce evaporation and make renewable electricity.

“We’re proud to be leaders in water conservation, and this project is going to do just that,” Lewis said, noting the significance of a Native, sovereign, tribal nation leading on the technology.

A study by the University of California, Merced estimated that 63 billion gallons of water could be saved annually by covering California’s 4,000 miles of canals. More than 100 climate advocacy groups are calling for just that.

Researchers believe the installed solar canopies would also generate a significant amount of electricity.

UC Merced wants to hone its initial estimate and should soon have the chance. Not far away in California’s Central Valley, the Turlock Irrigation District and partner Solar AquaGrid plan to construct 1.6 miles (2.6 kilometers) of solar canopies over its canals beginning this spring, and researchers will study the benefits.

Neither the Gila River Indian Community nor the Turlock Irrigation District is the first to implement this technology globally. Indian engineering firm Sun Edison inaugurated the first solar-covered canal in 2012 on one of the largest irrigation projects in the world, in Gujarat state. Despite ambitious plans to cover 11,800 miles (19,000 kilometers) of canals, only a handful of small projects ever went up, and the engineering firm filed for bankruptcy.

High capital costs, clunky design and maintenance challenges were obstacles to widespread adoption, experts say.

But severe, prolonged drought in the western U.S. has made water a central political issue, heightening interest in technologies like cloud seeding and solar-covered canals as water managers grasp at any solution that might buoy reserves, even ones that haven’t been widely tested, or tested at all.

Still, the project is an important indicator of the tribe’s commitment to water conservation, said Heather Tanana, a visiting law professor at the University of California, Irvine and citizen of the Navajo Nation. Tribes hold the most senior water rights on the Colorado River, though many are still settling those rights in court.

“There’s so much fear about the tribes asserting their rights and if they do so, it’ll pull from someone else’s rights,” she said. The tribe leaving water in Lake Mead and putting federal dollars toward projects like solar canopies is “a great example to show that fear is unwarranted.”

The federal government has made record funding available for water-saving projects, including a $233 million pact with the Gila River Indian Community to conserve about two feet of water in Lake Mead, the massive and severely depleted reservoir on the Colorado River. Phase one of the solar canal project will cost $6.7 million and the Bureau of Reclamation provided $517,000 for the design.

Microsoft Hires Sam Altman as OpenAI’s new CEO Vows to Investigate Firing

Microsoft snapped up Sam Altman and another architect of OpenAI for a new venture after their sudden departures shocked the artificial intelligence world, leaving the newly installed CEO of the ChatGPT maker to paper over tensions by vowing to investigate Altman’s firing.

The developments Monday come after a weekend of drama and speculation about how the power dynamics would shake out at OpenAI, whose chatbot kicked off the generative AI era by producing human-like text, images, video and music.

It ended with former Twitch leader Emmett Shear taking over as OpenAI’s interim chief executive and Microsoft announcing it was hiring Altman and OpenAI co-founder and former President Greg Brockman to lead Microsoft’s new advanced AI research team.

Despite the rift between the key players behind ChatGPT and the company they helped build, both Shear and Microsoft Chairman and CEO Satya Nadella said they are committed to their partnership.

Microsoft invested billions of dollars in the startup and helped provide the computing power to run its AI systems. Nadella wrote on X, formerly known as Twitter, that he was “extremely excited” to bring on the former executives of OpenAI and looked “forward to getting to know” Shear and the rest of the management team.

In a reply on X, Altman said “the mission continues,” while Brockman posted, “We are going to build something new & it will be incredible.”

OpenAI said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead the company.

In an X post Monday, Shear said he would hire an independent investigator to look into what led up to Altman’s ouster and write a report within 30 days.

“It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” wrote Shear, who co-founded Twitch, an Amazon-owned livestreaming service popular with video gamers.

He said he also plans in the next month to “reform the management and leadership team in light of recent departures into an effective force” and speak with employees, investors and customers.

After that, Shear said he would “drive changes in the organization,” including “significant governance changes if necessary.” He noted that the reason behind the board removing Altman was not a “specific disagreement on safety,” a likely reference to the debates that have swirled around OpenAI’s mission to safely build AI that is “generally smarter than humans.”

OpenAI last week declined to answer questions on what Altman’s alleged lack of candor was about. Its statement said his behavior was hindering the board’s ability to exercise its responsibilities. But a key driver of Friday’s shakeup, OpenAI co-founder, chief scientist and board member Ilya Sutskever, posted his regrets about the situation on X on Monday: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”

OpenAI didn’t reply to emails Monday seeking comment. A Microsoft representative said the company would not be commenting beyond its CEO’s statement.

After Altman was pushed out Friday, he stirred speculation in a series of tweets that he might be coming back into the fold. On Sunday he posted a photo of himself with an OpenAI guest pass, saying it was the “first and last time i ever wear one of these.”

Hours earlier, he tweeted, “i love the openai team so much,” which drew heart replies from Brockman, who quit after Altman was fired, and Mira Murati, OpenAI’s chief technology officer who was initially named as interim CEO.

It’s not clear what transpired between the announcement of Murati’s interim role Friday and Shear’s hiring, though she was among several employees on Monday who tweeted, “OpenAI is nothing without its people.” Altman replied to many with heart emojis.

Shear said he stepped down as Twitch CEO because of the birth of his now-9-month-old son but “took this job because I believe that OpenAI is one of the most important companies currently in existence.”

His beliefs on the future of AI came up on a podcast in June. Shear said he’s generally an optimist about technology but has serious concerns about the path of artificial intelligence toward building something “a lot smarter than us” that sets itself on a goal that endangers humans.

“If there is a world where we survive … where we build an AI that’s smarter than humans and survive it, it’s going to be because we built smaller AIs than that, and we actually had as many smart people as we can working on that, and taking the problem seriously,” Shear said in June.

It’s an issue Altman has faced consistently since he helped catapult ChatGPT to global fame. In the past year, he has become Silicon Valley’s most sought-after voice on the promise and potential dangers of artificial intelligence.

He went on a world tour to meet with government officials earlier this year, drawing big crowds at public events as he discussed both the risks of AI and attempts to regulate the emerging technology.

Altman posted Friday on X that “i loved my time at openai” and later called his ouster a “weird experience.”

“If Microsoft lost Altman he could have gone to Amazon, Google, Apple, or a host of other tech companies craving to get the face of AI globally in their doors,” Daniel Ives, an analyst with Wedbush Securities, said in a research note.

Microsoft is now in an even stronger position on AI, Ives said. Its shares rose nearly 2% before the opening bell and were nearing an all-time high Monday.

Space Tracking Helps Australia Monitor, Manage Feral Buffalo Herds

Indigenous rangers in northern Australia have started managing herds of feral animals from space. In the largest project of its kind in Australia, the so-called Space Cows project involves tagging and then tracking a thousand wild cattle and buffalo via satellite.

Water buffalo were imported into Australia’s Northern Territory in the 19th century as working animals and meat for remote settlements. When those communities were abandoned, the animals were released into the wild.

Their numbers have grown, and feral buffaloes can cause huge environmental damage. In wetlands, they move along pathways called swim channels, which have caused salt water to flow into freshwater plains. This has led to the degradation and loss of large areas of paperbark forest and natural waterholes, as well as spreading weeds.  

Under the program, feral cattle and buffalo are being rounded up, often by helicopter, tied to trees and fitted with solar-powered tags that can be tracked by satellite.

Scientists say the real-time data will be critical to controlling and predicting the movement of the feral herds, which are notorious for trashing the landscape.

Most feral buffalo are found on Aboriginal land, and researchers are working closely with Indigenous rangers. They carry out sporadic buffalo culls, and there are hopes that First Nations communities can benefit economically from well-managed feral herds.

The technology will allow Indigenous rangers to predict where cattle and buffalo are heading and to cull them or fence off important cultural and environmental sites. The data will help rangers stop the animals from trampling sacred ceremonial areas and destroying culturally significant waterways, and scientists say the satellite information will let them predict when herds might head to certain waterways in warm weather, allowing rangers to intervene.

In recent years, thousands of wild buffalo have been exported from Australia to Southeast Asia.

Andrew Hoskins is a biologist at the CSIRO, the Commonwealth Scientific and Industrial Research Organization, Australia’s national science agency.

He told the Australian Broadcasting Corp’s AM Program this is the first time feral animals have been monitored from space.

“This really, you know, large scale tracking project, (is) probably the largest from a wildlife or a buffalo tracking perspective that has ever been done.  The novel part, I suppose, is then that links through to a space-based satellite system,” said Hoskins.

Australia has had an often-disastrous experience with bringing in animals from overseas since European colonization began in the late 1700s. It is not just buffaloes that cause immense environmental damage.

Cane toads — brought to the country in a failed attempt to control pests on sugar cane plantations in the 1930s — are prolific breeders and voracious feeders that prey on native insects, frogs, reptiles and other small creatures. Their skin secretes a toxin that can also kill native predators.

Feral cats kill millions of birds in Australia each year, while foxes, pigs and camels cause widespread ecological damage across the country.

Yellow crazy ants are one of the world’s worst invasive species. Authorities believe they arrived in Australia accidentally through shipping ports. They have been recorded in Queensland and New South Wales states as well as the Northern Territory. The ants are highly aggressive and spray formic acid, which burns the skin of their prey, including small mammals, turtle hatchlings and bird chicks.

Artists Push for US Copyright Reforms on AI, But Tech Industry Says Not So Fast

Country singers, romance novelists, video game artists and voice actors are appealing to the U.S. government for relief — as soon as possible — from the threat that artificial intelligence poses to their livelihoods.

“Please regulate AI. I’m scared,” wrote a podcaster concerned about his voice being replicated by AI in one of thousands of letters recently submitted to the U.S. Copyright Office.

Technology companies, by contrast, are largely happy with the status quo that has enabled them to gobble up published works to make their AI systems better at mimicking what humans do.

The nation’s top copyright official hasn’t yet taken sides. She told The Associated Press she’s listening to everyone as her office weighs whether copyright reforms are needed for a new era of generative AI tools that can spit out compelling imagery, music, video and passages of text.

“We’ve received close to 10,000 comments,” said Shira Perlmutter, the U.S. register of copyrights, in an interview. “Every one of them is being read by a human being, not a computer. And I myself am reading a large part of them.”

What’s at stake?

Perlmutter directs the U.S. Copyright Office, which registered more than 480,000 copyrights last year covering millions of individual works but is increasingly being asked to register works that are AI-generated. So far, copyright claims for fully machine-generated content have been soundly rejected because copyright laws are designed to protect works of human authorship.

But, Perlmutter asks, as humans feed content into AI systems and give instructions to influence what comes out, “is there a point at which there’s enough human involvement in controlling the expressive elements of the output that the human can be considered to have contributed authorship?”

That’s one question the Copyright Office has put to the public.

A bigger one — the question that’s fielded thousands of comments from creative professions — is what to do about copyrighted human works that are being pulled from the internet and other sources and ingested to train AI systems, often without permission or compensation.

More than 9,700 comments were sent to the Copyright Office, part of the Library of Congress, before an initial comment period closed in late October. Another round of comments is due by December 6. After that, Perlmutter’s office will work to advise Congress and others on whether reforms are needed.

What are artists saying?

Addressing the “Ladies and Gentlemen of the US Copyright Office,” the Family Ties actor and filmmaker Justine Bateman said she was disturbed that AI models were “ingesting 100 years of film” and TV in a way that could destroy the structure of the film business and replace large portions of its labor pipeline.

It “appears to many of us to be the largest copyright violation in the history of the United States,” Bateman wrote. “I sincerely hope you can stop this practice of thievery.”

Airing some of the same AI concerns that fueled this year’s Hollywood strikes, television showrunner Lilla Zuckerman (Poker Face) said her industry should declare war on what is “nothing more than a plagiarism machine” before Hollywood is “co-opted by greedy and craven companies who want to take human talent out of entertainment.”

The music industry is also threatened, said Nashville-based country songwriter Marc Beeson, who’s written tunes for Carrie Underwood and Garth Brooks. Beeson said AI has potential to do good but “in some ways, it’s like a gun — in the wrong hands, with no parameters in place for its use, it could do irreparable damage to one of the last true American art forms.”

While most commenters were individuals, their concerns were echoed by big music publishers — Universal Music Group called the way AI is trained “ravenous and poorly controlled” — as well as author groups and news organizations including The New York Times and The Associated Press.

Is it fair use?

What leading tech companies like Google, Microsoft and ChatGPT-maker OpenAI are telling the Copyright Office is that their training of AI models fits into the “fair use” doctrine that allows for limited uses of copyrighted materials such as for teaching, research or transforming the copyrighted work into something different.

“The American AI industry is built in part on the understanding that the Copyright Act does not proscribe the use of copyrighted material to train Generative AI models,” says a letter from Meta Platforms, the parent company of Facebook, Instagram and WhatsApp. The purpose of AI training is to identify patterns “across a broad body of content,” not to “extract or reproduce” individual works, it added.

So far, courts have largely sided with tech companies in interpreting how copyright laws should treat AI systems. In a defeat for visual artists, a federal judge in San Francisco last month dismissed much of the first big lawsuit against AI image-generators, though he allowed some of the case to proceed.

Most tech companies cite as precedent Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books and showing snippets of them to the public amounted to copyright infringement.

But that’s a flawed comparison, argued former law professor and bestselling romance author Heidi Bond, who writes under the pen name Courtney Milan. Bond said she agrees that “fair use encompasses the right to learn from books,” but Google Books obtained legitimate copies held by libraries and institutions, whereas many AI developers are scraping works of writing through “outright piracy.”

Perlmutter said this is what the Copyright Office is trying to help sort out.

“Certainly, this differs in some respects from the Google situation,” Perlmutter said. “Whether it differs enough to rule out the fair use defense is the question in hand.”

Advertisers Flee Elon Musk’s X Amid Concerns of Antisemitism Backlash

Advertisers are fleeing social media platform X over concerns about their ads showing up next to pro-Nazi content and hate speech on the site in general, with billionaire owner Elon Musk inflaming tensions with his own posts endorsing an antisemitic conspiracy theory.

IBM said this week that it stopped advertising on X after a report said its ads were appearing alongside material praising Nazis — a fresh setback as the platform, formerly known as Twitter, tries to win back big brands and their ad dollars, X’s main source of revenue.

The liberal advocacy group Media Matters said in a report Thursday that ads from Apple, Oracle, NBCUniversal’s Bravo network and Comcast also were placed next to antisemitic material on X.

“IBM has zero tolerance for hate speech and discrimination and we have immediately suspended all advertising on X while we investigate this entirely unacceptable situation,” the company said in a statement.

Apple, Oracle, NBCUniversal and Comcast didn’t respond immediately to requests seeking comment on their next steps.

The European Union’s executive branch said separately Friday it is pausing advertising on X and other social media platforms, in part because of a surge in hate speech. Later in the day, Disney, Lionsgate and Paramount Global also said they were suspending or pausing advertising on X.

Musk sparked outcry this week with his own tweets responding to a user who accused Jews of hating white people and professing indifference to antisemitism. “You have said the actual truth,” Musk tweeted in a reply Wednesday.

Musk has faced accusations of tolerating antisemitic messages on the platform since purchasing it last year, and the content on X has gained increased scrutiny since the war between Israel and Hamas began.

“We condemn this abhorrent promotion of antisemitic and racist hate in the strongest terms, which runs against our core values as Americans,” White House spokesperson Andrew Bates said Friday in response to Musk’s tweet.

X CEO Linda Yaccarino said X’s “point of view has always been very clear that discrimination by everyone should STOP across the board.”

“I think that’s something we can and should all agree on,” she tweeted Thursday.

Yaccarino, a former NBCUniversal executive, was hired by Musk to rebuild ties with advertisers who fled after he took over, concerned that his easing of content restrictions was allowing hateful and toxic speech to flourish and would harm their brands.

“When it comes to this platform — X has also been extremely clear about our efforts to combat antisemitism and discrimination. There’s no place for it anywhere in the world — it’s ugly and wrong. Full stop,” Yaccarino said.

Media Matters and Anti-Defamation League

The accounts that Media Matters found posting antisemitic material will no longer be monetizable and the specific posts will be labeled “sensitive media,” according to a statement from X. Still, Musk decried Media Matters as “an evil organization.”

The head of the Anti-Defamation League also hit back at Musk’s tweets this week, in the latest clash between the prominent Jewish civil-rights organization and the billionaire businessman.

“At a time when antisemitism is exploding in America and surging around the world, it is indisputably dangerous to use one’s influence to validate and promote antisemitic theories,” ADL CEO Jonathan Greenblatt said on X.

Musk also tweeted this week that he was “deeply offended by ADL’s messaging and any other groups who push de facto anti-white racism or anti-Asian racism or racism of any kind.”

The group has previously accused Musk of allowing antisemitism and hate speech to spread on the platform and amplifying the messages of neo-Nazis and white supremacists who want to ban the ADL.

European Commission steps back

The European Commission, meanwhile, said it’s putting all its social media ad efforts on hold because of an “alarming increase in disinformation and hate speech” on platforms in recent weeks.

The commission, the 27-nation EU’s executive arm, said it is advising its services to “refrain from advertising at this stage on social media platforms where such content is present,” adding that the freeze doesn’t affect its official accounts on X.

The EU has taken a tough stance with new rules to clean up social media platforms, and last month it made a formal request to X for information about its handling of hate speech, misinformation and violent terrorist content related to the Israel-Hamas war.

TikTok troubles

X isn’t alone in dealing with problematic content since the Israel-Hamas war began.

On Thursday, TikTok removed the hashtag #lettertoamerica after users on the app posted sympathetic videos about Osama bin Laden’s 2002 letter justifying the terrorist attacks against Americans on 9/11 and criticizing U.S. support for Israel. The Guardian news outlet, which published the transcript of the letter that was being shared, took it down and replaced it with a statement that directed readers to a news article from 2002 that it said provided more context.

The videos garnered widespread attention among X users critical of TikTok, which is owned by Beijing-based ByteDance. TikTok said the letter was not a trend on its platform and blamed an X post by journalist Yashar Ali and media coverage for drawing more engagement to the hashtag.

The short-form video app has faced criticism from Republicans and others who say the platform has been failing to protect Jewish users from harassment and pushing pro-Palestinian content to viewers.

TikTok has aggressively pushed back, saying it’s been taking down antisemitic content and doesn’t manipulate its algorithm to take sides. 

Second SpaceX Starship Launch Presumed Failed Minutes After Reaching Space

SpaceX’s uncrewed Starship spacecraft, developed to carry astronauts to the moon and beyond, was presumed to have failed in space minutes after lifting off Saturday in a second test, after its first attempt to reach space ended in an explosion.

The two-stage rocket ship blasted off from the Elon Musk-owned company’s Starbase launch site near Boca Chica, Texas, soaring roughly 90 kilometers (55 miles) above ground on a planned 90-minute flight into space.

But the rocket’s Super Heavy first-stage booster, though it appeared to achieve a crucial maneuver to separate from the core stage, exploded over the Gulf of Mexico shortly after detaching.

Meanwhile, the Starship upper stage continued toward space, but roughly 10 minutes into the flight a company broadcaster said SpaceX mission control had suddenly lost contact with the vehicle.

“We have lost the data from the second stage. … We think we may have lost the second stage,” SpaceX’s livestream host John Insprucker said.

The launch was the second attempt to fly Starship mounted atop its towering Super Heavy rocket booster, following an April attempt that ended in failure about four minutes after liftoff.

A live SpaceX webcast of Saturday’s launch showed the rocket ship rising from the launch tower into the morning sky as the Super Heavy’s cluster of powerful Raptor engines thundered to life.

The test flight’s principal objective was to get Starship off the ground and into space just shy of Earth’s orbit. Doing so would have marked a key step toward achieving SpaceX’s goal of producing a large, multipurpose spacecraft capable of sending people and cargo back to the moon later this decade for NASA, and ultimately to Mars.

Musk — SpaceX’s founder, chief executive and chief engineer — also sees Starship as eventually replacing the company’s workhorse Falcon 9 rocket as the centerpiece of its launch business, which already takes most of the world’s satellites and other commercial payloads into space.

NASA, SpaceX’s primary customer, has a considerable stake in the success of Starship, which the U.S. space agency is counting on to play a central role in its human spaceflight program, Artemis, successor to the Apollo missions of more than a half century ago that put astronauts on the moon for the first time.

The full flight plan called for Starship to reach space just shy of orbit, then plunge through Earth’s atmosphere for a splashdown off Hawaii’s coast. The launch had been scheduled for Friday but was pushed back by a day for a last-minute swap of flight-control hardware.

During its April 20 test flight, the spacecraft blew itself to bits less than four minutes into a planned 90-minute flight that went awry from the start. SpaceX has acknowledged that some of the Super Heavy’s 33 Raptor engines malfunctioned on ascent, and that the lower-stage booster rocket failed to separate as designed from the upper-stage Starship before the flight was terminated.