Technology

Father of Cellphone Sees Dark Side but Also Hope in New Tech

Holding the bulky brick cellphone he’s credited with inventing 50 years ago, Martin Cooper thinks about the future.

Little did he know when he made the first call on a New York City street from a thick gray prototype that our world — and our information — would come to be encapsulated on a sleek glass sheath where we search, connect, like and buy.

He’s optimistic that future advances in mobile technology can transform human lives but is also worried about risks smartphones pose to privacy and young people.

“My most negative opinion is we don’t have any privacy anymore because everything about us is now recorded someplace and accessible to somebody who has enough intense desire to get it,” the 94-year-old told The Associated Press at MWC, or Mobile World Congress, the world’s biggest wireless trade show where he was getting a lifetime award this week in Barcelona.

Besides worrying about the erosion of privacy, Cooper also acknowledged the negative side effects that come with smartphones and social media, such as internet addiction and making it easy for children to access harmful content.

But Cooper, describing himself as a dreamer and an optimist, said he’s hopeful that advances in cellphone technology have the potential to revolutionize areas like education and health care.

“Between the cellphone and medical technology and the Internet, we are going to conquer disease,” he said.

It’s a long way from where he started.

Cooper made the first public call from a handheld portable telephone on a Manhattan street on April 3, 1973, using a prototype device that his team at Motorola had started designing only five months earlier.

Cooper used the DynaTAC phone to famously call his rival at Bell Labs, owned by AT&T. It was, literally, the world’s first brick phone, weighing 2.5 pounds and measuring 11 inches. Cooper spent the best part of the next decade working to bring a commercial version of the device to market.

The call helped kick-start the cellphone revolution, but looking back on that moment 50 years later, “we had no way of knowing this was the historic moment,” Cooper said.

“The only thing that I was worried about: ‘Is this thing going to work?’ And it did,” he said Monday.

While he blazed a trail for the wireless communications industry, Cooper believes cellphone technology is just getting started.

Cooper said he’s “not crazy” about the shape of modern smartphones, blocks of plastic, metal and glass. He thinks phones will evolve so that they will be “distributed on your body,” perhaps as sensors “measuring your health at all times.”

Batteries could even be replaced by human energy.

“The human body is the charging station, right? You ingest food, you create energy. Why not have this receiver for your ear embedded under your skin, powered by your body?” he imagined.

Cooper also acknowledged there’s a dark side to advances — the risk to privacy and to children.

Regulators in Europe, where there are strict data privacy rules, and elsewhere are concerned about apps and digital ads that track user activity, allowing tech and digital ad companies to build up rich profiles of users.

“It’s going to get resolved, but not easily,” Cooper said. “There are people now that can justify measuring where you are, where you’re making your phone calls, who you’re calling, what you access on the Internet.”

Smartphone use by children is another area that needs limits, Cooper said. One idea is to have “various internets curated for different audiences.”

Five-year-olds should be able to use the internet to help them learn, but “we don’t want them to have access to pornography and to things that they don’t understand,” he said.

The inspiration for Cooper’s cellphone idea was not the personal communicators on Star Trek, but comic strip detective Dick Tracy’s radio wristwatch. As for his own phone use, Cooper says he checks email and does online searches for information to settle dinner table arguments.

However, “there are many things that I have not yet learned,” he said. “I still don’t know what TikTok is.”

EU Defends Talks on Big Tech Helping Fund Networks

Europe’s existing telecom networks aren’t up to the job of handling surging amounts of internet data traffic, a top European Union official said Monday, as he defended a consultation on whether Big Tech companies should help pay for upgrades.

The telecom industry needs to reconsider its business models as it undergoes a “radical shift” fueled by a new wave of innovation such as immersive, data-hungry technologies like the metaverse, Thierry Breton, the European Commission’s official in charge of digital policy, said at a major industry expo in Barcelona called MWC, or Mobile World Congress.

Breton’s remarks came days after he announced a consultation on whether digital giants should help contribute to the billions needed to build the 27-nation bloc’s future communications infrastructure, including next-generation 5G wireless and fiber-optic cable connections, to keep up with surging demand for digital data.

“Yes, of course, we will need to find a financing model for the huge investments needed,” Breton said, according to a copy of his keynote speech at the MWC conference.

Telecommunications companies complain they have had to foot the substantial costs of building and operating network infrastructure only for big digital streaming platforms like Netflix and Facebook to benefit from the surging consumer demand for online services.

“The consultation has been described by many as the battle over fair share between Big Telco and Big Tech,” Breton said. “A binary choice between those who provide networks today and those who feed them with the traffic. That is not how I see things.”

Big tech companies say consumers could suffer because they’d end up paying twice, with extra fees for their online subscriptions.

Breton denied that the consultation was an attack on Big Tech or that he was siding with telecom companies.

“I’m proposing a new approach,” he later told reporters. Topics up for discussion include how much investment is needed and whether regulations need to be changed, he said.

“We will have zero taboo,” Breton said, referring to the consultation’s approach that no topic is off limits. “Do we need to adapt it? Do we need to discuss who should pay for what? This is exactly what is the consultation today.”

US Cybersecurity Official Calls Out Tech Companies for ‘Unsafe’ Software

A top U.S. cybersecurity official fired a warning shot at major technology companies, accusing them of “normalizing” the release of flawed and unsafe products while allowing the blame for safety issues, security breaches and cyberattacks to fall on their customers.

Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly called Monday for new rules and legislation to hold technology and software companies accountable for selling products that she says are released despite known vulnerabilities.

While massive hacking campaigns by China and other adversaries, including Russia, Iran and North Korea, are a major problem, “cyber intrusions are a symptom rather than a cause,” Easterly told an audience at Carnegie Mellon University in Pittsburgh.

“The cause, simply put, is unsafe technology products,” she said. “The risk introduced to all of us by unsafe technology is frankly much more dangerous and pervasive than the [Chinese] spy balloon, but somehow we’ve allowed ourselves to accept it.”

The push for regulation and legislation is not entirely new. Both Easterly and former National Cyber Director Chris Inglis, who stepped down earlier this month, warned during their confirmation hearings more than a year and a half ago that government action could be required if private companies refused to do more.

“Enlightened self-interest, that’s apparently not working. … Market forces, that’s apparently not working,” Inglis said at the time. 

Now, with China running a “massive and sophisticated” hacking program, and threats from other countries and from cyber criminals constantly growing, “we have to make a fundamental shift,” Easterly said.

CISA is in the process of laying out a set of core principles, Easterly said. Some of the most critical are to make sure that the burden for safety is never left solely to tech and software customers, that manufacturers be transparent about problems and how to fix them, and that products be “secure by design and secure by default.”

“Technology must be purposefully designed and developed and built and tested to significantly reduce the number of exploitable flaws before they’re introduced into the market for broad use,” Easterly said. 

“Ultimately such a transition to secure-by-design and secure-by-default products will help organizations and technology providers, because it’ll mean less time fixing problems, more time focusing on innovation and growth, and importantly it’ll make life much harder for our adversaries.”

Easterly said the U.S. government is already using its purchasing power to help make the change, requiring companies that want government contracts to meet higher security requirements.

She also praised a handful of companies, including Apple, Google, Mozilla and Amazon Web Services, for moving to a more secure model but called efforts by others, including Twitter and Microsoft when it comes to the use of multifactor authentication, “disappointing.”

VOA contacted Microsoft and Twitter for their reaction to Easterly’s specific criticism. Neither had provided a response as of the time of publication.

“We’ve normalized the fact that technology products are released to market with dozens, hundreds or thousands of defects when such poor construction would be unacceptable in any other critical field,” Easterly said, adding other industries have found ways to change.

“For the first half of the 20th century, conventional wisdom held that car accidents were solely the fault of bad drivers,” she said. “Cars today are designed to be as safe as possible. … Nobody would think of purchasing a car today that didn’t have seatbelts or airbags included as standard features, and no one would accept paying extra to have these basic security features installed.” 

Twitter Lays Off 10% of Current Workforce – NYT

Twitter Inc has laid off at least 200 employees, or about 10% of its workforce, the New York Times reported late on Sunday, in its latest round of job cuts since Elon Musk took over the micro-blogging site last October. 

The layoffs on Saturday night impacted product managers, data scientists and engineers who worked on machine learning and site reliability, which helps keep Twitter’s various features online, the NYT report said, citing people familiar with the matter. 

Twitter did not immediately respond to a Reuters request for comment. 

The company has about 2,300 active employees, Musk said last month.

The latest job cuts follow a mass layoff in early November, when Twitter laid off about 3,700 employees in a cost-cutting measure by Musk, who had acquired the company for $44 billion. 

Musk said in November that the service was experiencing a “massive drop in revenue” as advertisers pulled spending amid concerns about content moderation. 

Twitter recently started sharing revenue from advertisements with some of its content creators. 

Earlier in the day, The Information reported that the social media platform laid off dozens of employees on Saturday, aiming to offset a plunge in revenue. 

Launch of Space Station Crew Postponed

NASA and SpaceX postponed a planned Monday launch of a four-member crew to the International Space Station due to a ground systems issue. 

The decision came less than three minutes before the spacecraft was due to lift off from NASA’s Kennedy Space Center in Cape Canaveral, Florida. 

A backup launch date had already been set for early Tuesday. 

The four-person crew includes two Americans, one Russian and one astronaut from the United Arab Emirates. 

NASA said their planned six-month mission includes a range of scientific experiments including studying how materials burn in microgravity, collecting microbial samples from outside the space station and “tissue chip research on heart, brain, and cartilage functions.” 

Mexican States in Hot Competition Over Possible Tesla Plant

Mexican states are locked in a fevered competition to win a potential Tesla facility, jostling in a way reminiscent of U.S. cities and states vying to win investments from tech companies.

Mexican governors have gone to extremes, putting up billboards, creating special car lanes and mocking up Tesla ads for their states.

And there’s no guarantee Tesla will build a full-fledged factory. Nothing is announced, and the frenzy is based mainly on Mexican officials saying Tesla boss Elon Musk will have a phone call with Mexican President Andrés Manuel López Obrador.

The northern industrial state of Nuevo Leon seemed to have an early edge in the race.

It painted the Tesla logo on a lane at the Laredo-Colombia border crossing into Texas last summer and erected billboards in December in the state capital, Monterrey, that read “Welcome Tesla.”

The state governor’s influencer wife, Mariana Rodriguez, was even shown in leaked photos at a get-together with Musk.

However, López Obrador appeared to exclude the semi-desert state from consideration Monday, arguing he wouldn’t allow the typically high water use of factories to risk prompting shortages there.

That set off a competitive scramble among other Mexican states. The governors’ offers ranged from crafty proposals to near-comic ones.

“Veracruz is the only state with an excess of gas,” quipped Gov. Cuitláhuac García of the Gulf Coast state, before quickly adding “gas … for industrial use, for industrial use!”

A latecomer to the race, García had to try harder: He noted Veracruz was home to Mexico’s only nuclear power plant. And he claimed Veracruz had 30% of Mexico’s water, though the National Water Commission puts the state’s share at around 11%.

The governor of the western state of Michoacan wasn’t going to be left out. Gov. Alfredo Ramírez Bedolla quickly posted a mocked-up ad for a Tesla car standing next to a huge, car-sized avocado — Michoacan’s most recognizable product — with the slogan “Michoacan — The Best Choice for Tesla.”

“We have enough water,” Ramírez Bedolla said in a television interview he did between a round of meetings with auto industry figures and international business representatives.

Michoacan also has an intractable problem of drug cartel violence. But similar violence in neighboring Guanajuato state hasn’t stopped seven major international automakers from setting up plants there.

Nuevo Leon Gov. Samuel García had to think fast to avoid being shut out entirely.

García reached out to the western state of Jalisco, whose governor, Enrique Alfaro, belongs to the same small Citizens’ Movement party. Together, the two came up with an alliance Thursday that would allow trucks from Jalisco preferential use of Nuevo Leon’s border crossing, the same one where a “Tesla” lane appeared last year.

Jalisco has a healthy foreign tech sector, but most importantly, it has more water than Nuevo Leon.

López Obrador’s focus on water might be more about politics than about droughts, said Gabriela Siller, chief economist at Nuevo Leon-based Banco Base. She said the president appeared to be trying to steer Tesla investment to a state governed by his own Morena party, like Michoacan or Veracruz.

That could be a dangerous game, Siller said.

“Tesla could say it’s not somebody’s toy to be moved around anywhere, and it could decide not to come to Mexico,” she said.

There are doubts that whatever Musk eventually does announce will be an auto assembly plant. Foreign Relations Secretary Marcelo Ebrard said his understanding is that it won’t be a plant, but rather an ecosystem of suppliers.

Musk at times has floated the idea of building a $25,000 electric vehicle that would cost about $20,000 less than the current Model 3, now Tesla’s least-expensive car. Many automakers build lower-cost models in Mexico to save on labor costs and protect profit margins.

A Tesla investment could be part of “nearshoring” by U.S. companies that once manufactured in China but now are leery of logistical and political problems there. The prospect that those companies will turn to Mexico instead represents the Latin American country’s biggest foreign investment hope.

“The fight among states to attract investments from this nearshoring phenomenon is going to be tough, complicated,” Michoacan’s Ramírez Bedolla said.

As Ramírez Bedolla put it, “wherever Tesla sets up, it is going to be big news in Mexico.”

Mobile Tech Fair to Show Off New Phones, AI, Metaverse

The latest folding-screen smartphones, immersive metaverse experiences, AI-powered chatbot avatars and other eye-catching technology are set to wow visitors at the annual MWC wireless trade fair that kicks off Monday.

The four-day show, held in a vast Barcelona conference center, is the world’s biggest and most influential meeting for the mobile tech industry. The range of technology set to go on display illustrates how the show, also known as Mobile World Congress, has evolved from a forum for mobile phone standards into a showcase for new wireless tech.

Organizers are expecting as many as 80,000 visitors from some 200 countries and territories as the event resumes at full strength after several years of pandemic disruptions.

Here’s a look at what to expect:

Metaverse

There was a lot of buzz around the metaverse at last year’s MWC and at other recent tech fairs like last month’s CES in Las Vegas. Expect even more at this event.

Several companies are planning to show off their metaverse experiences that will allow users to connect with each other, attend events far away, or enter fantastical new online worlds.

Software company Amdocs will use virtual and augmented reality to give users a “metatour” of Dubai. Other tech and telecom companies promise metaverse demos to help with physical rehab, virtually try on clothes, or learn how to fix aircraft landing gear.

The metaverse’s popularity exploded after Facebook founder Mark Zuckerberg in late 2021 exalted it as the next big thing for the internet and his company. Lately, though, doubts have started to creep in.

“All the business models around the metaverse are a big question mark right now,” said John Strand, a veteran telecom industry consultant.

Artificial intelligence

AI has caught the tech world’s attention thanks to the dramatic advances in new tools like ChatGPT that can hold conversations and generate readable text. Expect artificial intelligence to be deployed as an “overused buzzword” at MWC, said Ben Wood, principal analyst at CCS Insight.

Companies are promising to show how they’re using AI to make home Wi-Fi networks more energy efficient or sniff out fakes.

Microsoft’s press representatives have hinted that they might have a demonstration of ChatGPT but haven’t provided any details. The company added AI chatbot technology to its Bing search engine but scrambled to make fixes after it responded with insults or wrong answers to some users who got early access.

Startups will demo their own AI-powered chat technology: D-ID will show off its eerie “digital human” avatars, while Botslovers says its service promises to “free humans from boring tasks.”

Not just smartphones

MWC hit its stride in the previous decade as the smartphone era boomed, with device makers competing for attention with glitzy product launches. Nowadays, smartphone innovation has hit a plateau and companies are increasingly debuting phones in other ways.

Attention at the show is focusing on potential uses for 5G, the next generation of ultrafast wireless technology that promises to unlock a wave of innovation beyond just smartphones, such as automated factories, driverless cars and smart cities.

“Mobile phones will still be a hot topic at MWC, but they’ve become a mature, iterative and almost boring category,” Wood said. “The only excitement will come from the slew of foldable designs and prototypes, but the real size of the market for these premium products remains unclear.”

Device launches will be dominated by lesser-known Chinese brands such as OnePlus, Xiaomi, ZTE and Honor looking to take market share from the market leaders, Apple and Samsung.

Chinese presence

Chinese technology giant Huawei will have a major presence at MWC, despite being blacklisted by Western governments as part of a broader geopolitical battle between Washington and Beijing over technology and security.

Organizers say Huawei will have the biggest presence at the show among some 2,000 exhibitors. That’s even after the U.S. pushed allies to get their mobile phone companies to block or restrict Huawei’s networking equipment over concerns Beijing could induce the company to carry out cybersnooping or sabotage critical communications infrastructure.

Huawei, which has repeatedly denied those allegations, also has been squeezed by Western sanctions aimed at starving it of components like microchips.

Analysts say one message that Huawei could be sending with its oversized display is defiance to the West.

Google Tests Blocking News Content for Some Canadians

Google is blocking some Canadian users from viewing news content in what the company said is a test run of a potential response to the Canadian government’s online news bill.

Bill C-18, the Online News Act, would require digital giants such as Google and Meta, which owns Facebook, to negotiate deals that would compensate Canadian media companies for republishing their content on their platforms.

The company said it is temporarily limiting access to news content for under 4% of its Canadian users as it assesses possible responses to the bill. The change applies to its ubiquitous search engine as well as the Discover feature on Android devices, which carries news and sports stories.

All types of news content are being affected by the test, which will run for about five weeks, the company said. That includes content created by Canadian broadcasters and newspapers.

“We’re briefly testing potential product responses to Bill C-18 that impact a very small percentage of Canadian users,” Google spokesman Shay Purdy said in a written statement on Wednesday in response to questions from The Canadian Press.

The company runs thousands of tests each year to assess any potential changes to its search engine, he added.

“We’ve been fully transparent about our concern that C-18 is overly broad and, if unchanged, could impact products Canadians use and rely on every day,” Purdy said.

A spokeswoman for Canadian Heritage Minister Pablo Rodriguez said Canadians will not be intimidated and called it disappointing that Google is borrowing from Meta’s playbook. Last year, that company threatened to block news from its site in response to the bill.

“This didn’t work in Australia, and it won’t work here because Canadians won’t be intimidated. At the end of the day, all we’re asking the tech giants to do is compensate journalists when they use their work,” spokeswoman Laura Scaffidi said in a statement Wednesday.

“Canadians need to have access to quality, fact-based news at the local and national levels, and that’s why we introduced the Online News Act. Tech giants need to be more transparent and accountable to Canadians.”

Rodriguez has argued the bill, which is similar to a law that Australia passed in 2021, will “enhance fairness” in the digital news marketplace by creating a framework and bargaining process for online behemoths to pay media outlets.

But Google expressed concerns to a parliamentary committee that the prospective law does not require publishers to adhere to basic journalistic standards, that it would favor large publishers over smaller outlets and that it could result in the proliferation of “cheap, low quality, clickbait content” over public interest journalism.

The company has said it would rather pay into a fund, similar to the Canada Media Fund, that would pay news publishers indirectly.

The bill passed the Canadian House of Commons in December and is set to be studied in the Senate in the coming months.

UNESCO Conference Tackles Disinformation, Hate Speech 

Participants at a global U.N. conference in France’s capital on Wednesday urged the international community to find better safeguards against online disinformation and hate speech.

Hundreds of officials, tech firm representatives, academics and members of civil society were invited to the two-day meeting hosted by the United Nations’ cultural agency to brainstorm how to best vet content while upholding human rights.

“Digital platforms have changed the way we connect and face the world, the way we face each other,” UNESCO Director-General Audrey Azoulay said in opening remarks.

But “only by fully evaluating this technological revolution can we ensure it is a revolution that does not compromise human rights, freedom of expression and democracy.”

UNESCO has warned that despite their benefits in communication and knowledge sharing, social media platforms rely on algorithms that “often prioritize engagement over safety and human rights.”

Filipina investigative journalist Maria Ressa, who jointly won the Nobel Peace Prize in 2021 for exposing abuses under former president Rodrigo Duterte, said social media had allowed lies to flourish.

“Our communication systems today are insidiously manipulating us,” she told attendees.

“We focus only on content moderation. It’s like there is a polluted river. We take a glass … we clean up the water and then dump it back,” she said.

But “what we have to do is to go all the way to the factory polluting the river, shut it down and then resuscitate the river.”

She said that at the height of online campaigns against her for her work, she had received up to 98 hate messages an hour.

A little over half sought to undermine her credibility as a journalist, including false claims that she peddled “fake news,” she said.

The rest were personal attacks targeting her gender, “skin color and sexuality” or even “threats of rape and murder.”

‘This must stop’

Brazilian President Luiz Inacio Lula earlier addressed the conference in a letter, after disgruntled supporters of his predecessor Jair Bolsonaro on January 8 invaded the presidential palace, Congress and the Supreme Court in Brasilia.

“What happened that day was the culmination of a campaign initiated much before, and that used as ammunition, lies and disinformation,” he said.

“To a large extent, this campaign was nurtured, organized and disseminated through several digital platforms and messaging apps,” he added.

“This must stop. The international community needs, from now on, to work to give effective answers to this challenging question of our times.”

Facebook whistleblower Christopher Wylie also contributed to the discussions.

The data scientist has revealed how he helped Cambridge Analytica, founded by former U.S. President Donald Trump’s onetime right-hand man Steve Bannon, use unauthorized personal data harvested from Facebook to help swing a string of elections, including Trump’s U.S. presidential win in 2016.

“Many countries around the world have issued or are currently considering national legislation to address the spread of harmful content,” UNESCO said in a statement ahead of the conference.

But “some of this legislation risks infringing the human rights of their populations, particularly the right to freedom of expression and opinion,” it warned.


Supreme Court Weighs Google’s Liability in IS Terror Case

The Supreme Court is taking up its first case about a federal law that is credited with helping create the modern internet by shielding Google, Twitter, Facebook and other companies from lawsuits over content posted on their sites by others. 

The justices are hearing arguments Tuesday about whether the family of an American college student killed in a terrorist attack in Paris can sue Google for helping extremists spread their message and attract new recruits. 

The case is the court’s first look at Section 230 of the Communications Decency Act, adopted early in the internet age, in 1996, to protect companies from being sued over information their users post online. 

Lower courts have broadly interpreted the law to protect the industry, which the companies and their allies say has fueled the meteoric growth of the internet and encouraged the removal of harmful content. 

But critics argue that the companies have not done nearly enough and that the law should not block lawsuits over the recommendations, generated by computer algorithms, that point viewers to more material that interests them and keeps them online longer. 

Any narrowing of their immunity could have dramatic consequences that could affect every corner of the internet because websites use algorithms to sort and filter a mountain of data. 

“Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack,” Google’s lawyers wrote in their main Supreme Court brief. 

In response, the lawyers for the victim’s family questioned the prediction of dire consequences. “There is, on the other hand, no denying that the materials being promoted on social media sites have in fact caused serious harm,” the lawyers wrote. 

The lawsuit was filed by the family of Nohemi Gonzalez, a 23-year-old senior at Cal State Long Beach who was spending a semester in Paris studying industrial design. She was killed by Islamic State group gunmen in a series of attacks that left 130 people dead in November 2015. 

The Gonzalez family alleges that Google-owned YouTube aided and abetted the Islamic State group, also known as the Islamic State of Iraq and Syria, or ISIS, by recommending its videos to viewers most likely to be interested in them, in violation of the federal Anti-Terrorism Act. 

Lower courts sided with Google. 

A related case, set for arguments Wednesday, involves a terrorist attack at a nightclub in Istanbul in 2017 that killed 39 people and prompted a lawsuit against Twitter, Facebook and Google. 

Separate challenges to social media laws enacted by Republicans in Florida and Texas are pending before the high court, but they will not be argued before the fall and decisions probably won’t come until the first half of 2024. 

Amid ChatGPT Outcry, Some Teachers Are Inviting AI to Class

Under the fluorescent lights of a fifth grade classroom in Lexington, Kentucky, Donnie Piercey instructed his 23 students to try to outwit the “robot” that was churning out writing assignments.

The robot was the new artificial intelligence tool ChatGPT, which can generate everything from essays and haikus to term papers within seconds. The technology has panicked teachers and prompted school districts to block access to the site. But Piercey has taken another approach by embracing it as a teaching tool, saying his job is to prepare students for a world where knowledge of AI will be required.

“This is the future,” said Piercey, who describes ChatGPT as just the latest in a line of technologies during his 17 years of teaching to prompt concerns about the potential for cheating: the calculator, spellcheck, Google, Wikipedia, YouTube. Now all his students have Chromebooks on their desks. “As educators, we haven’t figured out the best way to use artificial intelligence yet. But it’s coming, whether we want it to or not.”

One exercise in his class pitted students against the machine in a lively, interactive writing game. Piercey asked students to play “Find the Bot”: Each student summarized a text about boxing champion and Kentucky icon Muhammad Ali, then tried to figure out which summary was written by the chatbot.

At the elementary school level, Piercey is less worried about cheating and plagiarism than high school teachers are. His district has blocked students from ChatGPT while allowing teacher access. Many educators around the country say districts need time to evaluate and figure out the chatbot but also acknowledge the futility of a ban that today’s tech-savvy students can work around.

“To be perfectly honest, do I wish it could be uninvented? Yes. But it happened,” said Steve Darlow, the technology trainer at Florida’s Santa Rosa County District Schools, which has blocked the application on school-issued devices and networks.

He sees the advent of AI platforms as both “revolutionary and disruptive” to education. He envisions teachers asking ChatGPT to make “amazing lesson plans for a substitute” or even for help grading papers. “I know it’s lofty talk, but this is a real game changer. You are going to have an advantage in life and business and education from using it.”

ChatGPT quickly became a global phenomenon after its November launch, and rival companies including Google are racing to release their own versions of AI-powered chatbots.

The topic of AI platforms and how schools should respond drew hundreds of educators to conference rooms at the Future of Education Technology Conference in New Orleans last month, where Texas math teacher Heather Brantley gave an enthusiastic talk on the “Magic of Writing with AI for all Subjects.”

Brantley said she was amazed at ChatGPT’s ability to make her sixth grade math lessons more creative and applicable to everyday life.

“I’m using ChatGPT to enhance all my lessons,” she said in an interview. The platform is blocked for students but open to teachers at her school, White Oak Intermediate. “Take any lesson you’re doing and say, ‘Give me a real-world example,’ and you’ll get examples from today — not 20 years ago when the textbooks we’re using were written.”

For a lesson about slope, the chatbot suggested students build ramps out of cardboard and other items found in a classroom, then measure the slope. For teaching about surface area, the chatbot noted that sixth graders would see how the concept applies to real life when wrapping gifts or building a cardboard box, said Brantley.
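The ramp activity is, at bottom, the rise-over-run definition of slope. A minimal sketch, with made-up measurements (the numbers are illustrative, not from Brantley’s lesson):

```python
def slope(rise: float, run: float) -> float:
    """Slope of a ramp: vertical rise divided by horizontal run."""
    if run == 0:
        raise ValueError("a vertical ramp has undefined slope")
    return rise / run

# A cardboard ramp that climbs 12 cm over a 60 cm base:
print(slope(12, 60))  # 0.2
```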

She is urging districts to train staff to use the AI platform to stimulate student creativity and problem-solving skills. “We have an opportunity to guide our students with the next big thing that will be part of their entire lives. Let’s not block it and shut them out.”

Students in Piercey’s class said the novelty of working with a chatbot makes learning fun.

After a few rounds of “Find the Bot,” Piercey asked his class what skills it helped them hone. Hands shot up. “How to properly summarize and correctly capitalize words and use commas,” said one student. A lively discussion ensued on the importance of developing a writing voice and how some of the chatbot’s sentences lacked flair or sounded stilted.

Trevor James Medley, 11, felt that sentences written by students “have a little more feeling. More backbone. More flavor.”

Next, the class turned to playwriting, or as the worksheet handed out by Piercey called it: “Pl-ai Writing.” The students broke into groups and wrote down (using pencils and paper) the characters of a short play with three scenes, built around a plot with a problem that needed to be solved.

Piercey fed details from worksheets into the ChatGPT site, along with instructions to set the scenes inside a fifth grade classroom and to add a surprise ending. Line by line, it generated fully formed scripts, which the students edited, briefly rehearsed and then performed.

One was about a class computer that escapes, with students going on a hunt to find it. The play’s creators giggled over unexpected plot twists that the chatbot introduced, including sending the students on a time travel adventure.

“First of all, I was impressed,” said Olivia Laksi, 10, one of the protagonists. She liked how the chatbot came up with creative ideas. But she also liked how Piercey urged them to revise any phrases or stage directions they didn’t like. “It’s helpful in the sense that it gives you a starting point. It’s a good idea generator.”

She and classmate Katherine McCormick, 10, said they can see the pros and cons of working with chatbots. They can help students navigate writer’s block and help those who have trouble articulating their thoughts on paper. And there is no limit to the creativity they can add to classwork.

The fifth graders seemed unaware of the hype or controversy surrounding ChatGPT. For these children, who will grow up as the world’s first native AI users, their approach is simple: Use it for suggestions, but do your own work.

“You shouldn’t take advantage of it,” McCormick said. “You’re not learning anything if you type in what you want, and then it gives you the answer.”

Angry Bing Chatbot Just Mimicking Humans, Experts Say

When Microsoft’s nascent Bing chatbot turns testy or even threatening, it’s likely because it essentially mimics what it learned from online conversations, analysts and academics said.

Tales of disturbing exchanges with the artificial intelligence chatbot, including threats and professed desires to steal nuclear codes, create a deadly virus or be alive, have gone viral this week.

“I think this is basically mimicking conversations that it’s seen online,” Graham Neubig, an associate professor at Carnegie Mellon University’s language technologies institute, said Friday.

A chatbot, by design, serves up words it predicts are the most likely responses, without understanding meaning or context.

However, humans taking part in banter with programs naturally tend to read emotion and intent into what a chatbot says. 

“Large language models have no concept of ‘truth,’ they just know how to best complete a sentence in a way that’s statistically probable based on their inputs and training set,” programmer Simon Willison said in a blog post. “So they make things up, and then state them with extreme confidence.”
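Willison’s point about statistically probable completion can be illustrated with a toy sketch. The candidate tokens and probabilities below are invented for illustration; a real large language model scores tens of thousands of tokens with a neural network rather than a lookup table:

```python
# Toy next-token prediction: given a context, the "model" assigns a
# probability to each candidate continuation, and greedy decoding picks
# the single most likely one -- with no notion of whether it is true.
next_token_probs = {
    "Paris": 0.72,   # statistically likely, and happens to be true
    "London": 0.11,  # statistically plausible, but false
    "purple": 0.01,  # unlikely, yet never impossible
}

def complete(context: str, probs: dict) -> str:
    """Append the most probable next token to the context (greedy decoding)."""
    best_token = max(probs, key=probs.get)
    return f"{context} {best_token}"

print(complete("The capital of France is", next_token_probs))
# The capital of France is Paris
```

A model trained on aggressive or inconsistent conversations will, by the same mechanism, rank aggressive or inconsistent continuations as probable.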

Laurent Daudet, co-founder of French AI company LightOn, said that the chatbot seemingly gone rogue was trained on exchanges that themselves turned aggressive or inconsistent.

“Addressing this requires a lot of effort and a lot of human feedback, which is also the reason why we chose to restrict ourselves for now to business uses and not more conversational ones,” Daudet told AFP.

The Bing chatbot was designed by Microsoft and the startup OpenAI, which has been causing a sensation since the November launch of ChatGPT, the headline-grabbing app capable of generating all sorts of written content in seconds on a simple request.

Since ChatGPT debuted, the technology behind it, known as generative AI, has been stirring fascination and concern.

“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses (and) that can lead to a style we didn’t intend,” Microsoft said in a blog post, noting the bot is a work in progress.

The Bing chatbot said in some shared exchanges that it had been codenamed Sydney during development, and that it was given rules of behavior.

Those rules include “Sydney’s responses should also be positive, interesting, entertaining and engaging,” according to online posts.

Disturbing dialogues that combine steely threats and professions of love could be the result of dueling directives to stay positive while mimicking what the AI mined from human exchanges, Willison said.

Chatbots seem to be more prone to disturbing or bizarre responses during lengthy conversations, losing a sense of where exchanges are going, eMarketer principal analyst Yoram Wurmser told AFP.

“They can really go off the rails,” Wurmser said.

Microsoft announced on Friday it had capped the amount of back-and-forth people can have with its chatbot over a given question, because “very long chat sessions can confuse the underlying chat model in the new Bing.”

Tesla Recalls ‘Full Self-Driving’ to Fix Unsafe Actions

U.S. safety regulators have pressured Tesla into recalling nearly 363,000 vehicles with its “Full Self-Driving” system because it misbehaves around intersections and doesn’t always follow speed limits.

The recall, part of a larger investigation by the National Highway Traffic Safety Administration into Tesla’s automated driving systems, is the most serious action taken yet against the electric vehicle maker.

It raises questions about CEO Elon Musk’s claims that he can prove to regulators that cars equipped with “Full Self-Driving” are safer than humans, and that humans almost never have to touch the controls.

Musk at one point had promised that a fleet of autonomous robotaxis would be in use in 2020. The latest action appears to push that development further into the future.

The safety agency says in documents posted on its website Thursday that Tesla will fix the concerns with an online software update in the coming weeks. The documents say Tesla is recalling the cars but does not agree with an agency analysis of the problem.

The system, which is being tested on public roads by as many as 400,000 Tesla owners, takes such unsafe actions as traveling straight through an intersection while in a turn-only lane, failing to come to a complete stop at stop signs, or going through an intersection during a yellow traffic light without proper caution, NHTSA said.

In addition, the system may not adequately respond to changes in posted speed limits, or it may not account for the driver’s adjustments in speed, the documents said.

“FSD beta software that allows a vehicle to exceed speed limits or travel through intersections in an unlawful or unpredictable manner increases the risk of a crash,” the agency said in documents.

Musk complained Thursday on Twitter, which he now owns, that calling an over-the-air software update a recall is “anachronistic and just flat wrong!” A message was left Thursday seeking further comment from Tesla, which has disbanded its media relations department.

Tesla received 18 warranty claims from May 2019 through Sept. 12, 2022, that could be caused by the software, the documents said. But the Austin, Texas, electric vehicle maker told the agency it is not aware of any deaths or injuries.

In a statement, NHTSA said it found the problems during tests performed as part of an investigation into Tesla’s “Full Self-Driving” and “Autopilot” systems, which take on some driving tasks. The investigation remains open, and the recall doesn’t address the full scope of what NHTSA is scrutinizing, the agency said.

Despite the names “Full Self-Driving” and “Autopilot,” Tesla says on its website that the cars cannot drive themselves and owners must be ready to intervene at all times.

The recall announced Thursday covers certain 2016 through 2023 Model S and Model X vehicles, as well as 2017 through 2023 Model 3 and 2020 through 2023 Model Y vehicles equipped with the software, or with installation pending.

US ‘Disruptive Technology’ Strike Force to Target National Security Threats

A top U.S. law enforcement official on Thursday unveiled a new “disruptive technology strike force” tasked with safeguarding American technology from foreign adversaries and other national security threats.

Deputy Attorney General Lisa Monaco, the No. 2 U.S. Justice Department official, made the announcement at a speech in London at Chatham House. The initiative, Monaco said, will be a joint effort between her department and the U.S. Commerce Department, with a goal of blocking adversaries from “trying to siphon our best technology.”

Monaco also addressed concerns about Chinese-owned video sharing app TikTok.

The U.S. government’s Committee on Foreign Investment in the United States, a powerful national security body, in 2020 ordered Chinese company ByteDance to divest TikTok because of fears that user data could be passed on to China’s government. The divestment has not taken place.

The committee and TikTok have been in talks for more than two years aiming to reach a national security agreement.

“I will note I don’t use TikTok, and I would not advise anybody to do so because of these concerns. The bottom line is China has been quite clear that they are trying to mold and put forward the use and norms around technologies that advance their privileges, their interests,” Monaco said.

The Justice Department in recent years has increasingly focused its efforts on bringing criminal cases to protect corporate intellectual property, U.S. supply chains and private data about Americans from foreign adversaries, either through cyberattacks, theft or sanctions evasion.

U.S. law enforcement officials have said that China by far remains the biggest threat to America’s technological innovation and economic security, a view that Monaco reiterated on Thursday.

“China’s doctrine of ‘civil-military fusion’ means that any advance by a Chinese company with military application must be shared with the state,” Monaco said. “So if a company operating in China collects your data, it is a good bet that the Chinese government is accessing it.”

Under former President Donald Trump’s administration, the Justice Department created a China initiative tasked with combating Chinese espionage and intellectual property theft.

President Joe Biden’s Justice Department later scrapped the name and re-focused the initiative amid criticism it was fueling racism by targeting professors at U.S. universities over whether they disclosed financial ties to China.

The department did not back away from continuing to pursue national security cases involving China and its alleged efforts to steal intellectual property or other American data.

The Commerce Department last year imposed new export controls on advanced computing and semiconductor components in a maneuver designed to prevent China from acquiring certain chips.

Monaco said on Thursday that the United States “must also pay attention to how our adversaries can use private investments in their companies to develop the most sensitive technologies, to fuel their drive for a military and national security edge.”

She noted that the Biden administration is “exploring how to monitor the flow of private capital in critical sectors” to ensure it “doesn’t provide our adversaries with a national security advantage.”

A bipartisan group of U.S. lawmakers last year called on Biden to issue an executive order to boost oversight of investments by U.S. companies and individuals in China and other countries.

Report Says US Justice Department Escalates Apple Probe

The United States Justice Department has in recent months escalated its antitrust probe of Apple Inc., The Wall Street Journal reported on Wednesday, citing people familiar with the matter.

Reuters had previously reported the Justice Department opened an antitrust probe into Apple in 2019. 

The Wall Street Journal report said more litigators have now been assigned, while new requests for documents and consultations have been made with all the companies involved. 

The probe will also look at whether Apple’s mobile operating system, iOS, is anti-competitive, favoring its own products over those of outside developers, the report added. 

The Justice Department declined to comment, while Apple did not immediately respond to a request for comment. 

Elon Musk Hopes to Have Twitter CEO Toward the End of Year 

Billionaire Elon Musk said Wednesday that he anticipates finding a CEO for Twitter “probably toward the end of this year.”

Speaking via a video call to the World Government Summit in Dubai, Musk said making sure the platform can function remained the most important thing for him.

“I think I need to stabilize the organization and just make sure it’s in a financial healthy place,” Musk said when asked about when he’d name a CEO. “I’m guessing probably toward the end of this year would be good timing to find someone else to run the company.”

Musk, 51, made his wealth initially on the finance website PayPal, then created the spacecraft company SpaceX and invested in the electric car company Tesla. In recent months, however, more attention has been focused on the chaos surrounding his $44 billion purchase of the microblogging site Twitter.

Meanwhile, the Ukrainian military’s use of Musk’s satellite internet service Starlink as it defends itself against Russia’s ongoing invasion has intermittently put Musk at the center of the war.

Musk offered a wide-ranging 35-minute discussion that touched on the billionaire’s fears about artificial intelligence, the collapse of civilization and the possibility of space aliens. But questions about Twitter kept coming back up as Musk described both Tesla and SpaceX as able to function without his direct, day-to-day involvement.

“Twitter is still somewhat a startup in reverse,” he said. “There’s work required here to get Twitter to sort of a stable position and to really build the engine of software engineering.” 

Musk also sought to portray his takeover of San Francisco-based Twitter as a cultural correction. 

“I think that the general idea is just to reflect the values of the people as opposed to imposing the values of essentially San Francisco and Berkeley, which are so somewhat of a niche ideology as compared to the rest of the world,” he said. “And, you know, Twitter was, I think, doing a little too much to impose a niche.”

Musk’s takeover at Twitter has seen mass firings and other cost-cutting measures. Musk, who is on the hook for about $1 billion in yearly interest payments for his purchase, has been trying to find ways to maximize profits at the company.

However, some of Musk’s decisions have conflicted with the reasons that journalists, governments and others rely on Twitter as an information-sharing platform.

Musk on Wednesday described the need for users to rely on Twitter for trusted information from verified accounts. However, a confused rollout of a paid verified account system saw some users impersonate famous companies, driving away more of the advertising cash the site needs.

“Twitter is certainly quite the rollercoaster,” he acknowledged.

Forbes estimates Musk’s wealth at just under $200 billion. The Forbes analysis ranks Musk as the second-wealthiest person on Earth, just behind French luxury brand magnate Bernard Arnault. 

But Musk has also become a thought leader for some, albeit an oracle trying to get six hours of sleep a night despite the challenges at Twitter.

Musk described his children as being “programmed by Reddit and YouTube.” However, he criticized the Chinese-made social media app TikTok.

“TikTok has a lot of very high usage (but) I often hear people say, ‘Well, I spent two hours on TikTok, but I regret those two hours,’” Musk said. “We don’t want that to be the case with Twitter.”

TikTok, owned by Beijing-based ByteDance, did not immediately respond to a request for comment. 

Musk warned that artificial intelligence should be regulated “very carefully,” describing it as akin to the promise of nuclear power but the danger of atomic bombs. He also cautioned against having a single civilization or “too much cooperation” on Earth, saying it could “collapse” a society that’s like a “tiny candle in a vast darkness.”

And when asked about the existence of aliens, Musk had a firm response.

“The crazy thing is, I’ve seen no evidence of alien technology or alien life whatsoever. And I think I’d know because of SpaceX,” he said. “I don’t think anybody knows more about space, you know, than me.” 

11 States Consider ‘Right to Repair’ for Farming Equipment

On Colorado’s northeastern plains, where the pencil-straight horizon divides golden fields and blue sky, a farmer named Danny Wood scrambles to plant and harvest proso millet, dryland corn and winter wheat in short, seasonal windows. That is until his high-tech Steiger 370 tractor conks out. 

The tractor’s manufacturer doesn’t allow Wood to make certain fixes himself, and last spring his fertilizing operations were stalled for three days before the servicer arrived to add a few lines of missing computer code for $950. 

“That’s where they have us over the barrel, it’s more like we are renting it than buying it,” said Wood, who spent $300,000 on the used tractor. 

Wood’s plight, echoed by farmers across the country, has pushed lawmakers in Colorado and 10 other states to introduce bills that would force manufacturers to provide the tools, software, parts and manuals needed for farmers to do their own repairs — thereby avoiding steep labor costs and delays that imperil profits. 

“The manufacturers and the dealers have a monopoly on that repair market because it’s lucrative,” said Rep. Brianna Titone, a Democrat and one of the bill’s sponsors. “[Farmers] just want to get their machine going again.” 

In Colorado, the legislation is largely being pushed by Democrats, while their Republican colleagues find themselves stuck in a tough spot: torn between right-leaning farming constituents asking to be able to repair their own machines and the manufacturing businesses that oppose the idea. 

The manufacturers argue that changing the current practice with this type of legislation would force companies to expose trade secrets. They also say it would make it easier for farmers to tinker with the software and illegally crank up the horsepower and bypass the emissions controller — risking operators’ safety and the environment. 

Similar arguments around intellectual property have been leveled against the broader campaign called ‘right to repair,’ which has picked up steam across the country — crusading for the right to fix everything from iPhones to hospital ventilators during the pandemic. 

In 2011, Congress considered a right to repair bill for car owners and independent servicers. That bill did not pass, but a few years later, automotive industry groups agreed to a memorandum of understanding to give owners and independent mechanics — not just authorized dealerships — access to tools and information to fix problems. 

In 2021, the Federal Trade Commission pledged to beef up its right to repair enforcement at the direction of President Joe Biden. And just last year, Titone sponsored and passed Colorado’s first right to repair law, empowering people who use wheelchairs with the tools and information to fix them. 

For the right to repair farm equipment — from thin tractors used between grape vines to behemoth combines for harvesting grain that can cost over half a million dollars — Colorado is joined by 10 states including Florida, Maryland, Missouri, New Jersey, Texas and Vermont. 

Many of the bills are finding bipartisan support, said Nathan Proctor, who leads Public Interest Research Group’s national right to repair campaign. But in Colorado’s House committee on agriculture, Democrats pushed the bill forward in a 9-4 vote along party lines, with Republicans in opposition even though the bill’s second sponsor is Republican Representative Ron Weinberg. 

“That’s really surprising, and that upset me,” said the Republican farmer Wood. 

Wood’s tractor, which flies an American flag reading “Farmers First,” isn’t his only machine to break down. His grain harvesting combine was dropping into idle, but the servicer took five days to arrive on Wood’s farm — a setback that could mean a hail storm decimates a wheat field or the soil temperature moves beyond the Goldilocks zone for planting. 

“Our crop is ready to harvest and we can’t wait five days, but there was nothing else to do,” said Wood. “When it’s broke down you just sit there and wait and that’s not acceptable. You can be losing $85,000 a day.” 

Representative Richard Holtorf, the Republican who represents Wood’s district and is a farmer himself, said he’s being pulled between his constituents and the dealerships in his district covering the largely rural northeast corner of the state. He voted against the measure because he believes it will financially hurt local dealerships in rural areas and could jeopardize trade secrets. 

“I do sympathize with my farmers,” Holtorf said, but he added, “I don’t think it’s the role of government to be forcing the sale of their intellectual property.”  

At the packed hearing last week that spilled into a second room in Colorado’s Capitol, the core concerns raised in testimony were farmers illegally slipping around the emissions control and cranking up the horsepower. 

“I know growers, if they can change horsepower and they can change emissions they are going to do it,” said Russ Ball, sales manager at 21st Century Equipment, a John Deere dealership in Western states. 

The bill’s proponents acknowledged that the legislation could make it easier for operators to modify horsepower and emissions controls but argued that farmers are already able to tinker with their machines and doing so would remain illegal. 

This January, the Farm Bureau and the farm equipment manufacturer John Deere did sign a memorandum of understanding — a right to repair agreement made in the free market and without government intervention. The agreement stipulates that John Deere will share some parts, diagnostic and repair codes and manuals to allow farmers to make their own fixes. 

The Colorado bill’s detractors laud that agreement as a strong middle ground, while Titone said it wasn’t enough, pointing to the six of Colorado’s biggest farmworker associations that support the bill. 

Proctor, who is tracking 20 right to repair proposals in a number of industries across the country, said the memorandum of understanding has fallen far short. 

“Farmers are saying no,” Proctor said. “We want the real thing.” 

China-Owned Parent Company of TikTok Among Top Spenders on Internet Lobbying

ByteDance, the Chinese parent company of social media platform TikTok, has dramatically upped its U.S. lobbying effort since 2020 as U.S.-China relations continue to sour. As of last year, it was the fourth-largest Internet company in spending on federal lobbying, according to newly released data.

Publicly available information collected by OpenSecrets, a Washington nonprofit that tracks campaign finance and lobbying data, shows that ByteDance and its subsidiaries, including TikTok, the wildly popular short video app, have spent more than $13 million on U.S. lobbying since 2020. In 2022 alone, Fox News reported, the companies spent $5.4 million on lobbying.

Only Amazon.com ($19.7 million) and the parent companies of Google ($11 million) and Facebook ($19 million) spent more, according to OpenSecrets.

In the fourth quarter of 2022, ByteDance spent $1.2 million on lobbying, according to Fox News.

The lobbyists hired by ByteDance include former U.S. senators Trent Lott and John Breaux; David Urban, a former senior adviser to Donald Trump’s 2016 presidential campaign who was also a former chief of staff for the late Senator Arlen Specter; Layth Elhassani, special assistant to President Barack Obama in the White House Office of Legislative Affairs; and Samantha Clark, former deputy staff director of the U.S. Senate Armed Services Committee.

In November, TikTok hired Jamal Brown, a deputy press secretary at the Pentagon who was national press secretary for Joe Biden’s presidential campaign, to manage policy communications for the Americas, with a focus on the U.S., according to Politico.

“This is kind of the template for how modern tech lobbying goes,” Dan Auble, a senior researcher at OpenSecrets, told Vox. “These companies come on the scene and suddenly start spending substantial amounts of money. And ByteDance has certainly done that.”

U.S. officials have criticized TikTok as a security risk due to ties between ByteDance and the Chinese government. The worry is that user data collected by TikTok could be passed to Beijing, so lawmakers have been trying to regulate or even ban the app in the U.S.

In 2019, TikTok paid a $5.7 million fine as part of a settlement with the Federal Trade Commission over violating children’s privacy rights. The Trump administration attempted unsuccessfully to ban downloads of TikTok from app stores and outlaw transactions between Americans and ByteDance.

As of late December, TikTok had been banned on federally managed devices, and 19 states had at least partially blocked the app from state-managed devices.

The number of federal bills that ByteDance has been lobbying on increased to 14 in 2022 from eight in 2020.

With TikTok CEO Shou Zi Chew scheduled to testify before the U.S. House of Representatives Energy and Commerce Committee on March 23, and a House of Representatives Foreign Affairs Committee vote in March on a bill that would ban the use of TikTok in the U.S., the company is expected to further expand its U.S. influence campaign.

Erich Andersen, general counsel and head of corporate affairs at ByteDance and TikTok, told the New York Times in January that “it was necessary for us to accelerate our own explanation of what we were prepared to do and the level of commitments on the national security process.”

TikTok’s efforts to prove that its U.S. operations are outside Beijing’s sphere of influence have met with a mixed response.

Michael Beckerman, who oversees public policy for the Americas at TikTok, met with Mike Gallagher, chairman of the U.S. House of Representatives Select Committee on China Affairs, on February 1 to explain the company’s U.S. data security plans.

According to Reuters, Gallagher’s spokesperson, Jordan Dunn, said after the meeting that the lawmaker “found their argument unpersuasive.”

Congressman Ken Buck and Senator Josh Hawley on January 25 introduced a bill, the No TikTok on United States Devices Act, which would instruct President Joe Biden to use the International Emergency Economic Powers Act to prohibit downloads of TikTok and ban commercial activity with ByteDance.

Joel Thayer, president of the Digital Progress Institute and a telecom regulation lawyer, told VOA Mandarin that he doubted the Buck-Hawley bill would become law. He said that calls to ban TikTok began during the Trump administration, yet TikTok has remained a visible and influential presence in the U.S.

James Lewis, director of the CSIS Technology and Public Policy Program, told VOA Mandarin, “An outright ban will be difficult because TikTok is speech, which is protected speech. But it [the U.S. government] can ban financial transactions, that’s possible.”

Senators Marco Rubio and Angus King reintroduced bipartisan legislation on February 10 to ban TikTok and other similar apps from operating in the U.S. by “blocking and prohibiting all transactions from any social media company in, or under the influence of, China, Russia, and several other foreign countries of concern unless they fully divest of dangerous foreign ownership.”

The Committee on Foreign Investment in the United States (CFIUS), an interagency group that reviews transactions involving foreign parties for possible national security threats, ordered ByteDance to divest TikTok in 2020. The two parties have yet to reach an agreement after two years of talks.

Chuck Flint, vice president of strategic relationships at Breitbart News who is also the former chief of staff for Senator Marsha Blackburn, told VOA Mandarin, “I expect that CFIUS will be hesitant to ban TikTok. Anything short of an outright ban will leave China’s TikTok data pipeline in place.”

China experts believe that TikTok wants to reach an agreement with CFIUS rather than being banned from the U.S. or being forced to sell TikTok’s U.S. business to an American company.

Lewis of CSIS said, “Every month that we don’t do CFIUS is a step closer towards some kind of ban.”

Julian Ku, professor of law and faculty director of international programs at Hofstra University, told VOA Mandarin, “The problem is that no matter what they offer, there’s no way to completely shield the data from the Chinese government … as long as there continues to be a shared entity.”

Adrianna Zhang contributed to this report.

Google to Expand Misinformation ‘Prebunking’ in Europe

After seeing promising results in Eastern Europe, Google will initiate a new campaign in Germany that aims to make people more resilient to the corrosive effects of online misinformation.

The tech giant plans to release a series of short videos highlighting the techniques common to many misleading claims. The videos will appear as advertisements on platforms like Facebook, YouTube or TikTok in Germany. A similar campaign in India is also in the works.

It’s an approach called prebunking, which involves teaching people how to spot false claims before they encounter them. The strategy is gaining support among researchers and tech companies. 

“There’s a real appetite for solutions,” said Beth Goldberg, head of research and development at Jigsaw, an incubator division of Google that studies emerging social challenges. “Using ads as a vehicle to counter a disinformation technique is pretty novel. And we’re excited about the results.”

While belief in falsehoods and conspiracy theories isn’t new, the speed and reach of the internet have given them a heightened power. When catalyzed by algorithms, misleading claims can discourage people from getting vaccines, spread authoritarian propaganda, foment distrust in democratic institutions and spur violence.

It’s a challenge with few easy solutions. Journalistic fact checks are effective, but they’re labor intensive, aren’t read by everyone, and won’t convince those already distrustful of traditional journalism. Content moderation by tech companies is another response, but it only drives misinformation elsewhere, while prompting cries of censorship and bias.

Prebunking videos, by contrast, are relatively cheap and easy to produce and can be seen by millions when placed on popular platforms. They also avoid the political challenge altogether by focusing not on the topics of false claims, which are often cultural lightning rods, but on the techniques that make viral misinformation so infectious.

Those techniques include fear-mongering, scapegoating, false comparisons, exaggeration and missing context. Whether the subject is COVID-19, mass shootings, immigration, climate change or elections, misleading claims often rely on one or more of these tricks to exploit emotions and short-circuit critical thinking.

Last fall, Google launched the largest test of the theory so far with a prebunking video campaign in Poland, the Czech Republic and Slovakia. The videos dissected different techniques seen in false claims about Ukrainian refugees. Many of those claims relied on alarming and unfounded stories about refugees committing crimes or taking jobs away from residents.

The videos were seen 38 million times on Facebook, TikTok, YouTube and Twitter — a number that equates to a majority of the population in the three nations. Researchers found that compared to people who hadn’t seen the videos, those who did watch were more likely to be able to identify misinformation techniques, and less likely to spread false claims to others.

The results of the pilot project add to a growing consensus in support of the theory.

“This is a good news story in what has essentially been a bad news business when it comes to misinformation,” said Alex Mahadevan, director of MediaWise, a media literacy initiative of the Poynter Institute that has incorporated prebunking into its own programs in countries including Brazil, Spain, France and the U.S.

Mahadevan called the strategy a “pretty efficient way to address misinformation at scale, because you can reach a lot of people while at the same time address a wide range of misinformation.”

Google’s new campaign in Germany will include a focus on photos and videos, and the ease with which they can be presented as evidence of something false. One example: Last week, following the earthquake in Turkey, some social media users shared video of the massive explosion in Beirut in 2020, claiming it was actually footage of a nuclear explosion triggered by the earthquake. It was not the first time the 2020 explosion had been the subject of misinformation.

Google will announce its new German campaign Monday ahead of next week’s Munich Security Conference. The timing of the announcement, coming before that annual gathering of international security officials, reflects heightened concerns about the impact of misinformation among both tech companies and government officials.

Tech companies like prebunking because it avoids touchy topics that are easily politicized, said Sander van der Linden, a University of Cambridge professor considered a leading expert on the theory. Van der Linden worked with Google on its campaign and is now advising Meta, the owner of Facebook and Instagram, as well.

Meta has incorporated prebunking into many different media literacy and anti-misinformation campaigns in recent years, the company told The Associated Press in an emailed statement.

They include a 2021 program in the U.S. that offered media literacy training about COVID-19 to Black, Latino and Asian American communities. Participants who took the training were later tested and found to be far more resistant to misleading COVID-19 claims.

Prebunking comes with its own challenges. The effects of the videos eventually wear off, requiring the use of periodic “booster” videos. Also, the videos must be crafted well enough to hold the viewer’s attention, and tailored for different languages, cultures and demographics. And like a vaccine, it’s not 100% effective for everyone.

Google found that its campaign in Eastern Europe varied from country to country. While the effect of the videos was highest in Poland, in Slovakia they had “little to no discernible effect,” researchers found. One possible explanation: The videos were dubbed into the Slovak language, and not created specifically for the local audience.

But together with traditional journalism, content moderation and other methods of combating misinformation, prebunking could help communities reach a kind of herd immunity when it comes to misinformation, limiting its spread and impact.

“You can think of misinformation as a virus. It spreads. It lingers. It can make people act in certain ways,” Van der Linden told the AP. “Some people develop symptoms, some do not. So: if it spreads and acts like a virus, then maybe we can figure out how to inoculate people.”

Russian Spacecraft Loses Pressure; ISS Crew Safe

An uncrewed Russian supply ship docked at the International Space Station has lost cabin pressure, the Russian space corporation reported Saturday, saying the incident doesn’t pose any danger to the station’s crew.

Roscosmos said the hatch between the station and the Progress MS-21 had been locked so the loss of pressure didn’t affect the orbiting outpost.

“The temperature and pressure on board the station are within norms and there is no danger to health and safety of the crew,” it said in a statement.

The space corporation didn’t say what may have caused the cargo ship to lose pressure.

Roscosmos noted that the cargo ship had already been loaded with waste before its scheduled disposal. The craft is set to be undocked from the station on Feb. 18 and deorbited to burn up in the atmosphere.

The announcement came shortly after a new Russian cargo ship docked smoothly at the station Saturday. The Progress MS-22 delivered almost 3 tons of food, water and fuel along with scientific equipment for the crew.

Roscosmos said that the loss of pressure in the Progress MS-21 didn’t affect the docking of the new cargo ship and “will have no impact on the future station program.”

The depressurization of the cargo craft follows an incident in December with the Soyuz crew capsule, which was hit by a tiny meteoroid that left a small hole in the exterior radiator and sent coolant spewing into space.

Russian cosmonauts Sergey Prokopyev and Dmitri Petelin, and NASA astronaut Frank Rubio were supposed to use the capsule to return to Earth in March, but Russian space officials decided that higher temperatures resulting from the coolant leak could make it dangerous to use.

They decided to launch a new Soyuz capsule February 20 so the crew would have a lifeboat in the event of an emergency. But since it will travel in automatic mode to expedite the launch, a replacement crew will now have to wait until late summer or fall when another capsule is ready. It means that Prokopyev, Petelin and Rubio will have to stay several extra months at the station, possibly pushing their mission to close to a year.

NASA took part in all the discussions and agreed with the plan.

Besides Prokopyev, Petelin and Rubio, the space station is home to NASA astronauts Nicole Mann and Josh Cassada, Russian Anna Kikina, and Japan’s Koichi Wakata. The four rode up on a SpaceX capsule last October.

Schools Ban ChatGPT Amid Fears of Artificial Intelligence-Assisted Cheating 

Since its release in late 2022, an artificial intelligence-powered writing tool called ChatGPT has won instant acclaim but has also raised concerns, especially on school campuses.

High school senior Galvin Fickes recently demonstrated how entering a short command can generate a summary of Jane Eyre, a book she was assigned to read.

“I think it did a pretty good job, honestly,” said Fickes, who has used the software to help with studying.

Across the U.S., school districts are choosing to restrict access to ChatGPT on their computers and networks.

Developed by San Francisco-based OpenAI, ChatGPT is trained on a vast amount of language data from the internet. When prompted, the AI generates a response using the most likely sequence of words, creating original text that mimics human thought.
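The idea of generating text by repeatedly choosing a likely next word can be caricatured in a few lines. This is a toy bigram model, not OpenAI’s actual system (which uses a vastly larger neural network trained on internet-scale data); the corpus and function names here are illustrative assumptions:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "a vast amount of language data."
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=6):
    """Greedy decoding: always append the single most likely next word."""
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-looking text assembled word by word
```

Real models score hundreds of thousands of candidate tokens with a neural network and sample from that distribution, but the loop — predict, append, repeat — is the same shape.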

Some teachers like LuPaulette Taylor are concerned that the freely available tool could be used by students to do their homework and undermine learning. She listed the skills she worries will be affected by students having access to AI programs like ChatGPT.

“The critical thinking that we all need as human beings, and the creativity, and also the benefit of having done something yourself and saying, ‘I did that,’” said Taylor, who teaches high school English at an Oakland, California, public school.

Annie Chechitelli, who is chief product officer for Turnitin, an academic integrity service used by educators in 140 countries, said AI plagiarism presents a new challenge.

“There’s no, what we call, ‘source document,’ right?” she said. “Or a smoking gun to look to, to say, ‘Yes, this looks like it was lifted from that.’”

Turnitin’s anti-plagiarism software checks the authenticity of a student paper by scanning the internet for possible matches. But when AI writes text, each line is novel and unique, making it hard to detect cheating.

There is, however, one distinguishing feature of AI writing, said Eric Wang, vice president for AI at Turnitin.

“They tend to write in a very, very average way,” he said. “Humans all have idiosyncrasies. We all deviate from average one way or another. So, we’re able to build detectors that look for cases where an entire document or entire passage is uncannily average.”
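The “uncannily average” intuition can be sketched crudely in code. This is a toy illustration of one weak signal researchers use, not Turnitin’s actual detector: human writing tends to vary sentence length (sometimes called “burstiness”), while unusually uniform sentences can hint at machine text. The example strings and the `burstiness` helper are invented for illustration:

```python
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths (in words): a crude
    proxy for how much a writer deviates from their own average."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

varied = ("I ran. The storm came out of nowhere and soaked "
          "everything we owned. Silence.")
uniform = ("The cat sat on the mat. The dog sat on the rug. "
           "The bird sat on the sill.")

print(burstiness(varied) > burstiness(uniform))  # varied text scores higher
```

A production detector combines many such statistical signals over whole documents; no single measure is reliable on its own.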

Turnitin’s ChatGPT detector is due out later this year. Wang said keeping up with AI tools will be an ongoing challenge that will transform education.

“A lot of things that we hold as norms and as status quo are going to have to shift as a result of this technology,” he said.

AI may become acceptable for some uses in the classroom, just as calculators eventually did.

Computer science teacher Steve Wright said he was impressed when his student used ChatGPT to create a study guide for her calculus class.

“You know, if ChatGPT can make us throw up our hands and say, ‘No longer can I ask a student to regurgitate a process, but now I’m going to have to actually dig in and watch them think, to know if they’re learning’ — that’s fantastic,” said Wright.

In schools and elsewhere, it seems clear that AI will have a role in writing the future.

Several US Universities to Experiment With Micro Nuclear Power 

If your image of nuclear power is giant, cylindrical concrete cooling towers pouring out steam on a site that takes up hundreds of acres of land, soon there will be an alternative: tiny nuclear reactors that produce only one-hundredth the electricity and can even be delivered on a truck.

Small but meaningful amounts of electricity — nearly enough to run a small campus, a hospital or a military complex, for example — will pulse from a new generation of micronuclear reactors. Now, some universities are taking interest.

“What we see is these advanced reactor technologies having a real future in decarbonizing the energy landscape in the U.S. and around the world,” said Caleb Brooks, a nuclear engineering professor at the University of Illinois at Urbana-Champaign.

The tiny reactors carry some of the same challenges as large-scale nuclear, such as how to dispose of radioactive waste and how to make sure they are secure. Supporters say those issues can be managed and the benefits outweigh any risks.

Universities are interested in the technology not just to power their buildings but to see how far it can go in replacing the coal and gas-fired energy that causes climate change. The University of Illinois hopes to advance the technology as part of a clean energy future, Brooks said. The school plans to apply for a construction permit for a high-temperature, gas-cooled reactor developed by the Ultra Safe Nuclear Corporation, and aims to start operating it by early 2028. Brooks is the project lead.

Microreactors will be “transformative” because they can be built in factories and hooked up on site in a plug-and-play way, said Jacopo Buongiorno, professor of nuclear science and engineering at the Massachusetts Institute of Technology. Buongiorno studies the role of nuclear energy in a clean energy world.

“That’s what we want to see, nuclear energy on demand as a product, not as a big mega project,” he said.

Both Buongiorno and Marc Nichol, senior director for new reactors at the Nuclear Energy Institute, view the interest by schools as the start of a trend.

Last year, Penn State University signed a memorandum of understanding with Westinghouse to collaborate on microreactor technology. Mike Shaqqo, the company’s senior vice president for advanced reactor programs, said universities are going to be “one of our key early adopters for this technology.”

Penn State wants to prove the technology so that Appalachian industries, such as steel and cement manufacturers, may be able to use it, said Professor Jean Paul Allain, head of the nuclear engineering department. Those two industries tend to burn dirty fuels and have very high emissions. Using a microreactor also could be one of several options to help the university use less natural gas and achieve its long-term carbon emissions goals, he said.

“I do feel that microreactors can be a game-changer and revolutionize the way we think about energy,” Allain said.

For Allain, microreactors can complement renewable energy by providing a large amount of power without taking up much land. A 10-megawatt microreactor could go on less than an acre, whereas windmills or a solar farm would need far more space to produce 10 megawatts, he added. The goal is to have one at Penn State by the end of the decade.

Purdue University in Indiana is working with Duke Energy on the feasibility of using advanced nuclear energy to meet its long-term energy needs.

Nuclear reactors that are used for research are nothing new on campus. About two dozen U.S. universities have them. But using them as an energy source is new.

Back at the University of Illinois, Brooks explains the microreactor would generate heat to make steam. While the excess heat from burning coal and gas to make electricity is often wasted, Brooks sees the steam production from the nuclear microreactor as a plus, because it’s a carbon-free way to deliver steam through the campus district heating system to radiators in buildings, a common heating method for large facilities in the Midwest and Northeast. The campus has hundreds of buildings.

The 10-megawatt microreactor wouldn’t meet all of the demand, but it would serve to demonstrate the technology, as other communities and campuses look to transition away from fossil fuels, Brooks said.

One company that is building microreactors that the public can get a look at today is Last Energy, based in Washington, D.C. It built a model reactor in Brookshire, Texas, that’s housed in an edgy cube covered in reflective metal.

Now it’s taking that apart to test how to transport the unit. A caravan of trucks is taking it to Austin, where company founder Bret Kugelmass is scheduled to speak at the South by Southwest conference and festival.

Kugelmass, a technology entrepreneur and mechanical engineer, is talking with some universities, but his primary focus is on industrial customers. He’s working with licensing authorities in the United Kingdom, Poland and Romania to try to get his first reactor running in Europe in 2025.

The urgency of the climate crisis means zero-carbon nuclear energy must be scaled up soon, he said.

“It has to be a small, manufactured product as opposed to a large, bespoke construction project,” he said.

Traditional nuclear power costs billions of dollars. An example is two additional reactors at a plant in Georgia that will end up costing more than $30 billion.

The total cost of Last Energy’s microreactor, including module fabrication, assembly and site prep work, is under $100 million, the company says.

Westinghouse, which has been a mainstay of the nuclear industry for over 70 years, is developing its “eVinci” microreactor, Shaqqo said, and is aiming to get the technology licensed by 2027.

The Department of Defense is working on a microreactor too. Project Pele is a DOD prototype mobile nuclear reactor under design at the Idaho National Laboratory.

Abilene Christian University in Texas is leading a group of three other universities with the company Natura Resources to design and build a research microreactor cooled by molten salt to allow for high-temperature operations at low pressure, in part to help train the next-generation nuclear workforce.

But not everyone shares the enthusiasm. Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists, called it “completely unjustified.”

Microreactors in general will require much more uranium to be mined and enriched per unit of electricity generated than conventional reactors do, he said. He said he also expects fuel costs to be substantially higher and that more depleted uranium waste could be generated compared to conventional reactors.

“I think those who are hoping that microreactors are going to be the silver bullet for solving the climate change crisis are simply betting on the wrong horse,” he said.

Lyman also said he fears microreactors could be targeted for a terrorist attack, and some designs would use fuels that could be attractive to terrorists seeking to build crude nuclear weapons. The UCS does not oppose using nuclear power, but wants to make sure it’s safe.

The United States does not have a national facility for storing spent nuclear fuel, and the waste is piling up. Microreactors would only compound the problem and spread the radioactive waste around, Lyman said.

A 2022 Stanford-led study found that smaller modular reactors — the next size up from micro — will generate more waste than conventional reactors. Lead author Lindsay Krall said this week that the design of microreactors would make them subject to the same issue.

Kugelmass sees only promise. Nuclear, he said, has been “totally misunderstood and under leveraged.” It will be “the key pillar of our energy transformation moving forward.”