Jensen Huang: Tech leaders should avoid creating AI panic; this technology is "too important"


Jensen Huang Urges Tech Leaders to Avoid Spreading AI Panic

On March 19th, local time, at NVIDIA's GTC annual developer conference, CEO Jensen Huang took part in a conversation with the All-In podcast. Addressing the dispute between AI startup Anthropic and the U.S. government over safety issues, Huang said technology leaders need to be careful in how they speak and should avoid stoking panic about AI: "Reminding people of the potential risks of this technology is a good thing. Reminding is good, but fear-mongering is not, because this technology is too important to us."

When asked what advice he would give Anthropic's board of directors, Huang first praised Anthropic's technology, then added that company executives should acknowledge the uncertainty of the future: "AI is not a biological entity, not an alien life form, and it has no consciousness. It is just computer software. Extreme, even catastrophic claims—especially those lacking any supporting evidence—could be more harmful than people imagine."

Huang further emphasized: "We are technology leaders; no one used to listen to us. But now technology has become deeply woven into the social fabric—a vital industry and a matter of national security."

At the same time, Huang expressed confidence in Anthropic's financial prospects, saying its revenue could surpass $1 trillion by 2030 and describing Anthropic CEO Dario Amodei's own predictions as too conservative. In a February interview, Amodei said that if the company's revenue forecast were off by 20%, no deal could save it from bankruptcy; for that reason, Anthropic has committed to investing "hundreds of billions of dollars" in computing resources rather than trillions.

Huang also reiterated that AI will change every job and create many new ones. His advice to young students: "Whatever you're studying, make sure you become very, very proficient in using AI."

Anthropic, the developer of the Claude family of models, is a key NVIDIA customer. In March, foreign media reported that Anthropic's annualized revenue had exceeded $19 billion, up from $9 billion at the end of last year. As of mid-February, the company's valuation had reached $380 billion.

Since mid-February, Anthropic has been locked in a dispute with the U.S. government over safety concerns. Amodei refused the Pentagon's request to modify AI contracts by removing safety restrictions such as "prohibiting large-scale domestic surveillance" and "banning fully autonomous lethal weapons."

According to a Global Times report on March 10th, after being labeled a "supply chain risk" by the U.S. government, Anthropic filed a lawsuit against the Pentagon and other federal agencies on March 9th. The company sought no monetary damages; instead, it asked the court to rule that the Trump administration's actions exceeded presidential authority and were unconstitutional, and to revoke the government's designation of the company as a "supply chain risk."
