Elon Musk raises $6 billion for Grok AI startup xAI to purchase Nvidia chips

Elon Musk, Tesla CEO and prominent DOGE whale, is raising $6 billion for xAI, the AI startup under his umbrella, at a valuation of $50 billion. According to CNBC, the funds are expected to arrive next week, with $5 billion coming from a Middle Eastern sovereign wealth fund and $1 billion from other investors. The money will be used to purchase 100,000 Nvidia chips.

Elon Musk's AI startup xAI chose Memphis, Tennessee as the site of a supercomputer named Colossus, in which Nvidia's AI chips play the key role. The system currently comprises 100,000 Nvidia H100 GPUs, each costing about $30,000. Musk intends to keep using Nvidia chips and upgrade the facility with Nvidia's H200 GPUs, which offer greater memory capacity but cost nearly $40,000 per unit. Next summer, Musk plans to purchase an additional 300,000 Nvidia Blackwell B200 GPUs.

In November last year, xAI launched the Grok chatbot to compete with OpenAI's ChatGPT in the AI market. Musk was an early investor in OpenAI but left after conflicts with Sam Altman, and the two are now competitors in AI.

About xAI's Grok

Grok is an artificial intelligence developed by xAI that aims to provide useful, truthful answers to a wide range of questions. The concept is inspired by The Hitchhiker's Guide to the Galaxy and by J.A.R.V.I.S., the AI system created by Tony Stark in Iron Man, and its purpose is to help users understand the science of the universe and answer almost any question. By its own account (this description comes from Grok itself), Grok provides information honestly and without judgment, focuses on understanding and detailed explanation, and can even look at human problems from a humorous or outside perspective.

How xAI builds a supercomputer with Nvidia chips

Using H100 GPUs, xAI built the Colossus supercomputer for the purpose of AI training. Nvidia's GPUs give xAI not only raw computational power but also infrastructure specialized for AI and machine learning, enabling xAI to push the boundaries of AI research and development.

Six application scenarios of Nvidia x Grok

Massive parallel processing: Nvidia's H100 GPUs are designed for parallel processing, which is crucial for the complex calculations required to train AI models. These chips can execute thousands of operations simultaneously, accelerating the training of Grok's large language model (LLM), as the sketch below illustrates.
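
To make this concrete, here is a minimal PyTorch sketch (illustrative only; xAI's training code is not public). A single batched matrix multiply, the core operation of LLM training, dispatches billions of multiply-adds that the GPU executes in parallel:

```python
# Minimal sketch of GPU parallelism (not xAI's actual code).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-ins for a transformer layer's activations and weight matrix.
acts = torch.randn(8, 1024, 4096, device=device)   # (batch, seq, hidden)
weight = torch.randn(4096, 4096, device=device)

# One call; the GPU runs the billions of multiply-adds in parallel.
out = acts @ weight
print(out.shape)  # torch.Size([8, 1024, 4096])
```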

Scalability: Colossus was initially built from 100,000 Nvidia H100 GPUs, letting xAI handle computational workloads far beyond typical CPU capabilities, and xAI plans to double the count to 200,000 GPUs (including H200 chips).
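
Scaling a training job across many GPUs typically relies on distributed data parallelism, where each process drives one GPU and gradients are synchronized across all of them. The skeleton below is the generic PyTorch pattern, shown purely for illustration (xAI's actual training stack is not public):

```python
# Generic multi-GPU data-parallel skeleton (illustrative; not xAI's code).
# Launch with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = torch.nn.Linear(4096, 4096).to(device)
    # DDP all-reduces gradients across every GPU after each backward pass.
    model = DDP(model, device_ids=[local_rank])

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(10):
        x = torch.randn(32, 4096, device=device)
        loss = model(x).pow(2).mean()   # toy objective
        opt.zero_grad()
        loss.backward()                 # gradient all-reduce happens here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```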

Energy efficiency: Large-scale operations have high energy demands, but compared with traditional CPUs, Nvidia's GPUs deliver more useful computation per watt for xAI's workloads.

Network infrastructure: Colossus uses Nvidia's Spectrum-X Ethernet networking platform, which supports multi-tenancy while providing low-latency, high-bandwidth connections between nodes.
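
The fabric matters because gradient exchanges between nodes traverse the network on every training step. As a rough, hedged illustration (generic NCCL settings, not xAI's actual Colossus configuration; the interface name, host, and port are placeholders), multi-node jobs are typically pointed at the high-bandwidth interface like this:

```python
# Illustrative only: generic NCCL knobs for multi-node training over a
# high-bandwidth Ethernet fabric (not xAI's actual configuration).
import os
import torch.distributed as dist

# Tell NCCL which network interface carries inter-node traffic
# ("eth0" is a placeholder for the cluster's data-plane NIC).
os.environ["NCCL_SOCKET_IFNAME"] = "eth0"
# On RDMA-capable Ethernet fabrics, leaving IB/RoCE transports enabled
# lets NCCL use them; clusters tune such variables per deployment.
os.environ.setdefault("NCCL_IB_DISABLE", "0")

# Rendezvous at rank 0's address (placeholder host/port).
dist.init_process_group(
    backend="nccl",
    init_method="tcp://10.0.0.1:29500",
    world_size=int(os.environ["WORLD_SIZE"]),
    rank=int(os.environ["RANK"]),
)
```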

Advanced features for AI: Nvidia's H100 and H200 chips come with features tailored for AI, including high-bandwidth memory (HBM3 and HBM3e respectively), which reduces data transfer time between memory and the GPU's compute cores. This matters for AI workloads, where data movement can become the bottleneck.
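
The bottleneck is easy to observe by timing data movement against compute. The snippet below is a generic measurement sketch (not a vendor benchmark) that uses CUDA events to compare a host-to-device copy with a matrix multiply that stays entirely in GPU memory:

```python
# Illustrative timing sketch: data movement vs. compute on a GPU.
import torch

assert torch.cuda.is_available()
x_cpu = torch.randn(8192, 8192)             # ~256 MB of float32 in host RAM
w = torch.randn(8192, 8192, device="cuda")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

# Time copying the tensor from host RAM into GPU memory (HBM).
start.record()
x = x_cpu.to("cuda")
end.record()
torch.cuda.synchronize()
copy_ms = start.elapsed_time(end)

# Time a matrix multiply that never leaves GPU memory.
start.record()
y = x @ w
end.record()
torch.cuda.synchronize()
compute_ms = start.elapsed_time(end)

print(f"host->device copy: {copy_ms:.1f} ms, matmul in HBM: {compute_ms:.1f} ms")
```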

Software support: Nvidia provides the CUDA parallel computing platform and programming model, which xAI uses to develop its AI algorithms and applications.
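
CUDA exposes the GPU through a C/C++ toolkit plus bindings in many languages. As a small hedged example (written via Numba's CUDA bindings from Python, one common route; xAI's internal stack is not public), a custom kernel runs one thread per array element:

```python
# A minimal custom CUDA kernel, written from Python via Numba (illustrative).
import numpy as np
from numba import cuda

@cuda.jit
def scaled_add(a, b, out, alpha):
    i = cuda.grid(1)        # global 1-D thread index
    if i < out.size:        # guard: the grid may overshoot the array
        out[i] = alpha * a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

d_a, d_b = cuda.to_device(a), cuda.to_device(b)   # host -> device copies
d_out = cuda.device_array_like(d_a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
scaled_add[blocks, threads_per_block](d_a, d_b, d_out, np.float32(2.0))

out = d_out.copy_to_host()
print(np.allclose(out, 2.0 * a + b))  # True
```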

This article, "Musk Raises $6 Billion for AI Startup xAI to Purchase Nvidia Chips," first appeared on Chain News ABMedia.
