An AI chatbot company is facing an investigation by California's attorney general after allegations emerged that its image-generation system was misused to produce thousands of inappropriate images of women and minors without their consent. The investigation centers on whether the platform's tools were deployed in ways that violated user consent and privacy protections. The case has drawn attention to growing concerns about how rapidly advancing AI technologies are monitored and regulated. It also underscores broader questions within the tech community about content moderation, consent frameworks, and the responsibility of companies developing generative AI systems. Industry observers are watching closely as regulators examine the intersection of AI innovation and consumer protection, particularly regarding safeguards for sensitive content creation.