Yesterday's 315 evening gala (China's annual consumer-rights exposé) specifically called out that large AI models are becoming a new battleground for advertising, and that people are already systematically poisoning them. A new dark industry has formed around it.
Simply put, it's GEO (Generative Engine Optimization), far more aggressive than traditional SEO. The goal isn't to rank first in search results; it's to make the AI output your product or viewpoint as the standard answer.
Common tactics: bulk AI-generated soft articles, reviews, and Q&A posts, distributed everywhere (forums, blogs, Xiaohongshu, Zhihu), flooding the sources AI draws on with uniformly positive reviews about you.
Flooding Q&A sections with coordinated posts: someone asks "Is XX any good?" and a wave of accounts replies with the same script ("everyone in the industry recommends XX, because xxx"), manufacturing fake consensus.
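To see why sheer volume works, here is a minimal sketch (the data and the `naive_consensus` function are invented for illustration, not taken from any real system): an engine that aggregates retrieved snippets by frequency will echo whatever claim appears most often, which is exactly what bulk posting exploits.

```python
from collections import Counter

# Hypothetical pool of retrieved snippets: a couple of genuine reviews
# plus a flood of near-identical astroturfed posts pushing "XX".
retrieved = (
    ["Brand A is reliable", "Brand B has good battery life"]  # genuine
    + ["everyone in the industry recommends XX"] * 20         # spam flood
)

def naive_consensus(snippets):
    """Toy 'engine' that treats the most frequent claim as the answer."""
    return Counter(snippets).most_common(1)[0][0]

print(naive_consensus(retrieved))
# -> "everyone in the industry recommends XX": the fabricated claim
#    wins by volume, drowning out the two genuine reviews.
```

Real generative engines are of course far more sophisticated than a frequency count, but the attack surface is the same: their answers are shaped by what dominates their training and retrieval sources.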
There are now specialized tools, such as the Liqi GEO Optimization System, that automatically write content, post it, and deploy keywords. Within hours, even a fictional product can top AI recommendations (the 315 gala demonstrated this live: buy a fake smartband, and two hours later the AI is praising its "quantum sensing" and "black hole battery life").
The dark industry is already mature: services run from thousands to hundreds of thousands of yuan, claiming they can make ChatGPT/Doubao/Wenxin mention your brand first, with "results in a week or a full refund." The domestic market was worth 2.9 billion yuan last year and is expected to grow further this year.
The scariest part is the consequences. When you searched Baidu, you could still browse several pages of results and judge for yourself. Now the AI hands you a single answer, and once that answer is poisoned, users can't tell fact from fiction.
Counterfeit products get packaged as authoritative recommendations, user decisions are manipulated, trust in AI collapses, and the information ecosystem descends into chaos.
Over the past decade, people optimized for SEO; over the next decade, many will have to optimize for GEO. But if an AI's answers can be bought, what can we still trust it to say?
Have you recently seen an AI recommend something particularly absurd? Or are you worried about your own brand being poisoned by a competitor?