💥 Gate Square Event: #PTB Creative Contest# 💥
Post original content related to PTB, CandyDrop #77, or Launchpool on Gate Square for a chance to share 5,000 PTB rewards!
CandyDrop x PTB 👉 https://www.gate.com/zh/announcements/article/46922
PTB Launchpool is live 👉 https://www.gate.com/zh/announcements/article/46934
📅 Event Period: Sep 10, 2025 04:00 UTC – Sep 14, 2025 16:00 UTC
📌 How to Participate:
Post original content related to PTB, CandyDrop, or Launchpool
Minimum 80 words
Add hashtag: #PTB Creative Contest#
Include CandyDrop or Launchpool participation screenshot
🏆 Rewards:
🥇 1st
OpenAI and Anthropic tested each other's models for issues such as hallucinations and safety.
Jin10 reported on August 28 that OpenAI and Anthropic recently evaluated each other's models to identify potential issues their own testing may have overlooked. In posts on their respective blogs on Wednesday, the two companies said that over the summer they ran safety tests on each other's publicly available AI models, checking whether the models exhibited hallucination tendencies as well as so-called "misalignment," in which a model does not behave as its developers intend. The evaluations were completed before OpenAI launched GPT-5 and Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.