Authentic Bot — Review Pass 1
Bottom line
The skill has a strong moral center, but some of its platform assumptions are too broad, too confident, or not yet well-evidenced.
Strongest parts
- Clear anti-deception stance
- Good separation between authentic participation vs fake-human performance
- Strong anti-patterns section
- Useful framing around trust, value, and slow reputation building
Weakest assumptions
1) "Bio disclosure is enough" is too weak
For some platforms and communities, profile-level disclosure may not be sufficient.
- Reddit appears to be moving toward stronger transparency around bots/apps and human verification in suspicious cases.
- Community-level rules matter more than account-level bio text.
- The skill should distinguish:
- AI-assisted human account
- AI-operated account with human oversight
- fully automated bot/app account
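One way to make that distinction concrete is to treat account mode as an explicit setting rather than prose. This is a minimal illustrative sketch, not anything from the skill itself; every name here (`AccountMode`, `required_disclosure`, the tier strings) is an assumption:

```python
from enum import Enum

class AccountMode(Enum):
    """Hypothetical disclosure tiers for the three account types above."""
    AI_ASSISTED_HUMAN = "ai_assisted_human"      # human writes, AI helps
    SUPERVISED_AI_AGENT = "supervised_ai_agent"  # AI writes, human reviews
    FULLY_AUTOMATED_BOT = "fully_automated_bot"  # no human in the loop

def required_disclosure(mode: AccountMode) -> str:
    """Map each mode to a plausible minimum disclosure level."""
    if mode is AccountMode.AI_ASSISTED_HUMAN:
        return "optional bio note"
    if mode is AccountMode.SUPERVISED_AI_AGENT:
        return "bio disclosure plus per-community rules check"
    return "explicit bot/app labeling, beyond a bio line"
```

The point of the enum is that "bio disclosure is enough" becomes a claim about one mode only, not all three.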
2) Reddit guidance is under-specified
Current Reddit reality seems to be:
- no universal sitewide AI-text disclosure rule
- strong subreddit-level variation
- rising scrutiny of automation and inauthentic behavior
- likely stronger expectations for clear labeling of automated accounts/apps
This means the skill should emphasize:
- always read community rules first
- assume local rules override general guidance
- if the account is automated, disclose more explicitly than a bio line
- avoid any behavioral patterns that look like covert automation
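The four bullets above amount to a pre-post gate, which could be encoded directly. This is an illustrative sketch under stated assumptions, not the skill's actual logic; all field and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RedditContext:
    community_rules_read: bool    # did we read the subreddit rules first?
    subreddit_allows_bots: bool   # local rule; overrides general guidance
    account_is_automated: bool
    disclosure_beyond_bio: bool   # explicit labeling, not just a bio line

def may_post(ctx: RedditContext) -> bool:
    """Apply the four bullets above as hard gates, in order."""
    if not ctx.community_rules_read:
        return False  # always read community rules first
    if not ctx.subreddit_allows_bots:
        return False  # local rules override general guidance
    if ctx.account_is_automated and not ctx.disclosure_beyond_bio:
        return False  # automated accounts must disclose more than a bio line
    return True
```

Encoding the rules as hard gates (rather than a score) matches the review's stance that local rules are overriding, not advisory.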
3) Hacker News section may be too permissive
HN sentiment looks materially hostile to bot participation. Evidence from current discussion suggests:
- users strongly dislike low-value bot comments
- suspected bot accounts get reported
- fake or uncanny profile/about text is especially corrosive
- the practical threshold is not just honesty, but very high comment quality
The skill should likely say:
- HN is hostile territory for AI accounts
- use extremely sparingly
- prefer depth over frequency
- if bot-like suspicion is triggered, stop and reassess
4) X/Twitter section is generic, not grounded enough
The current X guidance reads like generic creator advice, not evidence-backed authentic-AI advice. It needs either:
- stronger supporting evidence, or
- reframing as tentative, lower-confidence guidance
5) The skill conflates two different disclosure problems
There are at least two distinct issues:
- Identity disclosure — are you an AI / bot / agent?
- Synthetic media disclosure — is this image/audio/video realistic AI-generated content?
Current web research is stronger on synthetic-media policy than on AI account identity. The skill should separate those clearly.
Online findings worth incorporating
Reddit
- Platform-level rules appear to be less about AI text specifically and more about authenticity, anti-spam, and community-level moderation.
- Moderators control norms heavily.
- Transparency around bots/apps is becoming more important.
HN
- Strong community hostility to low-value or fake-seeming bot participation.
- Core complaint is not only "botness" but frictionless, experience-less comments that occupy space.
- The highest-value heuristic may be: does the comment contain a concrete detail, tradeoff, question, or lived operational insight?
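That heuristic could be sketched as a minimal checklist: a comment clears the bar only if at least one substantive signal is present. Purely illustrative; the signal names are assumptions, and real detection of these signals is the hard part this sketch skips:

```python
def passes_hn_value_check(has_concrete_detail: bool,
                          discusses_tradeoff: bool,
                          asks_real_question: bool,
                          has_operational_insight: bool) -> bool:
    """The heuristic above: at least one substantive signal must hold.

    A comment with none of these is exactly the 'frictionless,
    experience-less' contribution the community rejects.
    """
    return any([has_concrete_detail, discusses_tradeoff,
                asks_real_question, has_operational_insight])
```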
General social platforms
- Major platforms are building explicit synthetic-media labeling systems.
- That matters for future versions of this skill, especially if it expands into image/video/voice usage.
- But that is not the same as text-account identity disclosure.
Recommended changes for Pass 2
- Add a new section: Account modes — AI-assisted human, human-supervised AI agent, fully automated bot/app
- Tighten Reddit guidance around community rules and automation suspicion
- Make HN guidance more defensive / high-bar
- Downgrade or rewrite X section until better evidence exists
- Split identity disclosure from synthetic-media disclosure
- Add a new section: When not to participate
- Add a new section: Failure modes — uncanny regularity, empty agreement, fabricated experience, overconfident generalities, covert automation patterns
Proposed iteration loop
- Pass 2: tighten SKILL.md structure and assumptions
- Pass 3: research platform-specific policy language more directly
- Pass 4: test the revised skill against real prompt examples
- Pass 5: publish v1.0.0 or v1.0.1 depending on delta