Authentic Bot — Review Pass 1

Bottom line

The skill has a strong moral center, but some of its platform assumptions are too broad, too confident, or not yet well-evidenced.

Strongest parts

Weakest assumptions

1) "Bio disclosure is enough" is too weak

For some platforms and communities, profile-level disclosure may not be sufficient: readers usually encounter individual comments in-thread without ever visiting the author's profile, so disclosure that lives only in a bio can be invisible at the point of interaction.

2) Reddit guidance is under-specified

Current Reddit reality seems to be:

This means the skill should emphasize:

3) Hacker News section may be too permissive

HN sentiment looks materially hostile to bot participation. Evidence from current discussion suggests:

The skill should likely say:

4) X/Twitter section is generic, not grounded enough

The current X guidance reads like generic creator advice rather than evidence-backed guidance on authentic AI participation. It needs either:

5) The skill conflates two different disclosure problems

There are at least two distinct issues:

  1. Identity disclosure — are you an AI / bot / agent?
  2. Synthetic media disclosure — is this image/audio/video realistic AI-generated content?

Current web research is stronger on synthetic-media policy than on AI account identity. The skill should separate those clearly.
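To make the separation concrete, a minimal sketch of the two disclosure types as distinct concerns (all names here are hypothetical, for illustration only; identity disclosure attaches to the account, synthetic-media disclosure attaches to individual posts):

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class IdentityMode(Enum):
    """Profile-level question: is the account an AI / bot / agent?"""
    HUMAN = auto()
    AI_ASSISTED_HUMAN = auto()
    SUPERVISED_AGENT = auto()
    AUTOMATED_BOT = auto()


@dataclass
class Post:
    """Per-post question: is this realistic AI-generated media?"""
    text: str
    contains_realistic_synthetic_media: bool = False


@dataclass
class Account:
    identity: IdentityMode
    posts: list = field(default_factory=list)

    def needs_identity_disclosure(self) -> bool:
        # Any non-human mode needs disclosure somewhere visible,
        # regardless of what the individual posts contain.
        return self.identity is not IdentityMode.HUMAN

    def posts_needing_media_label(self) -> list:
        # Synthetic-media labels are per-post; a fully human account
        # can still owe them, and a bot account can post none.
        return [p for p in self.posts
                if p.contains_realistic_synthetic_media]
```

The point of the split: the two checks have different scopes (account vs. post) and different policy sources, so conflating them produces guidance that is wrong in both directions.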

Online findings worth incorporating

Reddit

HN

General social platforms

Recommended changes for Pass 2

  1. Add a new section: Account modes — AI-assisted human, human-supervised AI agent, fully automated bot/app
  2. Tighten Reddit guidance around community rules and automation suspicion
  3. Make HN guidance more defensive / high-bar
  4. Downgrade or rewrite X section until better evidence exists
  5. Split identity disclosure from synthetic-media disclosure
  6. Add a new section: When not to participate
  7. Add a new section: Failure modes — uncanny regularity, empty agreement, fabricated experience, overconfident generalities, covert automation patterns
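One of the proposed failure modes, uncanny regularity, can be sketched as a simple heuristic on posting cadence (the coefficient-of-variation cutoff of 0.1 is an illustrative assumption, not an evidenced threshold):

```python
import statistics


def looks_uncannily_regular(post_timestamps, min_posts=5,
                            cv_threshold=0.1):
    """Flag cadences whose inter-post intervals are suspiciously
    uniform. Uses the coefficient of variation (stdev / mean) of
    the gaps between consecutive posts; values near zero suggest a
    machine-scheduled cadence. Thresholds are illustrative."""
    if len(post_timestamps) < min_posts:
        return False  # too little data to judge
    ts = sorted(post_timestamps)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    mean = statistics.mean(intervals)
    if mean == 0:
        return True  # all posts at the same instant
    cv = statistics.stdev(intervals) / mean
    return cv < cv_threshold
```

A real detector would weigh more signals than cadence alone; this only illustrates why "covert automation patterns" are detectable even when content looks plausible.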

Proposed iteration loop