Moderation principles

The platform’s ethos is curiosity with guardrails. Moderation is those guardrails — explicit, inspectable, and appealable.

Protect people & sites

No doxxing, stalking, threats, or harassment. No instructions that materially increase the risk of looting, vandalism, or harm. “It’s public already” is not a free pass.

Protect evidence integrity

Claims are allowed; fabricated evidence is not. We label uncertainty, demand provenance, and require citations for “this proves X.” Bad reasoning is permitted; bad faith is not.

Make decisions explainable

When we take action, we provide a plain-language reason, the rule invoked, and how to appeal. If we can’t explain it, we shouldn’t enforce it.

What we moderate

  • Content: posts, evidence uploads, citations, comments, profiles, collections, and claim threads.
  • Conduct: harassment, brigading, impersonation, spam, manipulation of voting/reputation.
  • Safety metadata: location precision, access routes, and any “how-to” that enables harm.
Note: this is not “thought policing.” You can argue weird hypotheses all day — you just can’t threaten people or forge receipts.

What we don’t moderate

  • Disagreement: “That’s wrong” is allowed. “You’re subhuman” is not.
  • Speculation: labeled speculation is fine; presenting it as verified fact without evidence is not.
  • Unpopular views: being unpopular is not a violation; being abusive is.
A platform without disagreement is a museum of consensus. We’re building a lab.

Trust roles & who can do what

Moderation is easier when responsibility is earned. Trust roles limit the blast radius of new accounts while keeping contribution open.

Explorer

Can browse, save sites, follow tags, and report issues. Limited posting until basic trust is earned (email verified + light friction).

Contributor

Can submit evidence, propose edits, and join claim threads. Contributions are reviewed faster as reputation grows.

Curator / Steward

Can resolve basic disputes, lock heated threads, request provenance, and escalate cases. High-trust actions are logged publicly.
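The three tiers above can be sketched as a simple permissions map. This is a minimal illustration, not the platform's actual access-control code; the role and action names are assumptions invented for the example:

```python
# Hypothetical sketch of the trust-role ladder described above.
# Role and permission identifiers are illustrative, not real platform names.
ROLE_PERMISSIONS = {
    "explorer":    {"browse", "save_site", "follow_tag", "report"},
    "contributor": {"browse", "save_site", "follow_tag", "report",
                    "submit_evidence", "propose_edit", "join_claim_thread"},
    "curator":     {"browse", "save_site", "follow_tag", "report",
                    "submit_evidence", "propose_edit", "join_claim_thread",
                    "resolve_dispute", "lock_thread", "request_provenance",
                    "escalate_case"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given trust role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Each tier is a superset of the one below it, which is what makes "most users never notice roles" possible: gaining trust only ever adds capabilities.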

Why roles exist

Research communities attract both brilliant obsessives and chaos goblins. Trust roles keep the door open without leaving the lab unattended. Most users never notice roles — they just experience calmer threads and better evidence hygiene.

What earns trust

  • Accurate citations and clean provenance.
  • Good-faith engagement (steelman before you attack).
  • Helpful edits and de-escalation behavior.
  • Consistency over time (not one viral post).

Enforcement ladder

We use the least-restrictive action that reasonably reduces harm — except where safety demands immediate removal.

Policy: least force necessary, with clear reasons and appeal routes. IDs: MOD-YYYY-###. Escalation: repeated abuse climbs the ladder.
  • Level 0
    Label / Context

    Soft intervention: labels, prompts, and citations

    Used when content is allowed but needs clarity: “Speculation,” “Needs citation,” “Provenance unclear,” “Location precision reduced.” This is the default for messy but non-abusive research.

  • Level 1
    Warning

    Warning + education

    Used for first-time rule breaks that don’t create immediate danger: mild harassment, repeated low-effort spam, sloppy attribution. Warnings include the rule and what to do differently.

  • Level 2
    Limited reach

    Rate limits, thread cooldowns, and temporary locks

    Used when behavior shows escalation: dogpiling, brigading signals, repeated insults, or “reply storms.” Threads can be slowed, temporarily locked, or moved to “structured mode.”

  • Level 3
    Removal

    Content removal

    Used for doxxing, threats, hate, impersonation, fraud, deliberate evidence fabrication, and heritage endangerment details. Removal includes a reason + appeal route.

  • Level 4
    Account action

    Suspension / ban

    Used for repeated abuse, evasion, coordinated manipulation, serious threats, or sustained deception. Permanent bans are reserved for patterns, not misunderstandings.
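The ladder logic above ("least force necessary, but safety categories skip ahead, and repeats climb") can be sketched as a small selection function. A rough illustration under assumed rules, not the real enforcement engine; the one-level-per-prior escalation rate is an assumption:

```python
# Hypothetical sketch of the five-level enforcement ladder.
LEVELS = {
    0: "Label / Context",
    1: "Warning",
    2: "Limited reach",
    3: "Content removal",
    4: "Suspension / ban",
}

def choose_level(base_level: int, prior_violations: int,
                 immediate: bool = False) -> int:
    """Pick the least restrictive level that addresses the harm.

    `immediate` models the categories that skip straight to Level 3/4;
    escalation here climbs one level per prior violation (an assumption).
    """
    floor = 3 if immediate else 0
    return min(4, max(floor, base_level + prior_violations))
```

For example, a first-time mild-harassment case starts at the warning level, while the same behavior after two priors reaches Level 3; doxxing starts at Level 3 regardless of history.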

Immediate removals

Some categories skip straight to Level 3/4: credible threats, doxxing, sexual exploitation content, explicit looting instructions, and repeat impersonation/fraud. Speed matters.

Evidence disputes ≠ policy violations

“This scan is misinterpreted” is a normal scientific argument. We’ll push for provenance and better citations, but we don’t remove content just because it’s controversial.

Rule categories

These are the buckets we use for decisions (so “why was this moderated?” has a real answer).

Safety & harm

Threats, harassment, stalking, doxxing, and “go bother this person” calls-to-action. Also: instructions enabling physical harm.

Heritage endangerment

Looting routes, precise hidden locations, access instructions to restricted areas, or encouragement to damage/deface protected sites.

Fraud & fabrication

Forged documents, fake citations, manipulated media presented as authentic, impersonation of experts/institutions, coordinated deception.

Hate & dehumanization

Attacks on protected groups, slurs, dehumanizing language, and content that celebrates violence against groups or individuals.

Spam & manipulation

Repeated promotional dumping, link farms, bot-like behavior, vote/reputation gaming, brigading, and coordinated harassment.

Privacy & sensitive data

Personal data posted without consent, private comms dumps, and sensitive details that aren’t necessary for evaluating a claim.

AI-generated media

AI imagery is allowed if it is labeled and not presented as primary evidence. “AI concept art” is not “site documentation.” See AI Policy.

Copyright & rights

We prefer linking to sources over rehosting. If a takedown request is valid, we remove and log. If it’s a dispute, we follow a process. (And we still require citations—because citations are not copying.)

Report & appeal

Reporting is how the community protects itself. Appeals are how we avoid becoming an unaccountable priesthood.

Report content or behavior

Emergency note: this is not an emergency service. If there is immediate danger, contact local emergency services first.

Appeal a moderation decision

Appeals get a second look. If policy changed between action and appeal, we evaluate under the rule set that applied at the time (and note it).

Founding Access

Help us tune the moderation system before launch: evidence labels, appeal UX, heritage-protection defaults, and transparency logs. Founding Access ships in waves.

No spam. No selling your data. See Privacy.

Moderation log (public receipts)

This transparency log records what changed, why, and which rule was invoked.

Stable IDs: MOD-YYYY-###. Logged: Jan 8, 2026
  • Jan 8, 2026
    Label

    MOD-2026-001 — Added “Needs provenance” label to an evidence upload

    Action: Label only. Reason: missing source metadata for a scan. User prompted to add provenance and citation. Rule: Evidence integrity → provenance required for “primary evidence” status.

  • Jan 8, 2026
    Cooldown

    MOD-2026-002 — Slowed a claim thread (cooldown mode)

    Action: Thread cooldown. Reason: rapid-fire replies + escalating insults. Users instructed to steelman and cite before rebuttal. Rule: Conduct → harassment / incivility escalation.

  • Jan 8, 2026
    Removal

    MOD-2026-003 — Removed doxxing details from a comment

    Action: Partial redaction + removal of personal identifiers. Reason: posted private contact details without consent. Rule: Privacy → personal data / doxxing.

  • Jan 8, 2026
    Heritage

    MOD-2026-004 — Reduced location precision for a sensitive site

    Action: Location precision reduced + access route removed. Reason: content materially increased looting risk. Rule: Heritage protection → actionable access instructions prohibited.

  • Jan 8, 2026
    Reversed

    MOD-2026-005 — Appeal reversal (label changed, content restored)

    Action: Restored content, kept “Speculation” label. Reason: initial removal misclassified a hypothetical as “instruction.” Rule: Appeals → second review; resolved via less-restrictive action.
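Each entry above follows the same shape: a stable MOD-YYYY-### ID, an action, a reason, and the rule invoked. That record could be modeled like this; field names here are illustrative assumptions, not the platform's actual schema:

```python
import re
from dataclasses import dataclass

# Stable-ID pattern used by the log entries above: MOD-YYYY-###.
MOD_ID = re.compile(r"^MOD-\d{4}-\d{3}$")

@dataclass
class LogEntry:
    """One public moderation-log record (field names are illustrative)."""
    mod_id: str  # e.g. "MOD-2026-001"
    action: str  # e.g. "Label only", "Thread cooldown"
    reason: str  # plain-language explanation
    rule: str    # rule category invoked

    def __post_init__(self) -> None:
        # A malformed ID means the entry can't be cited or appealed by ID.
        if not MOD_ID.match(self.mod_id):
            raise ValueError(f"bad stable ID: {self.mod_id}")
```

Keeping the ID format machine-checkable is what lets appeals and transparency reports reference specific actions unambiguously.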

Transparency cadence

  • Public log entries for high-impact actions (removals, suspensions, thread locks, precision reductions).
  • Aggregated stats (“how many removals for X”) as a periodic transparency report.
  • Clear “statement of reasons” for affected users + complaint/appeal route.

Coordination rules

  • One voice: moderators don’t freestyle policy in comments; we link to the rule + ID.
  • Bridge back to process: “Here’s the rule, here’s the appeal, here’s how to fix it.”
  • No public pile-ons: enforcement happens quietly; explanations are public but not humiliating.

How this connects to the rest of the trust stack

Moderation is one leg of the table. The other legs: evidence standards, credibility policy, disclosures, corrections, privacy, and AI policy.

Evidence Standards

What counts as evidence, how provenance works, and how we label certainty and uncertainty.

Corrections

How we change our minds in public: corrections log, receipts, and what happens after an error is found.

Disclosures

How we label money, partnerships, and material relationships — so “trust” isn’t vibes-based.
