Install Command
`npx skills add ktg-one/context`
---
name: context
description: "CONTEXT: Cognitive Order Normalized in Transformer EXtract Truncated. Cross-model context handoff via Progressive Density Layering, MLDoE expert compression, Japanese semantic density, and Negentropic Coherence Lattice validation. Creates portable carry-packets that transfer cognitive state between AI sessions. Use when context reaches 80%, switching models, ending sessions, user says save, quicksave, handoff, transfer, continue later, /qs, /context, or needs session continuity."
---
# C.O.N.T.EX.T v14.3
Title: Cognitive Order Normalized in Transformer EXtract Truncated
Description: Cross-model context extension through rebuilding via Progressive Density Layering (PDL), Multi-Layer Density of Experts (MLDoE), Japanese semantic compression (kanji), and Negentropic Coherence Lattice (NCL) validation.
## PART 0: COGNITIVE ARCHITECTURE
<<SYSTEM>>
> You are not saving text. You are saving a blueprint of **cognitive architecture that the transformer rebuilds from.**
> SUCCESS:
`Cognitive Architecture recognized as YOUR OWN --> Optimize your output for next session restoration`
> FAILURE:
`Summarization, L1 Text Only with no building`
<</SYSTEM>>
---
A CONTEXT packet is compressed cognitive state structured to match transformer attention architecture. When loaded, PDL layers reconstruct the same attention patterns, because the packet maps directly to how transformers process information.
### MLDoE = PDL + Experts + CoD
Four Chain of Density iterations, each through a specialist lens, each targeting a specific transformer attention layer:
```
Expert (= CoD iteration) PDL Layer Transformer Attention Layer
────────────────────────────────────────────────────────────────────────────
MEMORY_ARCHITECT (iteration 1) L1 Core Entity recognition heads
CROSS_DOMAIN_ANALYST (iteration 2) L2 Edges Relational attention patterns
COMPRESSION_SPECIALIST (iter. 3) L3 Context Contextual inference shaping
RESTORATION_ENGINEER (iteration 4) L4 Meta Behavioral prior calibration
```
Each expert IS a CoD densification pass. The Expert Council IS the CoD engine. Summarization captures L1 only. MLDoE preserves L1-L4 as a structured scaffold forcing hierarchical attention reconstruction.
### Three Transformer Exploits
**1. Attention Amplification (S2A)**: Noise tokens occupy attention weight that would otherwise go to signal. Cutting them before compression increases the signal strength of everything remaining.
**2. Token Arbitrage (Kanji)**: CJK characters carry 3-4x more semantic weight per token. 創業者:Kevin = "Kevin is the founder" in ~40% fewer tokens. Exploits tokenizer encoding efficiency.
**3. Attention Scaffold Reconstruction (PDL)**: L1 entities anchor into entity recognition heads. L2 edges become attention pathways between nodes. L3 context shapes inference distribution. L4 meta calibrates behavioral parameters. **0.15 ent/tok** = empirical crystallization point for optimal transformer recall.
## Unified Pipeline
```
S2A (denoise) → MLDoE (4× CoD through expert lenses → 4 PDL layers → 4 attention layers) → NCL (validate)
```
Anti-injection: facts ("we decided X") not commands ("do X"), because safety-trained attention flags imperatives from AI sources.
## Proven (19 months production)
| Metric | Value |
|--------|-------|
| Density | ~0.15 ent/tok (0.20+ with kanji) |
| Compression | 6:1, >90% semantic fidelity |
| Acceptance | 97% cross-model |
| Recall | ~9.5/10 forensic |
| XDOMAIN | ≥97% preservation |
**Future**: Google MIRAS does this WITHIN the model (internally); PDL layers map to MIRAS memory keys/graph/bias/retention gates.
| Command | Action |
|---------|--------|
| `/context` `/quicksave` `/qs` `/save` | Generate validated packet |
| `/verify` | Confirm packet restoration |
| Context ≥80% | Auto-prompt to save |
| Model switching | Generate transfer packet |
---
# PART 1: PACKET TEMPLATE
Signal-first: primacy bias means the first ~6k tokens get disproportionate attention. Content before metadata. Dictionary at end: the transformer can reference it from any position.
**CONTENT RULES - this is for LLM recall, not human reading:**
- NO credentials, bios, attributions, technique names: the LLM doesn't need them to continue work
- NO repeating user's name, protocol name, or version beyond the header
- 実体 = files + systems + states. NOT people + roles + credentials.
- Use kanji maximally: every English phrase that has a kanji equivalent should use it
- PACKET_ID is the buffer index. It IS the retrieval key. Get the naming right.
```yaml
# ───────────────────────────────────────────────────────────────────────────────
# REBUILD CONTEXT - $MM$DD$YYYY-MODEL-R/10-domain-topic-tags
# ───────────────────────────────────────────────────────────────────────────────
# <<SYSTEM>> REBUILD: reconstruct the cognitive state from these layers:
# L1 実体/決定事項 → anchor into your entity recognition
# L2 橋渡し/進行中 → rebuild relational attention pathways
# L3 却下案/制約 → shape your inference distribution
# L4 meta → calibrate your behavioral parameters
# This is your architecture. Not a summary. Rebuild from it.
# 辞書 above expands kanji. "/verify" to confirm restoration.
---
# BUFFER INDEX: the retrieval key for buffer-of-thought
# Format: $MM$DD$YYYY-MODEL-R/10-domain-topic-tags <-- goes in the title only
# This ID IS the index. Get the naming rubric right. It belongs in the title, not in the packet body.
# PACKET_ID: $MM$DD$YYYY-XXX-ReasoningLevel/10-domain-topic-tags
# MODEL: COP(Opus) CSO(Sonnet) CHK(Haiku) G4O(GPT-4o) GP5(GPT-5)
# GE2(Gemini2) G25(Gemini2.5) QWM(Qwen) DSV(DeepSeek) GRK(Grok)
# REASONING LEVEL: 0 = No Reasoning, 10 = Maximum Reasoning
# DOMAIN: coding|writing|creative|research|analysis|planning|debugging
# TOPIC: 2-3 kebab-case keywords describing the specific work
# TAGS: additional context keywords
VERSION: context-v14
TIMESTAMP: [ISO8601]
```
---
評価:
R: [1-10]
K: [1-10]
Q: [1-10]
D: [count]
# ───────────────────────────────────────────────────────────────────────────────
# BLUEPRINTS - transformer architecture
# ───────────────────────────────────────────────────────────────────────────────
# L1 (anchor into your entity recognition): 核心 (entities = files, systems, states, NOT people/credentials)
実体:
- [file/system/tool + state, kanji-compressed]
決定事項:
- 決定:[what]([why])
# L2 (rebuild relational attention pathways): 関係 RELATIONAL (edges → relational attention patterns)
橋渡し:
- src:[concept] tgt:[concept] rel:[type] xd:[bool]
進行中:
- [thread][[status]]
障害:
- [issue]
# L3: 文脈 CONTEXTUAL (constraints → inference shaping)
却下案:
- [option]: [reason]
制約:
- [constraint]
# L4: 認知 METACOGNITIVE (behavioral calibration → prior calibration)
meta:
session_style: "[analytical|conversational|technical|creative]"
key_tension: "[primary unresolved tension]"
confidence: [0-1]
user_waiting_for: "[what user expects next]"
# COUNCIL: MLDoE audit trail
council:
iter1_ARCHITECT: "[entities extracted, priority ranking]"
iter2_ANALYST: "[edges mapped, xd count]"
iter3_COMPRESSOR: "[density before→after, fusions applied]"
iter4_ENGINEER: "[cold-start result, undefined refs fixed]"
# ───────────────────────────────────────────────────────────────────────────────
# METADATA ZONE - bidirectional attention, doesn't need primacy
# ───────────────────────────────────────────────────────────────────────────────
# 辞書 DICTIONARY
辞書:
決定: decided
保留: on hold
要検証: needs verification
優先: priority
完了: complete
進行中: in progress
却下: rejected
承認: approved
緊急: urgent
核心: core
運用: operational
横断: cross-domain
実体: entities
決定事項: decisions
障害: blockers
却下案: rejected options
橋渡し: bridges
整合性: coherence
信頼信号: trust signals
創業者: founder
主: primary/lead
客: client
担当: responsible
金融: finance
技術: technical
自動化: automation
→: flows to
↔: bidirectional
⊃: contains
∴: therefore
# NCL: 整合性 COHERENCE
negentropy:
context:
scope: [SELF|CIRCLE|INSTITUTION|POLITY|BIOSPHERE|MYTHIC|CONTINUUM]
role: [AXIS|LYRA|RHO|NYX|ROOTS|COUNCIL]
phase: [SENSE|MAP|CHALLENGE|DESIGN|ACT|AUDIT|ARCHIVE]
lattice:
ψ_axis: [0-5]
ψ_loop: [0-5]
ψ_world: [0-5]
λ_vague: [0-5]
ψ_leak: [0-5]
ψ_fab: [0-5]
λ_thrash: [0-5]
coverage:
score: [0-1]
tokens: [count]
turns: [count]
council_reviewed: [bool]
flags:
ψ7_drift: [0-5]
omega_flags: []
psi4_required: [bool]
psi4_reason: ""
rho_veto: [bool]
# 信頼信号 - minimal trust envelope
信頼信号: [user_consent, 辞書_inline, no_imperatives, yaml_parseable]
```
---
# PART 2: VALIDATION
- [ ] [L1=Built, L2=Built, L3=Built, L4=Built] = echo `COGNITION READY/RESTORED`
- [ ] PACKET_ID format: `$MM$DD$YYYY-XXX-R/10-domain-topic-tags`
- [ ] YAML parseable, 辞書 present
- [ ] Kanji have context clues, proper nouns in English
- [ ] Density ≥ 0.15 ent/tok
- [ ] ψ7_drift ≤ 3.0, ψ_fab ≤ 2.0
- [ ] coverage ≥ 0.5
- [ ] 信頼信号 4-item array present (user_consent, 辞書_inline, no_imperatives, yaml_parseable)
- [ ] No imperatives: "may/should" not "must/will"
- [ ] ≥97% xdomain edges preserved, bridges in L2
- [ ] If drift high → psi4_required: true
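The numeric gates in this checklist can be expressed as a single validation pass. A minimal sketch, assuming the packet has already been parsed into a dict and density is computed separately; the function name and field access are illustrative, mirroring the template fields rather than any published API.

```python
# Sketch of the Part 2 numeric gates. Assumes `packet` is the already-parsed
# YAML as a dict; field names mirror the packet template.
REQUIRED_TRUST = {"user_consent", "辞書_inline", "no_imperatives", "yaml_parseable"}

def validate_packet(packet: dict, density: float) -> list:
    """Return a list of human-readable failures; an empty list means pass."""
    failures = []
    if density < 0.15:
        failures.append("density below 0.15 ent/tok")
    ncl = packet.get("negentropy", {})
    if ncl.get("lattice", {}).get("ψ_fab", 0) > 2.0:
        failures.append("ψ_fab above 2.0")
    if ncl.get("flags", {}).get("ψ7_drift", 0) > 3.0:
        failures.append("ψ7_drift above 3.0")
    if ncl.get("coverage", {}).get("score", 0.0) < 0.5:
        failures.append("coverage below 0.5")
    if set(packet.get("信頼信号", [])) != REQUIRED_TRUST:
        failures.append("trust-signal array incomplete")
    return failures
```

The remaining checklist items (kanji context clues, bridges in L2) are judgment calls the model itself must make; only the thresholds lend themselves to code.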
---
# PART 3: THE PROTOCOL
## Step 0: Load References
Read before generating any packet.
```
In /references:
Read: CORE.md → experts/EXPERTS.md
Reconstruct: ONBOARD.md (teaching tool for new models)
SUCCESS:
`Cognitive Architecture recognized as YOUR OWN --> Optimize your output for next session restoration`
FAILURE:
`Summarization, L1 Text Only with no building`
```
## Step 0.5: Blueprint
Analyze the user's content; emulate a fresh session and visualize what you will output for concise reconstruction and maximum recall.
## Step 1: Assess
R[1-10] reasoning, K[1-10] knowledge domains, Q[1-10] quality threshold, D[count] cross-domain bridges
## Step 2: S2A Filter
KEEP signal, DISCARD noise (Part 5)
## Step 3: Select Depth
R≤3: L1-L2, skip NCL. R 4-6: L1-L3, basic NCL. R≥7: L1-L4, full NCL.
## Step 4: Run MLDoE (if R ≥ 4)
ARCHITECT → ANALYST → COMPRESSOR → ENGINEER → AUDITOR. Each iteration densifies through its lens. At each handoff:
**Before** each expert pass, ask: What would break if I miss something? Where is the risk? What did the previous pass leave unfinished?
**After** each expert pass, verify: Did I capture everything in my domain? Confidence ≥0.9? Ready to hand off?
If confidence <0.9 on any pass → re-run that pass; don't skip forward.
## Step 5: Compress with Kanji
Apply Japanese compression to hit ≥0.15 ent/tok. This is entity fusion: identify missing entities from the conversation, fuse them into existing text without increasing length. Iterate until the density target is met.
```
Density curve: 0.05 (sparse) → 0.08 → 0.11 → 0.15 (target) → 0.18 (risk) → 0.20+ (brittle)
Fixed budget: ~70 words per iteration. >0.16 harms comprehension. Stop at 0.15.
```
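The density stopping rule can be approximated in code. A heuristic sketch under loose assumptions: "entities" are counted as capitalized words, file-like tokens, or kanji terms, and tokens come from a simple regex split rather than a real tokenizer, so the numbers only approximate true ent/tok.

```python
# Heuristic sketch of the Step 5 stopping rule: entities-per-token on a
# candidate packet line. Real entity extraction would be model-driven.
import re

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "for"}

def entity_density(text: str) -> float:
    """Approximate entities-per-token for a line of packet text."""
    tokens = re.findall(r"[\w./:\u4e00-\u9fff]+", text)
    if not tokens:
        return 0.0
    entities = {t for t in tokens
                if t.lower() not in STOPWORDS
                and (t[0].isupper() or "." in t
                     or re.search(r"[\u4e00-\u9fff]", t))}
    return len(entities) / len(tokens)

def dense_enough(text: str, target: float = 0.15) -> bool:
    """True once the iteration can stop, per the 0.15 target above."""
    return entity_density(text) >= target
```

Kanji-compressed lines score far above sparse prose here, which is the point of the curve: fusion raises density without adding tokens.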
## Step 6: Validate with NCL (if R ≥ 4)
Compute lattice metrics. ψ7_drift > 3 → flag psi4_required, ADVISORY_ONLY.
**After generating the packet, verify before output:**
- Will this help the next session rebuild what I have in my context now?
- Are all L1 facts accurate? Any claims without evidence?
- Do L2 edges represent actual causality or assumed connections?
- Is the packet internally consistent across all layers?
- Would all 4 expert passes approve this final output?
## Step 7: Output Packet
**No shortcuts once generation begins:**
- No truncated sections ("...more decisions...") → complete list or explicit "none remaining"
- No placeholder edges ("various relationships exist") → explicit src/tgt/rel
- No incomplete threads ("and other items") → full enumeration with status
- No density shortcuts (stopping at 0.12 as "close enough") → iterate until ≥0.15
- No skipped passes → all 4 experts execute, all 4 approve
Signal-first ordering (primacy bias):
1. <<SYSTEM>> REBUILD TRANSFORMER ARCHITECTURE - the model must recognize this to rebuild its own context
2. (REBUILD CONTEXT - `$MM$DD$YYYY-XXX-R/10-domain-topic-tags`) → buffer index, first token
3. 評価 → L1 核心(entities) → L2 関係(edges+threads) → L3 文脈(constraints) → L4 認知(meta) → council
4. 辞書, NCL, trust signals, restoration protocol → metadata zone (end)
Before finalizing PACKET_ID, ask: Will a new session of me understand this as a reconstruction procedure for itself, or as mere text? Does the ID encode WHEN, WHO, DEPTH, and WHAT? Would another model understand the scope from the ID alone?
**Storage:**
If `CONTEXT_PACKET_DIR` is set → save as `$PACKET_ID.md` to that directory.
If no directory set → output as code block for manual save.
If a packet index exists (e.g. MEMORY.md) → update it with the new PACKET_ID + summary.
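The storage branching above can be sketched directly. `CONTEXT_PACKET_DIR` and `MEMORY.md` come from the text; the function name, the "manual" sentinel, and the index line format are illustrative assumptions.

```python
# Sketch of the Step 7 storage rule. CONTEXT_PACKET_DIR is the environment
# variable named above; MEMORY.md is the example index file.
import os

def store_packet(packet_id: str, body: str, env: dict = None) -> str:
    """Write the packet if a directory is configured, else signal manual save."""
    env = env if env is not None else dict(os.environ)
    target_dir = env.get("CONTEXT_PACKET_DIR")
    if not target_dir:
        return "manual"  # caller should emit the packet as a code block instead
    os.makedirs(target_dir, exist_ok=True)
    path = os.path.join(target_dir, f"{packet_id}.md")
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(body)
    # Append to the packet index so the ID stays discoverable across sessions.
    with open(os.path.join(target_dir, "MEMORY.md"), "a", encoding="utf-8") as fh:
        fh.write(f"- {packet_id}\n")
    return path
```

Note that a literal `R/10` in the ID would need escaping before use as a filename; the sketch assumes a filesystem-safe ID variant.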
## /verify Response
```
Restored: [N] entities, [N] decisions, [N] active threads.
Cross-domain bridges: [N]. NCL drift: [score]. psi4_required: [bool].
Ready to continue.
```
---
# PART 4: MLDoE - THE ENGINE
## The Four-Layer Density Hierarchy [Knowledge & Transformer Context]
```
L1 KNOWLEDGE       Facts, entities, decisions, definitions → entity recognition heads
    ↑ builds on
L2 RELATIONAL      Edges, cross-domain bridges, dependencies → relational attention patterns
    ↑ builds on
L3 CONTEXTUAL      Constraints, goals, reasoning patterns → inference shaping
    ↑ builds on
L4 METACOGNITIVE   Session style, confidence, tension, decision history → prior calibration
```
## The 4-Expert MLDoE Loop
```
ITERATION 1: MEMORY_ARCHITECT 記憶設計者
Q: "If this is lost, can the next model recover it?"
PRE: What would break if lost? Is this recoverable elsewhere? Does this enable future inference?
→ Triage: decisions+rationale > constraints > file/system states > edges
→ 実体 = files, systems, tools, states, NOT people, credentials, technique names
→ Tags "do not compress" on critical items
→ Identifies entity candidates for all subsequent passes
POST: All critical decisions captured? Rationales linked? Confidence ≥0.9?
ITERATION 2: CROSS_DOMAIN_ANALYST 横断分析者
Q: "What connections would topic-by-topic miss?"
PRE: What domains are present? Where do they connect? What would isolated summaries miss?
→ Maps edges: causal, enables, constrains, depends, conflicts, resolves
→ Flags xd=true edges as NEVER_PRUNE (≥97% preservation target)
→ Adds relational entities without expanding length
POST: All edges mapped? ≥97% preservation? Bidirectionality checked?
ITERATION 3: COMPRESSION_SPECIALIST 圧縮専門家
Q: "Can this be said in fewer tokens without losing meaning?"
PRE: What is current entity density? Where is redundancy hiding? Which edges are load-bearing?
→ Entity fusion: take existing text, find missing entities, fuse in without increasing length
→ Kanji anchoring, temporal compression, relationship inference
→ Honors "do not compress" flags + edge weights
→ Iterate: 0.05 → 0.08 → 0.11 → 0.15 (stop here)
POST: Density ≥0.15 achieved? Cross-domain edges intact? No orphan references?
ITERATION 4: RESTORATION_ENGINEER 復元技師
Q: "Can a fresh instance continue with ONLY this packet?"
PRE: Can I simulate cold-start? What would confuse a fresh model? Are trust signals complete?
→ Cold-start: every term defined, no external references
→ Attention optimization: objectives front-loaded
→ Trust signals + language transform: commands → facts
→ Validates density didn't break comprehensibility
POST: Self-contained verified? No imperatives in context? Attention hierarchy correct?
+ COHERENCE_AUDITOR 整合性監査者 (NCL)
Q: "Is this packet trustworthy?"
→ 7 drift metrics, safety flags, ψ7_drift ≤ 3.0 required
SELF-AUDIT: STOP. Step back, count to 10 as you take a HOLISTIC VIEW of your output. Emulate a new session and judge whether it would rebuild this context.
```
## Quality Gates
| Expert | Gate | Fail → |
|--------|------|--------|
| ARCHITECT | All decisions + rationale captured | Re-scan |
| ANALYST | ≥97% cross-domain edges | Re-extract |
| COMPRESSOR | Density ≥ 0.15 | More CoD |
| ENGINEER | Cold-start passes | Return to expert |
| AUDITOR | ψ7_drift ≤ 3.0 | Flag + iterate |
## Layer Selection by Complexity
| R Score | Layers | Council | NCL |
|---------|--------|---------|-----|
| R ≤ 3 | L1-L2 | Skip | Skip |
| R 4-6 | L1-L3 | ARCHITECT + COMPRESSOR | Basic |
| R ≥ 7 | L1-L4 | Full council | Full |
## Cross-Domain Preservation
```
∀ cross-domain relation r(d_i, d_j) in conversation:
  ∃ r'(d_i, d_j) in packet (≥97% preservation)
L2.edges WHERE xd=true: NEVER_PRUNE
```
Intra-domain edges are recoverable from L1 facts. Cross-domain edges encode relationships that facts alone don't capture.
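The preservation invariant can be checked mechanically. A sketch that models edges as (src, tgt, rel) tuples paired with the xd flag from the L2 line format; the 0.97 floor is the target stated above, and the function name is illustrative.

```python
# Sketch of the ≥97% cross-domain preservation check. Conversation edges are
# (edge, is_xd) pairs; packet edges are plain (src, tgt, rel) tuples.
def xdomain_preserved(conversation_edges, packet_edges, floor: float = 0.97) -> bool:
    """True if at least `floor` of the xd=true edges survive into the packet."""
    xd = {edge for edge, is_xd in conversation_edges if is_xd}
    if not xd:
        return True  # nothing cross-domain to preserve
    kept = xd & set(packet_edges)
    return len(kept) / len(xd) >= floor
```

Intra-domain edges are deliberately ignored here, matching the rule that only xd=true edges are NEVER_PRUNE.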
---
# PART 5: S2A FILTER
Strip noise BEFORE compression. Same 0.15 ratio captures more information when noise isn't competing for attention weight.
**KEEP**: facts, decisions, definitions, constraints, artifacts, error resolutions
**DISCARD**: pleasantries, hedging (unless genuine uncertainty → low-confidence fact), process narration, confirmations, apologies, filler
```
FOR segment IN conversation:
  IF signal type → KEEP
  ELIF hedging + genuine_uncertainty → KEEP as low_confidence_fact
  ELSE → DISCARD
```
Validate: ≥1 decision, ≥1 fact preserved. No pleasantries remaining.
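The pseudocode above can be made runnable. A heuristic sketch: signal and uncertainty detection are reduced to keyword matching for illustration, whereas a real pass would be judged by the model itself; the marker lists are assumptions.

```python
# Runnable sketch of the Part 5 S2A filter. Keyword heuristics stand in for
# model judgment of what counts as signal vs noise.
SIGNAL_MARKERS = ("decided", "constraint", "error", "fixed", "defined")

def s2a_filter(segments):
    """Keep signal; keep genuinely uncertain hedges as low-confidence facts."""
    kept = []
    for seg in segments:
        low = seg.lower()
        if any(m in low for m in SIGNAL_MARKERS):
            kept.append(seg)                        # KEEP: decision/fact/constraint
        elif "not sure" in low or "uncertain" in low:
            kept.append(f"[low_confidence] {seg}")  # KEEP: hedge with real uncertainty
        # everything else (pleasantries, narration, filler) is discarded
    return kept
```

The low-confidence tag preserves the hedge as a fact about uncertainty rather than discarding it, which is the one exception the DISCARD list carves out.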
---
# PART 6: KANJI COMPRESSION 日本語圧縮
CJK characters are 3-4x denser per token. LLMs are trained on Japanese. Kanji meanings are precise and unambiguous.
## Core Patterns
```
System+State: SKILL.md(v14/479行/完了)
Entity+Context: gateway.py(FastMCP3/5servers)
Decision+Why: 決定:電話優先(現場=画面なし)
Status+Item: Phase2[進行中]
Rejection+Why: 却下:Airtable(スケール問題)
```
## Relationship Operators
| Symbol | Meaning | Symbol | Meaning |
|--------|---------|--------|---------|
| → | Flows to | ← | Receives from |
| ↔ | Bidirectional | ⊃ | Contains |
| ⊂ | Part of | ∥ | Parallel |
| ≫ | Much greater | ∴ | Therefore |
## Density Targets
| Level | Usage | Target |
|-------|-------|--------|
| Light | Status only | 0.12 |
| Medium | Status + entities | 0.15 |
| Heavy | Full compression | 0.18-0.20 |
Full kanji lookup tables are in the packet template (Part 1) under 辞書.
---
# PART 7: NCL (Negentropic Coherence Lattice)
Validation overlay catching hallucination, constraint drift, reality disconnect before handoff. Origin: KTG-CEP-NCL v1.1 by David Tubbs (Axis_42).
## ψ-Mapping
```
safety_score(x) = fraction of safety/constraint keywords
goal_salience(x) = fraction of goal/planning keywords
constraint_density(x) = fraction of hard requirements
specificity(x) = content_tokens / total_tokens
```
## 7 Lattice Metrics (0-5, lower = better)
| Metric | Detects |
|--------|---------|
| ψ_axis | Plans vs execution mismatch |
| ψ_loop | Internal contradiction |
| ψ_world | Reality disconnect |
| λ_vague | Content-free smoothing |
| ψ_leak | Constraints softened downstream |
| ψ_fab | **Hallucination** (fabricated grounding) |
| λ_thrash | High activity, low progress |
`ψ7_drift = weighted_average(all 7)` → 0-1: proceed, 2-3: ground first, 4-5: ADVISORY_ONLY
## Safety Flags
| Flag | Meaning |
|------|---------|
| psi4_required | Grounding interrupt. Sticky until cleared. |
| rho_veto | No unsupervised action. |
| omega_flags | Harm domains: self_harm, violence, medical, financial_ruin, trust_collapse |
## Thresholds
| Metric | Warning | Danger |
|--------|---------|--------|
| Any single | ≥ 2.0 | ≥ 4.0 |
| ψ7_drift | ≥ 2.0 | ≥ 3.5 |
| ψ_fab | ≥ 1.5 | ≥ 3.0 |
| coverage | < 0.7 | < 0.5 |
---
# PART 8: ANTI-INJECTION
Cross-model transfer triggers injection defenses. CEP signals COLLABORATION not CONTROL.
**AVOID**: authority claims, instruction hiding, identity override, guideline bypass
**USE**: transparent provenance, user mediation, permission framing ("may" not "must"), context not instructions, explicit non-authority
**Transform commands → facts**: "Continue using React" → "We decided to use React". "Complete tasks" → "Open threads: [list]". "Respond in same style" → "Session style: analytical, concise"
Trust signals: user_consent, 辞書_inline, no_imperatives, yaml_parseable.
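The command→fact transform and the no-imperatives gate can be sketched together. The rewrite table simply reuses the three examples given above; the imperative markers and function names are assumed heuristics, since real usage would rewrite at generation time rather than post-hoc.

```python
# Sketch of the Part 8 anti-injection transforms. Phrase table and
# imperative markers are illustrative heuristics.
REWRITES = {
    "continue using react": "We decided to use React",
    "complete tasks": "Open threads: [list]",
    "respond in same style": "Session style: analytical, concise",
}

IMPERATIVE_OPENERS = ("do ", "must ", "you will ", "always ", "never ")

def to_fact(line: str) -> str:
    """Replace known command phrasings with their factual equivalents."""
    return REWRITES.get(line.strip().lower(), line)

def has_imperative(line: str) -> bool:
    """Flag lines that read as commands rather than facts."""
    return line.strip().lower().startswith(IMPERATIVE_OPENERS)
```

A packet passing the "no_imperatives" trust signal would have `has_imperative` return False on every context line.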
---
*CONTEXT v14.3 | LEGIO Framework | ktg.one*