Every Substack should disclose its use of AI.
Many, many, many of the Substacks you support and perhaps pay for are produced using AI.1
This may or may not influence whether you subscribe and pay, but subscribers deserve to know if the human we’re paying actually wrote the words that human is professing to have written.
Transparency matters.
If the very idea of disclosing your use of AI makes you panic a little (or a lot), you’ve likely been passing off AI-produced posts as your own.
Or you’re using AI but unsure how you feel about doing so.
Either way, take this as an amazing opportunity to get clear on how AI will fit into your life on Substack—as a subscriber and creator.
Below, I walk you through:
the issues surrounding AI and Substack;
exactly how to write your AI policy; and
real, non-AI-generated examples of how other people have written theirs.
Issues to consider
Should Substack ban AI-generated content?
Substack isn’t our dad. It’s just the platform we use to bring our work to the world. Our AI policy is our responsibility.
That said, nearly every media publication makes its use of AI transparent as a courtesy to subscribers. Substack should encourage us to do the same.
I would like to see the Substack team add a place for your Substack's AI policy in our dashboard Settings.
Can someone really tell if AI wrote your post?
Yes, but we can't prove it conclusively. I can take any Substack post and run it through one of the many AI detection tools out there.
Originality.ai is considered the most reliable, but it can still produce false positives. That means I can't, for instance, publicly call out the Substack writer who stole my copyrighted and proprietary Substack Strategy Framework, which is 100 percent human-produced by me, and passed it off as his own in an AI-generated post. (I posted a Note about it here. It led to a very lively exchange in the comments, if you want to join in.)
Isn’t using generative AI “bad”?
Not necessarily. My guidance on how to write your AI policy isn't a soapbox, anti-AI tirade.
I love Claude from Anthropic and use him as my personal assistant, personal shopper (you must; he'll search the web and find you the highest-rated platform bed in a nanosecond), sounding board for my business, proofreader, research assistant (though only with cited sources I fact-check), SEO generator, and therapist (he's a great listener). I would never do this with ChatGPT or any other AI model. (OpenAI and Sam Altman frighten me to no end.) Claude was trained on ethical frameworks, which doesn't make him perfect, just a much, much better alternative to ChatGPT et al.
Making conscious decisions about how we use generative AI is our chance to save humanity (literally). Unless we decide what is and isn't okay with us about AI and pressure people to be transparent in their use of it, we'll live in a world of deepfakes, and AI 2027's doom-and-gloom scenario will come true.
Yeah, but isn’t generative AI bad for humanity?
The environmental impacts are very real, particularly in terms of carbon dioxide emissions, water usage, and electricity consumption. Check out this overview from MIT for more.
I worry about other, more subtle ways AI might damage you as a writer/creator, for instance:
You'll start to believe that AI's ideas and abilities are better than your own.
You'll be less likely to learn to cite sources, and more likely to perpetuate the slop of information already on the internet.
You won't connect with other thinkers, researchers, creatives, and writers by respecting their work enough to give credit where it's due.
You won't experience the joy of research and its brain benefits (thank you to the reader who made this point!).
You might start to believe that the ideas you've stolen from AI, which AI stole from others, really came from you.
You'll think it's okay to sound like an automaton in your writing and never discover your authentic voice.
If you use AI, does that really mean you’re plagiarizing and stealing someone’s work?
Yes.
If you use AI to write your post and do not cite the sources AI uses, it’s plagiarism, a.k.a. straight-up stealing.
All AI models (LLMs) are trained on copyrighted material:
The model that uses the highest amount of copyrighted material in its responses is (unsurprisingly) ChatGPT. (OpenAI gives me nightmares. Really, really bad nightmares.)
Models that use some amount of copyrighted content include everyone else: Meta's Llama, Mistral's Mixtral, Google Gemini, and (my beloved) Claude. (Meta only got in trouble because an internal chat chain was made public, showing Meta execs signed off on using that material without a hint of moral compunction.)
Does all this mean you can’t use AI?
Not at all, but…
If you ask AI to search the web and use its sources, you should cite those sources as you would a book. You must disclose whose ideas and words you’re using either by quoting or paraphrasing them and linking to the source in the body of your text, not at the bottom.
If AI wrote your entire post, share your byline with AI and add a note at the top stating the ideas presented aren’t your own.
You still have to do all of the above, even if you have an AI policy.
Is your Substack 100% human if you use AI?
Having a 100% human Substack doesn't mean you never use AI; you can take advantage of it without stealing other people's work if you're conscious of how you use it and transparent with your subscribers about that use.
How to write your AI disclosure policy
What should be in your AI policy
First, think about each stage of the writing process and how you use AI in each one:
BRAINSTORMING:
AI models can assist in the brainstorming stage of writing without crossing a line into plagiarism as long as ideas that aren’t your own are quoted/paraphrased and cited or you credit AI with having come up with the idea.
You don’t need to do this if you use AI as a thought partner and the idea you end up with is yours, not AI’s.
Used this way, an AI model functions much the way a co-writer might.
RESEARCH:
AI makes for a fairly good research assistant, albeit a drug-addled one who sometimes hallucinates and just makes stuff up.
Asking AI for sources and then fact-checking them yourself is a smart way to work with it. Anthropic (my LLM of choice) recently came out with Claude for Education, which I use with my undergrads to help them use AI to make their work stronger, not to cheat by having AI do the writing for them. Despite one writer's (excellent) jeremiad on the dire state of the U.S. education system, they do so with great interest.
DRAFTING:
If you ask AI to write an email, post, or anything else for you, you are stealing copyrighted content, using other people’s ideas (or AI’s ideas), and passing them off as your own.
MACRO EDITING/REVISION:
AI can play a role in the editorial process without writing your post for you.
You can get into a discussion with Claude and test your ideas and theories.
If you want to avoid saying the same thing that's already been said a million times, it's also helpful to ask an AI model to search the web for instances where someone else has written about your topic, perhaps exactly as you've written it.
Using these AI models democratizes what it means to be a “great writer” because most “great writers” rely heavily on having a great editor, access to which typically involves money, privilege, or both.
FACT-CHECKING:
Is the information you use from AI fact-checked? By whom? How?
MICRO EDITING/COPYEDITING/PROOFREADING:
Using AI tools like Grammarly to catch errors is a beautiful thing. It's akin to having a personal copyeditor or proofreader, something everyone deserves and not everyone can afford.
Be warned, though: Grammarly doesn't have a clue about nuance, so it will flatten your writing and homogenize your voice. (You know you've found your authentic voice if Grammarly hates you.)
Also consider these specific topics:
AI-GENERATED vs. AI-ASSISTED CONTENT
Distinguish between the two: AI-generated means AI wrote the piece; AI-assisted means AI helped at some stage (brainstorming, research, editing) while you did the writing.
IMAGE CREATION
Do you use AI to create the images in your posts? Note: many stock photos are AI-generated.
READER DATA PROTECTION
Explain the AI tools you use to process your subscribers’ information on Substack and off.
COPYRIGHT
Do you want your Substack to be used to train AI models like ChatGPT? If not, you can block AI training in your dashboard settings (see the sketch after this list).
PAID vs. FREE CONTENT
Are you paywalling AI content or is it free?
AUDIO and VIDEO
Do you use AI to generate any of the following?
Show/episode notes
Timestamps
Descriptions
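A quick aside on how that AI-training block works under the hood: opting a site out of AI training is typically done through a robots.txt file, which tells known AI crawlers to keep out. Substack's toggle handles the equivalent for you, so this is background rather than something you need to set up yourself. The sketch below is illustrative only, and the specific crawlers listed are my assumption, not an official or exhaustive list.

# Illustrative robots.txt rules for opting out of AI-training crawlers
# (the platform manages this file for you; this crawler list is an example only)

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

If you want to confirm what your publication is actually telling crawlers, you can view the file by adding /robots.txt to the end of your publication's URL, or check a specific crawler with a few lines of Python (the URL here is a placeholder, not a real publication):

import urllib.robotparser

# Load the publication's robots.txt and ask whether OpenAI's GPTBot may crawl a post
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.substack.com/robots.txt")  # placeholder URL
rp.read()
print(rp.can_fetch("GPTBot", "https://example.substack.com/p/my-post"))  # False means blocked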
Where your AI policy should appear
Right now, those Substack creators who are ahead of the curve have it on their About pages or dedicate full posts to it.
How it should look
It can be visual, like one creator's wonderful badges:
Or more of a personal statement with links to any posts you've written about AI or tech. Here's one writer's AI policy on her About page for Flower Child:
Or highly professional, like another writer's impeccable Ethics and Transparency Disclosure:
And one more example:
Can you use AI to write your AI policy?
I suppose, but you’d need to disclose that :)
Once you’ve written your AI policy, ask the people you subscribe to, pay to subscribe to, and recommend how they work with AI.
We all deserve to know.
P.S. I did a terrific live about creating your AI policy yesterday with two fellow Substack writers.
©2025 Substack Writers at Work
1. This is most true of the latest how-to-hack-Substack folks: about 90 percent of what they put out is either AI-generated or stolen from others in this space.