# Why Your LinkedIn Posts Sound Like Everyone Else's
Scroll through LinkedIn for two minutes. Count the posts that open with "In today's rapidly evolving landscape." Count the ones that end with "What do you think? Let me know in the comments." Count the bullet lists with exactly three items, each making a vaguely inspirational point.

You stopped counting, didn't you? There are too many.

## The sameness problem

AI writing tools did something remarkable: they made publishing easy for everyone. The barrier between "I have a thought" and "I have a post" dropped to almost zero.

That was genuinely good. For about six months.

Then something happened. Feeds started looking... identical. The same sentence structures. The same transitions. The same forced enthusiasm. Everyone was publishing more, but nobody was saying anything that sounded like them.

This was not a bug. It was the system working exactly as designed.

## Why it happens

Most AI writing tools work by predicting the most likely next word. That is literally what a language model does. And the most likely next word is, by definition, the most common one. The most average one. The one that the greatest number of people would use.

So when you ask a tool to write a LinkedIn post about, say, hiring, you get the same post that everyone else gets. "Hiring is broken. Here are 5 things I learned." The tool is not being lazy. It is giving you the statistical center of all the LinkedIn posts it has ever seen.

Your voice is not at the statistical center of anything. That is what makes it yours.

## The workaround that doesn't work

Some people try to fix this with detailed prompts. "Write in a casual tone." "Be conversational." "Sound authentic." These instructions help a little. But "casual" and "conversational" are still generic categories. There are a thousand different ways to be casual. The prompt does not tell the model your way.

Others try editing after the fact. They take the AI output and rewrite half of it.
This works better, but at that point you are doing most of the work anyway. The tool saved you a blank page, not actual time.

## What voice actually means

Your writing voice is not one thing. It is dozens of small habits stacked together. The length of your opening sentence. Whether you use questions or statements to make a point. The ratio of abstract ideas to concrete examples. How often you use "I" versus "we" versus "you." Whether you end paragraphs with a punch or let them trail off.

Nobody sits down and decides these things. They accumulate over years of writing. They are shaped by what you read, where you grew up, what you studied, who you argue with on the internet. They are hard to articulate, but easy to recognize.

That is the real challenge: not generating words, but generating your words.

## A different approach

This is the problem that got us thinking about Doppelscript. Instead of starting from a generic model and hoping the prompt steers it toward your voice, we start from your voice itself.

You give us samples of your writing. We analyze them and distill your voice into a set of plain-English rules. Not vibes. Not a "tone" slider. Actual, readable rules like "opens with short declarative sentences" or "avoids corporate jargon" or "uses specific numbers instead of vague qualifiers."

You can read every rule. Edit the ones that feel off. Delete the ones that don't fit. Then, when you generate a post, the model follows your rules instead of following the statistical average.

The result is not perfect on the first try. But it sounds like something you would actually write. And the edits you make afterward are small ones, not rewrites.

## The real question

The question is not whether AI should help you write. It already does, and it should. The question is whether the help should flatten everyone into the same voice or amplify the voice you already have.

Your feed has enough posts that sound like everyone else. Yours do not need to be another one.