Yesterday, I posted our whitepaper here as a 'discovery' because I was nervous about a cold launch. That was a mistake, and the community rightfully flagged it for lacking transparency. I apologize for the cloak and dagger.
We are reposting this today as an official Show HN to stand behind the tech properly.
The Problem: We built this because we hit the 'Linearity Barrier' with standard agents—after 50+ coding steps, context rot sets in and the agent starts hallucinating.
The Solution: Dropstone uses a Recursive Swarm Topology. Instead of linear prediction, it spawns parallel 'Scout' agents to explore solution paths and uses Entropy Pruning to kill branches that hallucinate.
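If it helps to see the shape of the idea, here is a toy sketch (not our production D3 code): the Branch/propose_step names, the entropy scores, and the pruning threshold are all made up for illustration, with random numbers standing in for model-reported uncertainty.

    # Illustrative sketch of entropy-pruned parallel scouts (not the D3 implementation).
    # propose_step and the threshold values are hypothetical stand-ins.
    import random
    from dataclasses import dataclass, field

    ENTROPY_THRESHOLD = 0.8   # prune branches whose mean step-entropy exceeds this
    SCOUTS_PER_NODE = 3       # parallel 'Scout' branches spawned per expansion
    MAX_DEPTH = 4

    @dataclass
    class Branch:
        steps: list = field(default_factory=list)      # proposed coding steps so far
        entropies: list = field(default_factory=list)  # per-step uncertainty scores

        def mean_entropy(self) -> float:
            return sum(self.entropies) / len(self.entropies)

    def propose_step(branch):
        """Stand-in for a Scout proposing the next step plus an uncertainty score."""
        step = f"step-{len(branch.steps) + 1}"
        entropy = random.random()  # in practice: model-reported entropy, not random noise
        return step, entropy

    def expand(frontier):
        """Spawn scouts from every live branch, then entropy-prune the children."""
        children = []
        for branch in frontier:
            for _ in range(SCOUTS_PER_NODE):
                step, entropy = propose_step(branch)
                children.append(Branch(branch.steps + [step], branch.entropies + [entropy]))
        # Entropy Pruning: kill branches that look like they are drifting or hallucinating.
        return [b for b in children if b.mean_entropy() <= ENTROPY_THRESHOLD]

    frontier = [Branch()]
    for _ in range(MAX_DEPTH):
        frontier = expand(frontier)
        if not frontier:
            break
    print(f"surviving branches: {len(frontier)}")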
I'm here to answer any technical questions about our D3 Engine, the latency trade-offs of swarm architecture, or the 'Trajectory Vectors' we use for context management.
You are absolutely right. I addressed this in the thread above, but to be clear: Yes, I am part of the team. I shouldn't have tried to frame it as a 'discovery'. Apologies.
I am incredibly sorry about that. It sounds like the agent hit an infinite reasoning loop and burned your credits—that is a critical failure on our end. I want to fix this personally. Please email me at tom@blankline.org (or just reply here if you prefer). I will refund your $15 immediately.
I'm also adding free credits to your account so you can test the D3 update, which patches this loop, when it drops tomorrow.
We clearly have work to do on the beta fail-safes.
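For anyone curious what that kind of fail-safe looks like, here is a rough sketch of a step budget plus loop detection; run_with_guard and run_agent_step are placeholder names, not our actual internals, and real state fingerprinting would be more involved than a hash.

    # Minimal sketch of a fail-safe that aborts a run when the agent exceeds a step
    # budget or revisits the same reasoning state. Hypothetical names, not Dropstone code.
    MAX_STEPS = 50

    def run_with_guard(initial_state, run_agent_step):
        seen = set()
        state = initial_state
        for step in range(MAX_STEPS):
            fingerprint = hash(state)          # assumes state is hashable
            if fingerprint in seen:
                raise RuntimeError(f"loop detected at step {step}; aborting before burning more credits")
            seen.add(fingerprint)
            state = run_agent_step(state)
            if state is None:                  # convention here: None means the task finished
                return step
        raise RuntimeError(f"step budget of {MAX_STEPS} exhausted; aborting")

    # An agent that never makes progress gets caught on the second step.
    try:
        run_with_guard("stuck", lambda s: s)
    except RuntimeError as err:
        print(err)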
To clarify the confusion: D2 is the beta engine currently on our website. D3 is the new architecture in these papers. We developed D3 specifically to solve the context limits we hit with D2.
I posted these papers 'anonymously' because I wanted to see if the D3 research (specifically the Trajectory Vectors) stood on its own merit without the bias of a 'Show HN' launch. That was a mistake, and I apologize for the cloak and dagger.
The good news: We are officially releasing the D3 update tomorrow, as per our internal schedule. The papers were just the pre-read. You'll be able to test the 'Flash-Gated Consensus' yourself in 24 hours.
We’ve been exploring how long-term memory can make AI coding agents more collaborative and context-aware. The D2 Engine is part of that effort — it gives Dropstone a persistent understanding of codebases across sessions. Would love feedback from developers experimenting with AI-driven tools.
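To make "persistent understanding" a bit more concrete, here is a toy sketch of per-codebase memory that survives across sessions; the MemoryStore class and the JSON file layout are illustrative assumptions, not the D2 Engine's actual storage layer.

    # Rough sketch of per-codebase persistent memory: notes keyed by repository,
    # written to a small on-disk JSON file so they survive across sessions.
    import json
    from pathlib import Path

    class MemoryStore:
        def __init__(self, path="~/.dropstone_memory.json"):
            self.path = Path(path).expanduser()
            self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

        def remember(self, repo, note):
            self.data.setdefault(repo, []).append(note)
            self.path.write_text(json.dumps(self.data, indent=2))

        def recall(self, repo):
            return self.data.get(repo, [])

    store = MemoryStore()
    store.remember("acme/backend", "auth lives in services/auth; tests use pytest fixtures")
    print(store.recall("acme/backend"))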
This is a fascinating study on how context window size affects language models. It's interesting that smaller context windows can yield more efficient yet still accurate models, even on complex tasks like question answering. The findings could matter for a wide range of applications, from chatbots and virtual assistants to machine translation and summarization, and I'm curious to see how they hold up in real-world deployments.