Feedback 1: The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
As a non-Cursor user who does AI programming, there is nothing there to make me want to try it out.
Feedback 2: I feel any new agentic AI tool for programming should have a comparison against Aider[1] which for me is the tool to benchmark against. Can you give a compelling reason to use this over Aider? Don't just say "VSCode" - I'm sure there are extensions for VSCode that work with Aider.
As an example of the questions I have:
- Does it have something like Aider's repomap (or better)?
Thanks for the feedback. We'll definitely add a feature list. To answer your question, yes - we support Cursor's features (quick edits, agent mode, chat, inline edits, links to files/folders, fast apply, etc) using open source and openly-available models (for example, we haven't trained our own autocomplete model, but you can bring any autocomplete model or "FIM" model).
We don't have a repomap or codebase summary - right now we're relying on .voidrules and Gather/Agent mode to look around to implement large edits, and we find that works decently well, although we might add something like an auto-summary or Aider's repomap before exiting Beta.
Regarding context - you can customize the context window and reserved amount of token space for each model. You can also use "@ to mention" to include entire files and folders, limited to the context window length. (you can also customize the model's reasoning ability, think tags to parse, tool use format (gemini/openai/anthropic), FIM support, etc).
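To make "reserved token space" concrete, here's a rough sketch of the general idea (my own illustration, not Void's actual code): the prompt budget is the context window minus whatever is set aside for the model's reply, and @-mentioned files get truncated to fit.

```python
# Hypothetical illustration of a context/token budget; not Void's actual code.
def count_tokens(text: str) -> int:
    # Crude estimate: ~4 characters per token. Real tools would use the model's tokenizer.
    return max(1, len(text) // 4)

def build_prompt(system: str, mentioned_files: list[str], question: str,
                 context_window: int = 128_000, reserved_output: int = 8_000) -> str:
    """Pack @-mentioned file contents into whatever budget is left after reserving reply space."""
    budget = max(0, context_window - reserved_output
                 - count_tokens(system) - count_tokens(question))
    parts = []
    for text in mentioned_files:
        if count_tokens(text) > budget:
            text = text[: budget * 4]   # truncate the file that only partially fits
        parts.append(text)
        budget -= count_tokens(text)
        if budget <= 0:
            break
    return "\n\n".join([system, *parts, question])
```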
An important Cursor feature that no one else seems to have implemented yet is documentation indexing. You give it a base URL and it crawls and generates embeddings for API documentation, guides, tutorials, specifications, RFCs, etc in a very language agnostic way. That plus an agent tool to do fuzzy or full text search on those same docs would also be nice. Referring to those @docs in the context works really well to ground the LLMs and eliminate API hallucinations
Back in 2023 one of the cursor devs mentioned [1] that they first convert the HTML to markdown then do n-gram deduplication to remove nav, headers, and footers. The state of the art for chunking has probably gotten a lot better though.
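For anyone curious, here's a toy sketch of that n-gram deduplication idea (my own reconstruction of the technique, not Cursor's pipeline): shingle each crawled page into word n-grams, then drop lines whose shingles show up on most pages, which is what nav bars, headers, and footers tend to do.

```python
# Toy n-gram boilerplate removal for crawled doc pages (illustrative only).
from collections import Counter

def shingles(line: str, n: int = 5) -> set[tuple[str, ...]]:
    words = line.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)} or {tuple(words)}

def strip_boilerplate(pages: list[str], threshold: float = 0.6) -> list[str]:
    """Drop lines whose n-grams recur on more than `threshold` of the pages."""
    page_lines = [p.splitlines() for p in pages]
    doc_freq = Counter()
    for lines in page_lines:
        doc_freq.update(set().union(*(shingles(l) for l in lines if l.strip())))
    cutoff = threshold * len(pages)
    cleaned = []
    for lines in page_lines:
        kept = [l for l in lines
                if not l.strip()
                or max((doc_freq[s] for s in shingles(l)), default=0) < cutoff]
        cleaned.append("\n".join(kept))
    return cleaned
```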
The continue.dev plugin for Visual Studio Code provides documentation indexing. You provide a base URL and a tag. The plugin then scrapes the documentation and builds a RAG index. This allows you to use the documentation as context within chat. For example, you could ask @godotengine what is a sprite?
Cursor’s doc indexing is actually one of the few AI coding features that feels like it saves time. Embedding full doc sites, deduping nav/header junk, then letting me reference @docs inline actually improves context grounding instead of guessing APIs.
Context7 is missing lots of info from the repos it indexes and is getting bloated with similar-sounding repos, which is becoming confusing for LLMs.
Can you elaborate on how Context7 handles document indexing or web crawling? If I connect to the MCP server, will it be able to crawl websites fed to it?
This is a good point. We've stayed away from documentation, assuming it's more of a browser-agent task, and I agree with other commenters that this would make a good MCP integration.
I wonder if the next round of models trained on tool-use will be good at looking at documentation. That might solve the problem completely, although OSS and offline models will need another solution. We're definitely open to trying things out here, and will likely add a browser-using docs scraper before exiting Beta.
I agree that on the face of it this is extremely useful. I tried using it for multiple libraries and it was a complete failure, though: it failed to crawl fairly standard MkDocs and Sphinx sites. I guess it's better for the 'built in' ones that they've pre-indexed.
I use it mostly to index stuff like Rust docs on docs.rs and rendered mdbooks. The RAG is hit or miss but I haven’t had trouble getting things indexed.
I've used both Cursor and Aider but I've always wanted something simple that I have full control on, if not just to understand how they work. So I made a minimal coding agent (with edit capability) that is fully functional using only seven tools: read, write, diff, browse, command, ask, and think.
I can just disable the `ask` tool, for example, to have it go fully autonomous on certain tasks.
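For anyone wondering what a loop over tools like that can look like, here's a hedged sketch showing four of the seven tools (my own illustration, not the parent's actual code; `llm` is a placeholder for whatever chat-completion call you use). Dropping `ask` from the table is all it takes to go autonomous.

```python
# Illustrative dispatch loop for a minimal tool-using coding agent (not the parent's actual code).
import subprocess
from pathlib import Path

def read(path: str) -> str:
    return Path(path).read_text()

def write(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} chars to {path}"

def command(cmd: str) -> str:
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

def ask(question: str) -> str:
    return input(f"[agent asks] {question}\n> ")

TOOLS = {"read": read, "write": write, "command": command, "ask": ask}
# del TOOLS["ask"]  # drop the human-in-the-loop tool to run fully autonomously

def run_agent(llm, task: str, max_steps: int = 20) -> str:
    # `llm` is a placeholder: given the history and tool names, it returns either
    # {"type": "final", "content": ...} or {"type": "tool", "tool": ..., "args": {...}}.
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = llm(history, tools=list(TOOLS))
        if step["type"] == "final":
            return step["content"]
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"role": "tool", "name": step["tool"], "content": result})
    return "step limit reached"
```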
> The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
That's all on the website, not the README, but yes, a bulleted list or the same info from the site would work well.
Am I the only one who has had bad experiences with Aider? Each time I've tried it, I've had to wrestle with and beg the AI to do what I wanted, almost always ending with me just taking over and doing it myself.
If nearly every time I use it to accomplish something it gets things 40-85% correct and I have to go in and fix the other 15-60%, what is the point? It's as slow as hand-writing code then, if not slower, and my flow with Continue is simply better:
1. CTRL L block of code
2. Ask a question or give a task
3. I read what it says and then apply the change myself by CTRL C and then tweaking the one or two little things it inevitably misunderstood about my system and its requirements
Aider is quite configurable; you need to look at the leaderboard and copy one of the high-performing model/config setups. Additionally, you should auto-load files such as the README and your project's coding guidelines.
Aider's killer features are its integration of automated lint/typecheck/test-and-fix loops and its git checkpointing. If you're not setting up these features, you aren't getting the full value proposition from it.
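To make that loop concrete, here's a rough sketch of the pattern (my own illustration of the general idea, not Aider's implementation; `llm_edit` is a placeholder for whatever applies the model's edits to the working tree):

```python
# Illustrative lint/typecheck/test-and-fix loop with git checkpoints (not Aider's actual code).
import subprocess

def sh(cmd: str) -> subprocess.CompletedProcess:
    return subprocess.run(cmd, shell=True, capture_output=True, text=True)

def fix_loop(llm_edit, request: str,
             lint_cmd: str = "ruff check .", test_cmd: str = "pytest -q",
             max_rounds: int = 5) -> str:
    feedback = request
    for i in range(max_rounds):
        llm_edit(feedback)                        # placeholder: model edits files in the repo
        sh(f'git commit -am "checkpoint {i}"')    # checkpoint so any round can be rolled back
        lint, tests = sh(lint_cmd), sh(test_cmd)
        if lint.returncode == 0 and tests.returncode == 0:
            return "clean"
        feedback = ("Fix these failures:\n"
                    + lint.stdout + lint.stderr + tests.stdout + tests.stderr)
    return "gave up; reset to an earlier checkpoint with git"
```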
I've never used the tool, but it seems both Aider and Cursor are not at their strongest out of the box? I've read similar things about Cursor needing custom configuration so it picks up coding guidelines, etc. Is there some kind of agreed, documented best-practice standard, or just trial-and-error best practices that users share?
Aider's leaderboard is a baseline "best practice" for model/edit format/mode selection. Beyond that, it's basically whatever you think are best practices in engineering and code style, which you should capture in documents that can serve double duty for both AI and human contributors. Given that a lot of this stuff is highly contentious, it's really up to you to pick and choose what you prefer.
And what do you do if you value privacy and don't want to share everything in your project with Silicon Valley, or you don't want to spend $8/hr to watch Claude do your hobby for you?
I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
If I have to write a prompt as long as Claude's system prompt in order to get reliable edits, I'm not sure I've saved any time at all.
At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I haven't used local models. I don't have the 60+ GB of VRAM to do so.
I've tested Aider with Gemini 2.5 using prompts as basic as "write a TS file with Puppeteer to load this URL, click the button identified by x, fill in input y, loop over these URLs", and it performed remarkably well.
LLM performance is 100% dependent on the model you're using, so you can hardly generalize from a small model you run locally on a CPU.
Local models just aren't there yet in terms of being able to host locally on your laptop without extra hardware.
We're hoping that one of the big labs will distill an ~8B to ~32B parameter model that performs at SOTA on benchmarks! This would be huge for cost, and it would probably make it reasonable for most people to code with agents in parallel.
This is exactly the issue I have with copilot in office. It doesn't learn from my style so I have to be very specific how I want things. At that point it's quicker to just write it myself.
Perhaps when it can dynamically learn on the go this will be better. But right now it's not terribly useful.
I really wonder why dynamic learning hasn't been explored more. It would be a huge moat for the labs (everyone would have to host and dynamically train their own model with a major lab). Seems like it would make the AI way smarter too.
> At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I don't see it as an admission. I'd wager 99% of Aider users simply haven't tried local models.
Although I would expect they would be much worse than Sonnet, etc.
> I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
Examples? Aider is a great tool and much (probably most) of it is written by AI.
It feels like everyone and their mother is building coding agents these days. Curious how this compares to others like Cline, VS Code Copilot's Agent mode, Roo Code, Kilo Code, Zed, etc. Not to mention those that are closed source, CLI based, etc. Any standout features?
Void dev here! The biggest players in AI code today are full IDEs, not just extensions, and we think that's because they simply feel better to use by having more control over the UX.
There are certainly a lot of alternatives that are plugins(!), but our differentiation right now is being a full open source IDE and having all the features you get out of the big players (quick edits, agent mode, autocomplete, checkpoints).
Surprisingly, all of the big IDEs today (Cursor/Windsurf/Copilot) route your messages through their backend whenever you send one, and there is no other open source full-IDE alternative (besides Void). With Void, your connection to providers is direct, and it's a lot easier to spin up your own models/providers and host locally or use whatever provider you want.
We're planning on building Git branching for agents in the next iteration when LLMs are more independent, and controlling the full IDE experience for that will be really important. I worry plugins will struggle.
I should have been more careful with my wording - I was talking about major VS Code-based IDEs as alternatives. Zed is very impressive, and we've been following them since before Void's launch!
Maybe I live in a bubble, but it's surprising to me that nobody mentions JetBrains in all these discussions, whose IDEs, in my professional working experience, are the only ones anyone uses :shrug:
Their tools are wildly popular in many spaces. They aren't for everyone, though. It's totally believable that no one in your circle uses their tools, but they aren't niche.
Pycharm is extremely popular in the data science world. The Community Edition is free and has 99% of the features most people need. Even when developing with Cursor, I find myself going back to Pycharm just to use the debugger, which I greatly prefer to the debugger used in these VS Code forks.
> The biggest players in AI code today are full IDEs, not just extensions,
Claude Code (neither IDE nor extension) is rapidly gaining ground, its biggest current limitation being cost, which is likely to get resolved sooner rather than later (Gemini Code, anyone?). You're right about the right now, but with the pace at which things are moving, the trends are honestly more relevant than the status quo.
Just want to share our thinking on terminal-based tools!
We think in 1-2 years people will write code at a systems level, not a function level, and it's not clear to us that you can do that with text. Text-based tools like Claude Code work in our text-based code systems today, but I think describing algorithms to a computer in the future might involve more diagrams, and the terminal will not be ideal for that. That's our reasoning against building a tool in the terminal, but it clearly works well today, and it's the simplest way for the labs to train/run terminal tool-use agents.
Diagrams are great at providing a simplified view of things but they suck ass when it comes to providing details.
There's a reason why fully generating systems from them died 20 years ago, and it wasn't just because the code gen failed. Finding a bug in your spec when it's a mess of arrows and connections can be nigh impossible.
This is completely true, and it's a really common objection.
I don't imagine people will want to fully visualize codebases in a giant unified diagram, but I find it hard to imagine that we won't have digests and overviews that at least stray from plaintext in some way.
I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
> I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
Sounds exactly like what DeepWiki is doing from the Devin AI Agent guys: https://deepwiki.com
Hey by the way I hear all communication between people is going to shift to pictograms soon. You know -- emoji and hieroglyphs. Text just isn't ideal, you know
Every system can be translated to text, though. If there is one thing LLMs have essentially always been good at, it is processing written language.
Spending too much time on HN and other spaces (including offline) where people talk about what they're doing. Making LLM-based things has also been my job since pretty much the original release of GPT3.5 which kicked off the whole industry, so I have an excuse.
The big giveaway is that everyone who has tried it agrees that it's clearly the best agentic coding tool out there. The very few who change back to whatever they were using before (whether IDE fork, extension or terminal agent), do so because of the costs.
Relevant post on the front page right now: A flat pricing subscription for Claude Code [0]. The comment section supports the above as well.
I personally tried it and found it way more confusing to use than Cursor with Claude 3.7 Sonnet. The CLI interface seems to lend itself more to «vibe coding», where you never actually work with or look at the actual code. That is why I think Cursor and IDEs are more popular than CLI-only tools.
Together with 3.7 Sonnet. And the claim was that it is rapidly gaining ground, not that it sparked initial interest. I still don’t see much proof of adoption. This is actually the first I’ve heard about anyone actually actively using it since its launch.
>This is actually the first I’ve heard about anyone actually actively using it
I've been reaching for Claude Code first for the last couple of weeks. They offered me a $40 credit after I tried it and didn't really use it, maybe 6 weeks ago, but since then I've been using it a lot. I've spent that credit and another $30, and it's REALLY good. One thing I like about Claude Code is you can run "/init" and it will create a "CLAUDE.md" that saves off its understanding of the code, and then you can modify it to give it some working knowledge.
I've also tried Codex with OpenAI and o4-mini, and it works very well too, though I have had it crash on me which claude has not.
I did try Codex with Gemini 2.5 Pro Preview, but it is really weird. It seems to not be able to do any editing, it'll say "You need to make these edits to this file (and describe high level fixes) and then come back when you're done and I'll tell you the edits to do to this other file." So that integration doesn't seem to be complete. I had high hopes because of the reviews of the new 2.5 Pro.
I also tried some Claude-like use in the AI panel in Zed yesterday and made a lot of good progress; it seemed to work pretty well, but then at some point it zeroed out a couple of files. I think I might have reached a token limit: it was saying "110K out of 200K" but then something else said "120K", and I wonder if that confused it. With Codex you can compact the history; I didn't see that in Zed. Then at some point my Zed switched from editing to needing me to accept every change. I used nearly the entire trial Zed allowance yesterday asking it to implement a Galaga-inspired game, with varying success.
The versioning and git branching sounds really neat, I think! Can you say more about that? Curious if you've looked at/are considering using Jujutsu/JJ[0] in addition or instead of git for this, I've played with it some, but been considering trying it more with new AI coding stuff, it feels like it could be a more natural fit than actually creating explicit commits for every change, while still tracking them all? Just a thought!
Interesting, thanks for sharing! We planned on spinning up a new Git branch and shallow Git clone (or possibly worktree/something more optimized) for each agent, and also adding a small auto-merge-with-LLM flow, although something more granular like this might feel better. If we don't use a versioning tool like JJ at first (may just use Git for simplicity at first), we will certainly consider it later on, or might end up building our own.
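If it helps to make that concrete, here's a hedged sketch of one branch and working directory per agent via `git worktree` (an assumption about how this could work, not Void's planned implementation):

```python
# Illustrative: one branch and working directory per agent (assumption, not Void's design).
import subprocess

def spawn_agent_workspace(repo: str, agent_id: str) -> str:
    branch = f"agent/{agent_id}"
    workdir = f"{repo}-{agent_id}"
    # Creates a new branch and a separate checkout the agent can edit in isolation.
    subprocess.run(["git", "-C", repo, "worktree", "add", "-b", branch, workdir], check=True)
    return workdir  # merge or discard the agent's branch when it finishes

def merge_agent_branch(repo: str, agent_id: str) -> None:
    # A real flow would insert an LLM-assisted conflict-resolution pass here.
    subprocess.run(["git", "-C", repo, "merge", "--no-ff", f"agent/{agent_id}"], check=True)
```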
If you're open to something CLI-based, my project Plandex[1] offers git-based branching (and granular versioning) for AI coding. It also has a sandbox (also built on git) that keeps cumulative changes separate from project files until they're ready to apply.
Isn't continue.dev also open source and not using 'their backend' when sending stuff? I haven't used it in a while, but I know it had support for Llama, local models for tab completion, etc.
The extensions API lets you control the sidebar, but you basically don't have control over anything in the editor. We wouldn't have been able to build our inline edit feature, or our navigation UI if we were an extension.
Big fan of Continue btw! There's a small difference in how we handle inline edits - if you've used inline edits in Cursor/Windsurf/Void you'll notice that a box appears above the text you are selecting, and you can type inside of it. This isn't possible with VS Code extensions alone (you _have_ to type into the sidebar).
If I understand your question correctly - Cline and Roo both display diffs by using built-in VS Code components, while Cursor/Windsurf/Void have built their own custom UI to display diffs. Very small detail, and just a matter of preference.
It's about whether the tool can edit just a few lines of the file, or whether it needs to stream the whole file every time - in effect, editing the whole file even though the end result may differ by just a few lines.
I think editing just a part of the file is what Roo calls diff editing, and I'm asking if this is what the person above means by line edits.
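To make the distinction concrete, here's a toy version of a search/replace-style "diff edit" (my own illustration of the general idea, not Roo's or anyone's exact format): the model emits only the original snippet and its replacement, instead of streaming the whole file back.

```python
# Toy "diff edit": replace one matched snippet instead of streaming the whole file back.
from pathlib import Path

def apply_diff_edit(path: str, original: str, updated: str) -> None:
    text = Path(path).read_text()
    if text.count(original) != 1:
        raise ValueError("the edit block must match exactly one location in the file")
    Path(path).write_text(text.replace(original, updated, 1))

# Whole-file streaming, by contrast, re-emits every line even when only two of them change:
# Path(path).write_text(model_rewritten_whole_file)
```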
I think it'd be worthwhile to call out in a FAQ/comparison table specifically how something like an "AI powered IDE" such as Cursor/Void differs from just using an IDE + a full-featured agentic plugin (VS Codium + Cline).
I agree. Having used Cline, I am not sure what advantages this would offer, but I would like to know (beyond things like "it's got an open source IDE"; Cline has that too, specifically because I can use it in my open source IDE).
I think it's worth mentioning that the Theia IDE is a fully open source VS Code-compatible IDE (not a fork of VS Code) that's actively adding AI features with a focus on transparency and hackability.
We considered Theia, and even building our own IDE, but obviously VSCode is just the most popular. Theia might be a good play if Microsoft gets more aggressive about VSCode forks, although it's not clear to us that people will be spending their time writing code in 1-2 years. Chances are definitely not 0 that we end up moving away from VSCode as things progress.
It's the most popular because the tech is decades old. You're all rushing to copy obsolete technology. Now we have 10 copies of an obsolete technology.
I mean I guess I should thank the 10 teams who forked VSCode for proving beyond all reasonable doubt that VSCode is architecturally obsolete. I was already trying to make that argument, but the 10 forks do it so much better.
Yep, Void is a VSCode fork, but we're definitely not wed to VSCode! Building our own IDE/browser-port is not out of the picture. We'll have to see where the next iteration of tool-use agents takes us, but we strongly feel writing typescript/rust/react is not the endgame when describing algorithms to a computer, and a text-based editor might not be ideal in 10 years, or even 2.
OpenAI chose to acquire Windsurf for $3B instead of building something like Void; a very curious decision. Awesome project, will be closely following this.
>> The biggest players in AI code today are full IDEs, not just extensions
Are you sure? I have some expertise with my IDE and a wide range of other extensions that solve problems for me; I've learned shortcuts, troubleshooting, and where and whom to ask for help. But now you're telling me that I am better off leaving all that behind, and it's better for me? ;o
My 2c: I rarely need agent mode. As an older engineer, I usually know exactly what needs to be done and have no problem describing to the LLM what to do to solve what I'm aiming for. Agent mode seems to be more for novice developers who are unsure how tasks need to be broken down and the strategy by which they are then solved.
I’m a senior engineer and I find myself using agents all the time. Working on huge codebases or experimenting with different languages and technologies makes everybody “novice”.
Agent mode seems to be better at realizing all the places in the code base that need to be updated, particularly if the feature touches 5+ files, whereas the editor starts to struggle with features that touch 2-3 files. "Every 60 ticks, predict which items should get cached based on the user's direction of travel, then fetch, transform, and cache them; when new items need to be drawn, check the cache first and draw from there, otherwise fetch and transform on demand." This touches the core engine, user movement, file operations, graphics, etc., and agent mode seems to have no problem with it at all.
Personally, I’ve found agents to be a great “multitasking” tool.
Let’s say I make a few changes in the code that will require changes or additions to tests. I give the agent the test command I want it to run and the files to read, and let it cycle between running tests and modifying files.
While it’s doing that, I open Slack or do whatever else I need to do.
After a few minutes, I come back, review the agent’s changes, fix anything that needs to be fixed or give it further instructions, and move to the next thing.
Same here. It’s fine for me to use the ChatGPT web interface and switch between it and my IDE/editor.
Context switching is not the bottleneck. I actually like to go away from the IDE/keyboard to think through problems in a different environment (so a voice version of chatgpt that I can talk to via my smartwatch while walking and see some answers either on my smartglasses or via sound would be ideal… I don’t really need more screen (monitor) time)
Sorry to say but this workflow just isn't great unless you're working on something where AI models aren't that helpful -- obscure language/libraries/etc where they hallucinate or write non-idiomatic solutions if left to run much by themselves. In that case, you want the strong human review loop that comes from crafting the context via copy paste and inspecting the results before copying back.
For well trodden paths that AI is good at, you're wasting a ton of time copying context and lint/typechecking/test results and copying back edits. You could probably double your productivity by having an agentic coding workflow in the background doing stuff that's easy while you manually focus on harder problems, or just managing two agents that are working on easy code.
Engineer of 20 years here. All my life I've dreamed of having something I could ask general questions about a codebase and get back a cohesive, useful answer. And that future is now.
I would put it more generally: I love that one can now ask as many dumb questions as it takes about anything.
With humans there is a point where even the most patient teacher has to move on to other things. Learning is best when one is curious about something, and curiosity is more often specific. (When it's generic, one can just read the manual.)
I don't agree. I use agents all the time. I say exactly what the agent should do, but often changes need to be made in more than one place in the code base. Could I prompt it for every change, one at a time per file? Sure, but it is faster to prompt an agent for it.
At its most basic, agentic mode is necessary for building the proper context. While I might know the solution at a high level, I need the agent to explore the code base to find things I reference and bring them into context before writing code.
Agentic mode is also super helpful for getting LLMs from "99%" correct code to "100%" correct code. I'll ask them to do something to verify their work. This is often when the agent realizes it hallucinated a method name or used a directionally correct, but wrong column name.
"Novice mode" has always been true for the newcomer. When I was new, I really was at the mercy of:
1) Authority (whatever a prominent evangelist developer was peddling)
2) The book I was following as a guide
3) The tutorial I was following as a guide
4) The consensus of the crowd at the time
5) Whatever worked (SO, brute force, whatever library, whatever magic)
It took a long ass time before I got to throw all five of those things out (throw the map away). At the moment, #5 on that list is AI (whatever works). It's a Rite of Passage, and because so much of being a developer involves autodidacticism, this is a valley you must go through. Even so, it's pretty cool when you make it out of that valley (you can do whatever you want without any anxiety about is this the right path?). You are never fearful or lost in the valley(s) for the most part afterward.
Most people have not deployed enough critical code that was mostly written with AI. It's when that stuff breaks, and they have to debug it with AI, that's when they'll have to contend with the blood, sweat, and tears. That's when the person will swear that they'll never use AI again. The thing is, we can never not use AI ever again. So, this is the trial by fire where many will figure out the depth of the valley and emerge from it with all the lessons. I can only speculate, but I suspect the lessons will be something along the lines of "some things should use less AI than others".
I think it's a cool journey, best of luck to the AI-first crowd, you will learn lessons the rest of us are not brave enough to embark on. I already have a basket of lessons, so I travel differently through the valley (hint: My ship still has a helm).
> that's when they'll have to contend with the blood, sweat, and tears.
Or, most software will become immutable. You'll just replace it.
You'll throw away the mess, and let a newer LLM build a better version in a couple of days. You ask the LLM to write down the specs for the newer version based on the old code.
If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
Do we really want that? To be beholden to the hands of a few?
Hell, you can't even purchase a GPU with high enough VRAM these days for an acceptable amount of money, in part because of geopolitics. I wonder how many more restrictions are to come.
There's a lot of FOMO going around, those honing their programming skills will continue to thrive, and that's a guarantee. Don't become a vassal when you can be a king.
The scenario you paint sounds very implausible for non-trivial applications, but even if it ends up becoming the development paradigm, I doubt anyone will be "left behind" as such. People will have time to re-skill. The question is whether some will ever want to or would prefer to take up woodworking.
Whether one takes up woodworking or not depends on whether development was primarily for profit, with little to no intrinsic enjoyment of the role.
Coding and woodworking are similar from my perspective; they are both creative arts. I like coding in different languages, and woodworking is simply a physical manifestation of the same thing. A world where you only need agents is not a world where nerds will be employed. Traditional nerds can't stand out from the crowd anymore.
This is peak AI; it only goes downhill from here in terms of quality, and the AI-first workflows will be replaceable. Those offshored teams that we have suffered with for years will be the ones primarily replaced (the Google-first programmers), and developers will continue, working around the edges. The difference will be that startups won't be able to use technology hoarding to stifle competition, unless they make themselves immune to the AI vacuums.
I can appreciate the comments further up about how AI can help unravel the mysteries of a legacy codebase. Being able to ask questions about code in quick succession means we will feel more confident. But AI is lossy, hard to direct, yet always very confident. We have 10k-line functions in our legacy code that nest and nest. How confident are you letting AI refactor this code without oversight and shipping it to a customer? Thus far I'm not; maybe I don't know the best model and tools to use and how to apply them, but even if one of those logic branches gets hallucinated, I'm in for a very bumpy ride. Watching non-technical people at my org get frustrated and stuck with it in a loop is a lot more common than the successes, which seems to be the opposite of the experienced engineers who use it as a tool, not a savior. But every situation is different.
If you think your company can be a differentiator in the market because it has access to the same AI tools as every other company? Well, we'll see about that. I believe there has to be more.
I'm an experienced engineer of 30+ years. Technology comes and goes; AI is just another tool in the chest. I use it primarily because I don't have to deal with ads. I also use it to be an electrical engineer, designing circuits in areas I am not familiar with. I can see very simply the novice side of the coin: it feels like you have superpowers because you just don't know enough about the subject to be aware of anything else. It has sped up the learning cycle considerably because of its conversational nature. After a few years of projects, I know how to ask better questions to get better results.
> If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
Not even, devoured might be more apt. If I'm manually moving through this valley and a flood is coming through, those who are sticking automatic propellers and navigation systems on their ship are going to be the ones that can surf the flood and come out of the valley. We don't know, this is literally the adventure. I'm personally on the side of a hybrid approach. It's fun as hell, best of luck to everyone.
It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean? These are risks we all take.
Quoting the Admiral from the StarCraft: Brood War cinematic (I'm a learned person):
> It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean?
Not sure if you drew the right conclusion from that one.
Considering that Agent Mode saves me a lot of hassle doing refactoring ("move the handler to file X and review imports", "check where this constant is used and replace it with <another> for <these cases>", etc.), I'd say you are missing the point...
I actually flip things - I do the breakdown myself in a SPEC.md file and then have the agent work through it. Markdown checklists work great, and the agent can usually update/expand them as it goes.
I think this perspective is better characterized as “solo” and not “old”. I don’t think your age is relevant here.
Senior engineers are not necessarily old but have the experience to delegate manageable tasks to peers including juniors and collaborate with stakeholders. They’re part of an organization by definition. They’re senior to their peers in terms of experience or knowledge, not age.
Agentic AIs slot into this pattern easily.
If you are a solo dev you may not find this valuable. If you are a senior then you probably do.
One benefit is when working on multiple code bases where the size of the code base is larger than the time spent working on it, so there is still a knowledge gap. Agents don't guarantee the correctness of a search the way an old search field does, but they offer a much more expressive way to do searches and queries in a code base.
Now that I think about it, I might have only ever used agents for searching and answering questions, not for producing code. Perhaps I don't trust the AI to build a good enough structure, so while I'll use AI, it is one file at a time sort of interaction where I see every change it makes. I should probably try out one of these agent based models for a throw away project just to get more anecdotes to base my opinion on.
Coding agents are the future and it's anyone's game right now.
The main reason I think there is such a proliferation is it's not clear what the best interface to coding agents will be. Is it in Slack and Linear? Is it on the CLI? Is it a web interface with a code editor? Is it VS Code or Zed?
Just like everyone has their favored IDE, in a few years time, I think everyone will have their favored interaction pattern for coding agents.
Product managers might like Devin because they don't need to set up an environment. Software engineers might still prefer Cursor because they want to edit the code and run tests on their own.
Cursor has a concept of a shadow workspace and I think we're going to see this across all coding agents. You kick off an async task in whatever IDE you use and it presents the results of the agent in an easy to review way a bit later.
As for Void, I think being open source is valuable on its own. My understanding is Microsoft could enforce license restrictions at some point down the road to make Cursor difficult to use with certain extensions.
I've tried many of these AI coding IDEs; the best ones, like RooCode, are good simply because they don't gimp your context. Modern models are already more than capable of handling many coding tasks; you just need to leave them alone and let them use their full context window, and all will go well. If you hear about a bad experience with any of these IDEs, most of the time it's because the tool is limiting the use of context or mismanaging related functions.
We think terminal tools like Claude Code are a good way for research teams to experiment with tool use (obviously pure text), but definitely don't see the terminal as the endgame for these tools.
I know some folks like using the terminal, but if you like Claude Code you should consider plugging your API key into Void and using Claude there! Same exact model and provider and price, but with a UI around the tool calls, checkpoints, etc.
That doesn't really narrow it down much, YC has backed so many AI coding tools that they've started inbreeding. PearAI (YC Fall '24) is a fork of Continue (YC Summer '23).
One of the founders here - Void will always remain open source! There are plenty of examples of an open source alternative finding its own niche (eg Supabase, Mattermost) and we don't see this being any different.
I've been at many open source meetups with YC founders and can tell you that this is not the thinking at all. Rather, the emphasis is on finding a good carve-line between the open source offering and the (eventual) paid one, so that both sides are viable and can thrive.
Most common these days is to make the paid product be a hosted version of the open source software, but there are other ways too. Experienced founders emphasize to new startups how important it is to get this right and to keep your open source community happy.
No one I've heard is treating open source like a bait and switch; quite the opposite. What is sought is a win-win where each component (open source and paid) does better because of the other.
I think there’s a general misconception out there that open sourcing will cannibalize your hosted product business if you make it too easy to run. But in practice, there’s not a lot of overlap between people who want to self-host and people who want cloud. Most people who want cloud still want it even if they can self-host with a single command.
The weird thing is, the biggest reason I don't use Cursor much is because they just distribute this AppImage, which doesn't install or add itself to the Ubuntu app menu, it just sits there and I have to do
The setuid sandbox is not running as root. Common causes:
* An unprivileged process using ptrace on it, like a debugger.
* A parent process set prctl(PR_SET_NO_NEW_PRIVS, ...)
Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted
I have to go Googling, then realize I have to run it with
Often I'm lazy to do all of this and just use the Claude / ChatGPT web version and paste code back and forth to VS code.
The effort required to start Cursor is the reason I don't use it much. VS code is an actual, bona fide installed app with an icon that sits on my screen, I just click it to launch it. So much easier. Even if I have to write code manually.
AppImageLauncher improves the AppImage experience a lot, including making sure they get added to the menu. I'm not sure if it makes launching without the sandbox easier or not.
Not only did you mess up the formatting, but you pasted a very lengthy code, generated by an LLM. Perhaps consider using a pastebin in the future, if at all.
Yup - honestly the space is so open right now that everyone is trying, haha. It's gotten quite hard to keep track of different models and their strengths/weaknesses, much less the IDE and editor space! I have no idea which of these AI editors would suit me best, and a new one comes out like every day.
I'm still in vim with copilot and know I'm missing out. Anyway I'm also adding to the problem as I've got my own too (don't we all?!), at https://codeplusequalsai.com. Coded in vim 'cause I can't decide on an editor!
There's so much happening in this space, but I still haven't seen what would be the killer feature for me: dual-mode operation in IDE and CLI.
In a project where I already have a lot of linting brought into the editor, I want to be able to reuse that linting in a headless mode: start something at the CLI, then hop into the IDE when it says it's done or needs help. I'd be able to see the conversation up to that point and the agent would be able to see my linting errors before I start using it in the IDE. For a large, existing codebase that will require a lot of guardrails for an agent to be successful, it's disheartening to imagine splitting customization efforts between separate CLI and IDE tools.
For me so far, cursor's still the state of the art. But it's hard to go all-in on it if I'll also have to go all-in on a CLI system in parallel. Do any of the tools that are coming out have the kind of dual-mode operation I'm interested in? There's so many it's hard to even evaluate them all.
I posted this the other day, but didn't get a response:
Does anyone think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? For instance, would an AI art tool with sculpting and drawing benefit from being open source?
I've talked with VCs that love open source developer tools, but they seem to hate on the idea of "open creative tools" for designers, illustrators, filmmakers, and other creatives. They say these folks don't benefit from open source. I don't quite get it, because Blender and Krita have millions of users. (ComfyUI is already kind of in that space, it's just not very user-friendly.)
Why do investors seem to want non-developer things to be closed source? Are they right?
I think it’s mostly a value-capture thing. There's more money to be made hooking devs in than broke creatives and failing studios (no offense; it just seems like creatives are getting crushed right now). In one case you're building for the tech ecosystem, in the other for the arts. VC will favor tech: higher multiples. Closed source is also more protected from theft, etc., in many cases.
But as you point out there are great solutions so it’s clearly not a dead end path.
Its agent is a lot worse than Cursor's in my experience so far. Even tab edits feel worse.
My understanding is that these are not custom models but a combination of prompting and steering. That makes Cursor's performance relative to others pretty surprising to me. Are they just making more requests? I wonder what the secret sauce is.
One thing I noticed is that there's no cost tracking, so it's very hard to predict how much you're spending. This is fine on tools like Cursor that are all inclusive, but is something that is really necessary if you're bringing your own API keys.
This is a great suggestion. We're actually storing the input/output costs of most models, but aren't computing cost estimates yet. Definitely something to add. My only hesitation is that token-based cost estimates may not be accurate (most models do not provide their tokenizers, so you have to eg. estimate the average number of characters per token in order to compute the cost, and this may vary per model).
It'd probably be useful to just show cost after the fact based on the usage returned from the API. Even if I don't know how much my first request will cost, if I know my last request cost x cents then I can probably have a good idea from there.
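Concretely, a minimal sketch of after-the-fact cost tracking (assuming the provider's API reports token usage in its response, as the OpenAI- and Anthropic-style APIs do; the model name and prices here are placeholders):

```python
# Illustrative post-hoc cost tracking from the usage a provider reports with each response.
# Prices are placeholders (USD per million tokens); real prices vary by model and change often.
PRICES_PER_MTOK = {"example-model": {"input": 3.00, "output": 15.00}}

def request_cost(model: str, usage: dict) -> float:
    p = PRICES_PER_MTOK[model]
    return (usage["prompt_tokens"] * p["input"]
            + usage["completion_tokens"] * p["output"]) / 1_000_000

running_total = 0.0
# After each chat completion:
# running_total += request_cost("example-model", response["usage"])
```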
This is very cool and I'm always happy to see more competition in this space. That said, two suggestions:
- The logo looks like it was inspired directly from the Cursor logo and modified slightly. I would suggest changing it.
- It might be wise to brand yourself as your own thing, not just an "open source Cursor". I tend to have the expectation that "open source [X]" projects are worse than "[X]". Probably unfair, I know.
Thanks for the suggestions - these issues have been a bit painful for us, and we will probably fix them in the next major update to Void.
Believe it or not, the logo similarity was actually unintentional, though I imagine there was subconscious bias at play (we created ours trying to illustrate "a slice of the Void").
A minor counterpoint, I personally like the "open source Xyz" because I instantly know what the product is supposed to do. It's also very SEO friendly because you don't know the name of the open source version before you find it, so you can Kagi/Google/DDG "open source Cursor" and get it as a top result, instead of a sea of spammy slime.
> I personally like the "open source Xyz" because I instantly know what the product is supposed to do.
But that assumes that you're already familiar with the non-open-source software referenced. I've never used Cursor so I have no idea what it can or can't do. I'm pretty sure I would never have discovered Inkscape if it had consistently been described as an “open-source Illustrator” as I've simply never used Adobe software.
I mostly use Cursor for the monthly flat pricing which allows me unlimited (slow) calls to most LLMs (Gemini 2.5 Pro, Claude 3.7, etc) without worrying about spending anything more than $20/month.
Void dev here! As others have mentioned, VSCode strongly limits the functionality that you can build as an extension. A few things we've built that aren't supported as an extension:
- the Accept|Reject UI and UX
- Cmd+K
- Control over the terminal and tabs
- Custom autocomplete
- Smaller things like ability to open/close the sidebar, onboarding, etc
It's been a lot harder to build an IDE than an extension, but we think having full control over the IDE (whether that's VSCode or something else we build in the future) will be important in the long run, especially when the next iteration of tool-use LLMs comes out (having native control over Git, the UI/UX around switching between iterations, etc).
>Smaller things like ability to open/close the sidebar
Are you sure about this one? I'm sure I have used an extension whose whole purpose was to automatically open or close the sidebar under certain conditions.
As an (ex) VSCode extension developer, VSCode really does lock down what you can do as an extension. It's well intentioned and likely led to the success of VSCode, but it's not great if you want to build entirely new UI interactions. For instance, something like the cmd-k inline generation UI in Cursor is basically impossible as a VSCode extension.
The restrictive extension ecosystem was a big part of VSCode's success. You can compare to Atom, which allowed extensions to do whatever they wanted: Atom ended up feeling exceptionally slow and bloated because extensions had full latitude to grind your IDE to a halt.
But since there seems to be a need for AI-powered forks of VS Code, it could make sense for them all to build off the same fork, rather than making their own.
Eclipse Theia can host VSCode extensions, but it also has its own extension mechanism that offers more customization, it could be a viable alternative: https://theia-ide.org/docs/extensions/
You're right that extensions do manage fine - the main differences right now are UX improvements (many of them are mentioned above). I can see the differences compounding at some point which is why we're focused on the full IDE side.
One of the big _disadvantages_ is that it prevents access to the VSCode-licensed plugins, such as the good C# LSP (seems EEE isn't completely dead). That's something to pay attention to if you're considering a fork and use an affected language.
Since these products supposedly make developers 1000x more productive it should be no problem to just re-implement those proprietary MS plugins from scratch. Right? Any volunteers...?
MS will be tuning Copilot to the point it’s the best agent for C#, for sure. It might take a little longer ofc. But Nadella mentioned to Zuck in a fireside chat that they are not happy with C# support in LLMs and that they are working on this.
Did you mean to say a debugger? That one has an open alternative (NetCoreDbg) alongside a C# extension fork which uses it (it's also what VS Codium would install). It's also what you'd use via DAP with Neovim, Emacs, etc.
Omnisharp is what the base C# extension used previously. It has been replaced by Roslyn LS (although can be switched to back still). You are talking about something you have no up-to-date knowledge of.
I wish all these companies the best and I understand why they’re forking, but personally I really don’t want my main IDE maintained by a startup, especially as a fork. I use Cursor, and I’ve run into a number of bugs at the IDE level that have nothing to do with the AI features. I imagine this is only going to get worse over time.
Of course I got downvoted (but it’s gone back to four now) because this is HN, where somehow a group of otherwise seemingly intelligent people are all patting themselves on the back about the latest Y Combinator AI slop funding.
I've just installed it and tried to have it create a hello world using gemma3:27b-it-qat through ollama but it refused to do it claiming it doesn't have access to my filesystem.
Then I opened an existing file and asked it to modify a function to return a fixed value and it did the same.
I'm an absolute newb in this space, so if I'm doing something stupid I'd appreciate help correcting it. I had already had the C/C++ extension complain that it can only be used in "proper VSCode" (I imported my settings from VSCode using the wizard), and when this didn't work either, it didn't spark joy, as Marie Kondo would say.
Please don't get me wrong, I gave this a try because I like the idea of having a proper local open source IDE where I can run my own models (even if it's slower) and have control over my data. I'm genuinely interested in making this work.
Thanks for writing! Can you try mentioning the file with "@"? Smaller models sometimes don't realize that they should look for files and folders, but "@" always gives the full context of whatever is in the file/folder directly to them.
Small OSS models are going to get better at this when there's more of a focus on tool-use, which we're expecting in the next iteration of models.
Something I was thinking — if Microsoft keeps locking things down for forks (which they sorta are), I wonder if the Void devs would ever pivot to forking other editors like Zed, or if they’re just gonna keep charging headfirst into the wave.
May I ask why did you decide against starting with (Eclipse) Theia instead of VSCode?
It's compatible but has better integration and modularity, and doing so might insulate you a bit from your rather large competitor controlling your destiny.
Or is the exit to be bought by Microsoft? By OpenAI? And thus to more closely integrate?
If you're open-source but derivative, can they not simply steal your ideas? Or will your value depend on having a lasting hold on your customers?
I'm really happy there are full-fledged IDE alternatives, but I find the hub-and-spoke model where VSCode/MS is the only decider of integration patterns is a real problem. LSP has been a race to the bottom, feature-wise, though it really simplified IDE support for small languages.
Not sure if this feedback is useful, but I personally tried Void this morning for about 10 minutes on a Flutter project (after connecting all the various extensions and keys, which was completely painless).
However, I uninstalled it due to the sounds it made! A constant clicking for some (unannounced) background work is a bizarre choice for any serious development environment.
As others have mentioned please add more docs / details to the README
I want to mention my recent frustration with Cursor and why I would love an OSS alternative that gives me control: I feel Cursor has dumped agentic capabilities everywhere, regardless of whether the user wants them or not. When I use the Ask function as opposed to Agent, it seems to still be running in an agentic loop. It takes longer to have basic conversations about high-level ideas and really kills my experience.
I hope Void doesn't become an agent dumping ground where this behavior is thrust upon the user as much as possible.
Not to say I dislike agent mode, but I like to choose when I use it.
Given that there's a dozen agentic coding IDEs, I only use Cursor because of the few features they have like auto-identification of the next cursor location (I find myself hitting tab-tab-tab-tab a lot, it speeds up repetitive edits). Are there any other IDEs that implement these QOL features, including Void (given it touts itself specifically as a Cursor alternative)?
I think QOL will shift away from your keyboard. Give Claude Code a try and you'll understand what I mean. Developer UX will shift away from traditional IDEs. At this point I could use Notepad for the type of manual work I do vs how I orchestrate Claude Code.
The reason I have never bothered with Claude Code (or even other agentic tools), is that I still code mostly by hand.
When I am using LLMs, I know exactly what the code should be and just am using it as a way to produce it faster (my Cursor rules are extremely extensive and focused on my personal architecture and code style, and I share them across all my personal projects), rather than producing a whole feature. When I try and use just the agent in Cursor, it always needs significant modifications and reorganization to meet my standards, even with the extensive rules I have set up.
Cursor appeals to me because those QOL features don't take away the actual code writing part, but instead augment it and get rid of some of the tedium.
This is a good question. Because we're open source, we will always allow you to host models locally for free, or use your own API key. This makes monetization a bit difficult in the short term. As with many devtool companies, the long-term value comes from enterprise sales.
Though, since I specifically mentioned agentic, I wanted to exclude non-agentic tools like prompt builders and context managers that you linked. :)
Reason being: my idea of agents is that they generalize well enough that workflow-based apps aren't needed anymore.
During discovery and planning phase, the agents should traverse the code base with a retrieval strategy packaged as a tool (embedded search, code-graphs, ...) and then add that new knowledge to the plan before executing the code changes.
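A hedged sketch of "a retrieval strategy packaged as a tool" (illustrative only; `embed` is a placeholder embedding function): the agent calls it during planning, and whatever comes back gets folded into the plan's context before any edits are made.

```python
# Illustrative codebase-retrieval tool an agent could call during its planning phase.
# `embed` is a placeholder embedding function; swap in any real embedding model.
from pathlib import Path

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def search_codebase(query: str, repo: str, embed, top_k: int = 5) -> list[str]:
    """Rank source files by embedding similarity to the query; the hits go into the plan."""
    query_vec = embed(query)
    scored = [(cosine(query_vec, embed(p.read_text(errors="ignore")[:4000])), p)
              for p in Path(repo).rglob("*.py")]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [str(p) for _, p in scored[:top_k]]
```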
I think it's really interesting that Void (and Zed) are both much more tastefully designed than Cursor, Windsurf or VSCode (though I wouldn't have expected VSCode to be well designed)
As a data scientist, my main gripe with all these AI-centric IDEs is that they don’t provide data centric tools for exploring complex data structures inherent to data science. AI cannot tell me about my data, only my code.
I’ll be sticking with VSCode until:
- Notebooks are first class objects. I develop Python packages but notebooks are essential for scratch work in data centric workflows
- I can explore at least 2D data structures interactively (including parquet). The Data Wrangler in VSCode is great for this
I wonder why most agentic patterns don't use multiple different retrieval strategies simultaneously, and why most of them don't use CodeGraph [1] during the discovery phase. Embeddings aren't enough, and agent-induced function/class-name search isn't enough.
A trajectory question: do we still have the debate about whether open-source software takes away SDE jobs or grows the pie to create more jobs? The booming OSS community of the past seems to have created multiple billion-dollar markets. On the other hand, we have a lot less growth now than before, and I was wondering if OSS has started to suppress the demand for SDEs.
How is it that the open source Cursor 'alternative' doesn't have a Linux option (either via AppImage, as Cursor offers, or something like a Flatpak)? I understand that open source does not automatically mean Linux, but it is, like, weird, right?
Projects like this are great because open source versions need to figure out the right way to do things, rather than the hacky, closed, proprietary alternatives that pop up first and are just trying to consume as many users as possible as quickly as they can.
In that case, a shitty, closed system is good, actually, because it's another thing your users will need to "give up" if they move to an alternative. By contrast, an open IDE like Void will hopefully make headway on an open interface between IDEs and LLM agents, in such a way that it can be adapted by Neovim people like me or anyone else for that matter.
Emacs' configurability is hard to describe to anyone who hasn't immersed themselves in that sort of environment. There's a small portion of the program written in C, but the bulk of it is written in elisp. When you evaluate elisp code, you're not in some sandboxed extension system; you're at the same level as Emacs itself. This allows you to modify nearly any aspect of Emacs.
It'd be a security nightmare if it was more popular, but fortunately the community hovers around being big enough for serious work to be done but small enough that it's not worth writing malware for.
I don't know if it's a security nightmare any more than other editors that have "plugins" (or the like).
One advantage for Emacs is that it's both easy and common to read the code of the plugins you are using. I can't tell you the last time I looked at the source code of a plugin for VS Code or any other editor. The last time I looked at the code for a plugin in Emacs was today.
I don't think it's a security nightmare per-se. Most of the time, you're not installing a lot of packages (the built-in are extensive) and most of these are small and commonly used.
It's like saying the AUR is a security nightmare. You're just expected to be an adult and vet what you're using.
I'm not sure I agree with the number and size of packages people install (unless you're comparing them to, say, org-mode), but that's not really what I'm talking about.
Emacs runs all elisp code as if it's part of Emacs. Think about what Emacs is capable of, and compare that to what a browser allows its extensions to do. No widely used software works like that because it's way too easy to abuse. Emacs gets away with it because it's not widely used.
I don't know the first thing about VSCode but I'm willing to bet there are strict limits to what its plugins are allowed to do.
I don't know if that's changed since last I wrote an extension for a web browser, but the API is pretty open for the current context (tab) that it's executing in. As long as it's part of the API, the action is doable. Same with VSCode or Sublime. Sandboxed plugins would be pretty useless.
I guess it's hard to switch from a working setup that you've invested time in.
Especially since you might not be familiar with the new one.
Personally, I'm trying out things in VS Code, just to see how they work. But when I need to work, I do it in Emacs, since I know it better.
Also, with VS Code, just while trying it out, simple things like cut & paste would stop working (they'd work when chosen from the menu, but not via the keyboard shortcuts or the mouse). You'd have to refresh the whole view or restart it for cut & paste to become available again.
It's a shame vim is so stinky, because after 15 years of using it I now find myself using VSCode. I always liked vim because editing is efficient; now I don't write as much as supervise a lot of the boilerplate code.
Over the years I have gotten better with vim, added phpactor and other tooling, but frankly I don't have time to futz and it's not as polished. With VSCode I can just work. I don't love everything about it, but it works well enough with Copilot that I forget the benefits of vim.
I get your experience, but for me using vim is perfect for code exploration. The only needed plugins are fzf.vim and vinegar. The first for fuzzy navigation and the second for quickly browsing the current directory.
The LSP experience with VSCode may be superior, but if I truly needed that, I would get an IDE and have proper IntelliSense. The LSP in Vim and Emacs is more than enough for my basic requirements, which are auto-imports, basic linting, and autocomplete to avoid misspellings. VSCode lacks Vim's agility and Emacs's powerful text tooling, and does far worse on integration.
On a tangent, I get the feeling that the more senior you are, the less likely you are to end up using one of these VIDEs. If you do use any coding assistants at all, it will mostly be for the auto-complete feature - no 'agent mode' malarkey.
Maybe it's just me, but the auto-complete is very distracting and something I avoid. Most of the time I'm fighting it, deleting or denying its suggestions, and it throws me out of flow.
From what I've seen, most senior/staff-level engineers are working for big corps which have limited contracts with providers like Github Copilot, which until recently only gave access to autocomplete.
I prefer the web-based interface. It feels like my choice to reference a tool. It's easy to run multiple chats at once, running prompts against multiple models.
That's very interesting. This is certainly what I was doing before Copilot. Now I let it autocomplete but only sometimes when it makes sense. I guess I am used to the keybinds so that I can undo if I don't like it.
When I was reading your comment I thought that there is a space for an out-of-flow coding assistant, i.e. rather than deploy an entire IDE with extension, the assistant can be just a floating window (I guess chatgpt does that) and is able to dive in and out or just suggest as you type along.
When browsing a GitHub repo, there's an option for "assistive chat" with copilot. -- I've found this a useful interface to get the LLM to answer quick questions about the repository without having to dig through myself.
Beyond autocomplete, I've found the LLM to be useful in some cases: sometimes you'll want to make edits which are quite automatic, but can't quite be covered by refactoring tools, and are a bit more involved than search/replace. LLMs can handle that quite well.
For unfamiliar domains, it's also useful to be able to ask an LLM to troubleshoot / identify problems in code.
Early on, when Sonnet 3.5 was the best coding model, lots of people used them because of the rate limits on Anthropic's own API. So that's a plus. There's also the ease of use: one key for every model out there, and you get to choose providers for things like DeepSeek / Qwen / Llama if those suit your needs.
I subscribed to Void's mailing list long ago to be notified once the alpha opened, but I've never received anything. I forgot about it until today.
Really interesting from a 'in a bubble' point of view. I've been using Void for the past few weeks as a replacement for Bolt, Lovable, Tempo and the rest. Which is nothing like the use cases mentioned in this thread. Just shows how we're each focused on different parts of an environment? Of course I'm not a programmer, I'm just a slash-and-hack vibe coder. :)
For the record, I really like Void. It's great at utilising local models, which no one else does. Although I'd love to know which are the best Ollama local coding models. I've failed with a few, so for the moment I'm sticking to Sonnet 3.7 and GPT-4.1, with o3 as the 'big daddy'. :)
Anyway, I didn't know what your service was trying to do so I clicked on the homepage, clicked Sources to see what else was there, it cited <https://extraakt.com/extraakts?source=reddit#:~:text=Open-so...> but the hyperlink took me back to the HN thing, which defeats the purpose of having a source picker
Mandatory reminder that "agentic coding" works way worse than just using the LLM directly, steering it as needed, filling the gaps, and so forth. The value is in the semantic capabilities of the LLM itself: wrapping it will make it more convenient to use, but always less powerful.
I beg to disagree, Salvatore... Have a go at VS Code with Agent mode turned on (you'll need a paid plan to use Claude and/or Gemini, I think). It gets me out of vim, so yeah, it's that good. :)
Tip: Write a SPEC.md file first. Ask the LLM to ask _you_ about what might be missing from the spec, and once you "agree" ask it to update the SPEC.md or create a TODO.md.
Then iterate on the code. Ask it to implement one of the features, hand-tune and ask it to check things against the SPEC.md and update both files as needed.
Works great for me, especially when refactoring--the SPEC.md grounds Claude enough for it not to go off-road. Gemini is a bit more random...
Interacting with the LLM directly is more work. What I mean is that, at best, a wrapper will not damage the quality of the LLM itself too much. In the chat, if you are a very experienced coder, you continuously steer the model away from suboptimal choices, and with a fix here and a fix there you continuously avoid local minima. After a few iterations you find that your whole project is a lot better designed than it would have been otherwise.
Oh, sure. I don't rely on the LLM to write all the code. I do "trust" it to refactor things to a shape I want, and to automate chores like moving chunks of code around or build a quick wrapper for a library.
One thing I particularly don't like about LLMs is that their Python code feels like Java cosplay (full of pointless classes and methods), so my SPEC.md usually has a section on code style right at the start :)
they always start as open source to bait users. how long until this one also turns into BaitWare? I hope it won't since it's backed by Y Combinator and has an Apache 2 license.
(Edit: the parent comment was edited to add "I hope it won't since it's backed by Y Combinator and has an Apache 2 license." - that's a good redirection, and I probably wouldn't have posted a mod reply if the comment had had that originally.)
(Btw if your comment already has replies, it is good to add "Edit:" or something if you're changing it in a way that will alter the context of replies.)
---
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
They first need to substantially grow the user base, as we saw with OpenWebUI; only then do they make an Enterprise offering and switch the license overnight.
Yes, they've modified the licence to require preserving their branding. I guess it's an anti-fork measure, you'd have to infringe either on their licence or on their trademark.
The best reasons I've seen mentioned by the founder in this thread are showing/hiding panels and the onboarding flow. Those are things you can't do with a plugin. I personally also like Cursor's diff view way better than Continue's, and maybe that's because a fork gives more control there.
If I move off Cursor, it's def not going to be to another vs-code derivative.
Zed has it right - build it from the ground up, otherwise, MS is going to kneecap you at some point.
Zed didn't build from the ground up though. I mean, they did for a lot of stuff, but crucially they decided to rely on the LSP ecosystem so most of the investment in improving Zed is also a direct investment in improving VSCode.
If you can't invest in yourself without making the same size investment in your competitor, you probably have no path to actually win out over that competitor.
Additionally, Zed is written in Rust and has robust hardware-accelerated rendering. This has a tangible feel that other editors do not. It feels just so smooth, unlike clunky heavyweight JetBrains products. And it feels solid and sturdy, unlike VS Code buffers, which feel like creaky webviews.
But it's a different take: Brokk is built to let humans supervise AI more effectively rather than optimizing for humans reading and writing code by hand. So it's not a VS Code fork; it's not really an IDE in the traditional sense at all.
1. Create a branch called TaskForLLM_123
2. Add a file with text instructions called Instructions_TaskForLLM_123.txt
3. Have a GitHub action read the branch, perform the task and then submit a PR.
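For illustration only, here is a hedged Python sketch of the script such an Action might invoke. The instructions file name follows the pattern above; the target file, the model name, the single-file edit format, and the use of the gh CLI to open the PR are all assumptions.

```python
# Hypothetical sketch of the script a GitHub Action could run on a TaskForLLM_*
# branch: read the instructions file, ask a model for a rewrite, commit the
# result, and open a PR with the gh CLI. Everything beyond the file naming
# convention from the comment above is an assumption for illustration.
import glob, pathlib, subprocess
from openai import OpenAI

def run(*cmd):
    subprocess.run(cmd, check=True)

instructions = pathlib.Path(glob.glob("Instructions_TaskForLLM_*.txt")[0]).read_text()
target = pathlib.Path("src/main.py")           # assume the task names a single file to edit

client = OpenAI()                              # reads OPENAI_API_KEY from the runner env
resp = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "Rewrite the file to satisfy the instructions. Reply with the full file only."},
        {"role": "user", "content": f"Instructions:\n{instructions}\n\nCurrent file:\n{target.read_text()}"},
    ],
)
target.write_text(resp.choices[0].message.content)

run("git", "commit", "-am", "LLM edit for task")
run("git", "push", "origin", "HEAD")
run("gh", "pr", "create", "--fill")            # gh CLI opens the PR against the default branch
```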
I’ve seen people do this with Claude Code to great success (not in a GH Action). Even multiple sessions concurrently. Token budget is the limit, obviously.
Watched your Youtube. I love this - will try it out and give it to our team. This is effectively the "full mode" version of the mode I currently use Cursor for.
But I do stand by the point. We are seeing umpteen of these things launched every week now, all with the exact same goal in mind; monetizing a thin layer of abstraction between code repos and model providers, to lock enterprises in and sell out as quickly as possible. None of them are proposing anything new or unique above the half dozen open source extensions out there that have gained real community support and are pushing the capabilities forward. Anyone who actually uses agentic coding tools professionally knows that Windsurf is a joke compared to Cline, and that there is no good reason whatsoever for them to have forked. This just poisons the well further for folks who haven't used one yet.
Yes, the high-order bit is to avoid snark, so thanks about that. And it's clear that you know about this space and have good information and thoughts to contribute—great!
I would still push back on this:
> all with the exact same goal in mind
It seems to me that you're assuming too much about other people's intentions, jumping beyond what you can possibly know. When people do that to reduce things to a cynical endstate before they've gotten off the ground, that's not good for discussion or community. This is part of the reason why we have guidelines like these in https://news.ycombinator.com/newsguidelines.html:
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
The time to sell a VSCode fork for 3B was a week ago. If someone wants to move off of VSCode, why would they move to a fork of it instead of to Zed, JetBrains, or a return to the terminal?
Next big sale is going to be something like "Chrome Fork + AI + integrated inter-app MCP". Brave is eh, Arc is being left to die on its own, and Firefox is... doing nothing.
Feedback 1: The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
As a non-Cursor user who does AI programming, there is nothing there to make me want to try it out.
Feedback 2: I feel any new agentic AI tool for programming should have a comparison against Aider[1] which for me is the tool to benchmark against. Can you give a compelling reason to use this over Aider? Don't just say "VSCode" - I'm sure there are extensions for VSCode that work with Aider.
As an example of the questions I have:
- Does it have something like Aider's repomap (or better)?
- To what granularity can I limit the context?
[1] https://aider.chat/
Thanks for the feedback. We'll definitely add a feature list. To answer your question, yes - we support Cursor's features (quick edits, agent mode, chat, inline edits, links to files/folders, fast apply, etc) using open source and openly-available models (for example, we haven't trained our own autocomplete model, but you can bring any autocomplete model or "FIM" model).
We don't have a repomap or codebase summary - right now we're relying on .voidrules and Gather/Agent mode to look around to implement large edits, and we find that works decently well, although we might add something like an auto-summary or Aider's repomap before exiting Beta.
Regarding context - you can customize the context window and reserved amount of token space for each model. You can also use "@ to mention" to include entire files and folders, limited to the context window length. (you can also customize the model's reasoning ability, think tags to parse, tool use format (gemini/openai/anthropic), FIM support, etc).
An important Cursor feature that no one else seems to have implemented yet is documentation indexing. You give it a base URL and it crawls and generates embeddings for API documentation, guides, tutorials, specifications, RFCs, etc in a very language agnostic way. That plus an agent tool to do fuzzy or full text search on those same docs would also be nice. Referring to those @docs in the context works really well to ground the LLMs and eliminate API hallucinations
Back in 2023 one of the cursor devs mentioned [1] that they first convert the HTML to markdown then do n-gram deduplication to remove nav, headers, and footers. The state of the art for chunking has probably gotten a lot better though.
[1] https://forum.cursor.com/t/how-does-docs-crawling-work/264/3
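A rough sketch of that pipeline, assuming the html2text library for the HTML-to-markdown step; the n-gram size and the "appears on most pages" threshold are guesses, not Cursor's actual parameters.

```python
# Rough sketch of the approach described above: convert crawled pages to
# markdown, then drop lines whose word n-grams show up across most pages
# (navigation, headers, footers). The 0.6 ratio and 5-word n-grams are guesses.
from collections import Counter
import html2text

def ngrams(line, n=5):
    words = line.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def dedupe(pages_html, boilerplate_ratio=0.6):
    pages = [html2text.html2text(h).splitlines() for h in pages_html]
    counts = Counter(g for lines in pages for line in lines for g in ngrams(line))
    cutoff = boilerplate_ratio * len(pages)
    cleaned = []
    for lines in pages:
        kept = [l for l in lines
                if not ngrams(l) or max(counts[g] for g in ngrams(l)) < cutoff]
        cleaned.append("\n".join(kept))
    return cleaned
```

In practice you would want per-page rather than global counting and some care around code blocks, but the core idea of dropping high-frequency n-gram lines is just this.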
The continue.dev plugin for Visual Studio Code provides documentation indexing. You provide a base URL and a tag. The plugin then scrapes the documentation and builds a RAG index. This allows you to use the documentation as context within chat. For example, you could ask @godotengine what is a sprite?
So this is why everything is going behind Anubis then?
Nah, Anubis combats systematic scraping of the web by data scrapers, not actual user agents.
A scraper in this case is the agent of the user. Doesn't make it not a scraper that can and will get trapped.
Cursor’s doc indexing is acc one of the few AI coding features that feels like it saves time. Embedding full doc sites, deduping nav/header junk, then letting me reference @docs inline actually improves context grounding instead of guessing APIs.
Just use the Context7 MCP? Actually, I'm assuming Void supports MCP.
Context7 is missing lots of info from the repos it indexes and is getting bloated with similar-sounding repos, which is becoming confusing for LLMs.
Can you elaborate on how Context7 handles document indexing or web crawling? If I connect to the MCP server, will it be able to crawl websites fed to it?
Agreed - this is one of the better solutions today.
This is a good point. We've stayed away from documentation assuming that it's more of a browser agent task, and I agree with other commenters that this would make a good MCP integration.
I wonder if the next round of models trained on tool-use will be good at looking at documentation. That might solve the problem completely, although OSS and offline models will need another solution. We're definitely open to trying things out here, and will likely add a browser-using docs scraper before exiting Beta.
I agree that on the face of it this is extremely useful. I tried using it for multiple libraries, though, and it was a complete failure; it failed to crawl fairly standard MkDocs and Sphinx sites. I guess it's better for the 'built in' ones that they've pre-indexed.
I use it mostly to index stuff like Rust docs on docs.rs and rendered mdbooks. The RAG is hit or miss but I haven’t had trouble getting things indexed.
Do you support @Docs?
https://docs.cursor.com/context/@-symbols/@-docs
I've used both Cursor and Aider but I've always wanted something simple that I have full control on, if not just to understand how they work. So I made a minimal coding agent (with edit capability) that is fully functional using only seven tools: read, write, diff, browse, command, ask, and think.
I can just disable `ask` tool for example to have it easily go full autonomous on certain tasks.
Have a look at https://github.com/aperoc/toolkami to see if it might be useful for you.
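Not the toolkami code, just a minimal sketch of the kind of loop such a small tool set enables (diff and browse omitted for brevity, and llm is a stand-in for whatever model call you use):

```python
# Minimal sketch of a tool-dispatch loop: the model picks a tool and arguments
# as JSON, the host executes it, and the result goes back into the conversation.
import json, subprocess, pathlib

def read(path):        return pathlib.Path(path).read_text()
def write(path, text): pathlib.Path(path).write_text(text); return "ok"
def command(cmd):      return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
def ask(question):     return input(f"{question} > ")      # disable this one for full autonomy
def think(note):       return "noted"                      # scratchpad only, no side effects

TOOLS = {"read": read, "write": write, "command": command, "ask": ask, "think": think}

def agent_loop(llm, task, max_steps=20):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm(history)                 # expected: '{"tool": ..., "args": {...}}' or '{"done": ...}'
        call = json.loads(reply)
        if "done" in call:
            return call["done"]
        result = TOOLS[call["tool"]](**call["args"])
        history += [{"role": "assistant", "content": reply},
                    {"role": "user", "content": f"tool result: {result}"}]
```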
Will check this out. I like to have a bit more control over my stack if possible.
> The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
That's all on the website, not the README, but yes, a bulleted list or identical info from the site would work well.
Am I the only one that has had bad experiences with aider? For me each time I've tried it, I had to wrestle with and beg the AI to do what I wanted it to do, almost always ending in me just taking over and doing it myself.
If nearly every time I use it to accomplish something it gets it 40-85% correct and I have to go in to fix the remaining 15-60%, what is the point? It's as slow as hand-writing code then, if not slower, and my flow with Continue is simply better:
1. Ctrl+L a block of code
2. Ask a question or give a task
3. Read what it says, then apply the change myself with Ctrl+C, tweaking the one or two little things it inevitably misunderstood about my system and its requirements
Aider is quite configurable, you need to look at the leaderboard and copy one of the high performing model/config setups. Additionally, you need to autoload files such as the readme and coding guidelines for your project.
Aider's killer features are integration of automated lint/typecheck/test and fix loops with git checkpointing. If you're not setting up these features you aren't getting the full value proposition from it.
Never used the tool. But it seems both aider and Cursor are not at their strongest out of the box? I read a similar thing about Cursor and doing custom configuration so it picks up coding guidelines, etc. Is there some kind of agreed, documented best-practice standard, or just trial-and-error best practices from users sharing these?
Aider's leaderboard is a baseline "best practice" for model/edit format/mode selection. Beyond that, it's basically whatever you think are best practices in engineering and code style, which you should capture in documents that can serve double duty both for AI and for human contributors. Given that a lot of this stuff is highly contentious, it's really up to you to pick and choose what you prefer.
That depends on models you use and your prompts.
Use gemini-2.5pro or sonnet3.5/3.7 or gpt-4.1
Be as specific and detailed in your prompts as you can. Include the right context.
And what do you do if you value privacy and don't want to share everything in your project with Silicon Valley, or don't want to spend $8/hr to watch Claude do your hobby for you?
I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
If I have to write a prompt as long as Claude's system prompt in order to get reliable edits, I'm not sure I've saved any time at all.
At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I haven't used local models. I don't have the 60+ GB of VRAM to do so.
I've tested aider with Gemini 2.5 with prompts as basic as 'write a TS file with Puppeteer to load this URL, click on the button identified by x, fill in input y, loop over these URLs' and it performed remarkably well.
LLM performance is 100% dependent on the model you're using, so you can hardly generalize from a small model you run locally on a CPU.
Local models just aren't there yet in terms of being able to host locally on your laptop without extra hardware.
We're hoping that one of the big labs will distill an ~8B to ~32B parameter model that performs at SOTA on benchmarks! This would be huge for cost and would probably make it reasonable for most people to code with agents in parallel.
This is exactly the issue I have with copilot in office. It doesn't learn from my style so I have to be very specific how I want things. At that point it's quicker to just write it myself.
Perhaps when it can dynamically learn on the go this will be better. But right now it's not terribly useful.
I really wonder why dynamic learning hasn't been explored more. It would be a huge moat for the labs (everyone would have to host and dynamically train their own model with a major lab). Seems like it would make the AI way smarter too.
> At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I don't see it as an admission. I'd wager 99% of Aider users simply haven't tried local models.
Although I would expect they would be much worse than Sonnet, etc.
> I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
Examples? Aider is a great tool and much (probably most) of it is written by AI.
> or use the "right" prompt. Give some examples.
There's no such thing as a "right prompt". It's all snake oil. https://dmitriid.com/prompting-llms-is-not-engineering
Is this post just you yelling at the wind? What does this have to do with the post you replied to?
It feels like everyone and their mother is building coding agents these days. Curious how this compares to others like Cline, VS Code Copilot's Agent mode, Roo Code, Kilo Code, Zed, etc. Not to mention those that are closed source, CLI based, etc. Any standout features?
Void dev here! The biggest players in AI code today are full IDEs, not just extensions, and we think that's because they simply feel better to use by having more control over the UX.
There are certainly a lot of alternatives that are plugins(!), but our differentiation right now is being a full open source IDE and having all the features you get out of the big players (quick edits, agent mode, autocomplete, checkpoints).
Surprisingly, all of the big IDEs today (Cursor/Windsurf/Copilot) send your messages through their backend whenever you send a message, and there is no open source full IDE alternative (besides Void). Your connection to providers is direct with Void, and it's a lot easier to spin up your own models/providers and host locally or use whatever provider you want.
We're planning on building Git branching for agents in the next iteration when LLMs are more independent, and controlling the full IDE experience for that will be really important. I worry plugins will struggle.
> and there is no open source full IDE alternative (besides Void).
And Zed: https://zed.dev
Yesterday on the front page of HN:
https://news.ycombinator.com/item?id=43912844
this joke could not have been more perfectly set up if it were staged. thanks for the guffaw.
I should have been more careful with my wording - I was talking about major VS Code-based IDEs as alternatives. Zed is very impressive, and we've been following them since before Void's launch!
And Emacs, also mentioned in that thread (by me, but still).
Maybe I live in a bubble, but it's surprising to me that nobody mentions Jetbrains in all these discussions. Which in my professional working experience are the only IDEs anyone uses :shrug:
I'm not sure I've met a JetBrains user on projects I've worked on. It's a paid product, so it just has a small user base.
Here are some numbers on their user base in real number and dollar amounts: https://www.jetbrains.com/lp/annualreport-2024/
Their tools are wildly popular in many spaces. It isn't for everyone though. It's totally believable in your circle no one uses their tools, but it isn't niche.
Funny enough, I know a lot of people who work at JetBrains, but only a few end-users.
Their user base is completely different. And we're both in a bubble, I reckon. IntelliJ people also only know a few VSCode users!
Pycharm is extremely popular in the data science world. The Community Edition is free and has 99% of the features most people need. Even when developing with Cursor, I find myself going back to Pycharm just to use the debugger, which I greatly prefer to the debugger used in these VS Code forks.
Lately I've only tried CLion, which has no free version. Personally, it didn't function as well as VSCode for C++.
You may have missed it but they did release a free version of clion recently (for personal use). https://news.ycombinator.com/item?id=43914705
> The biggest players in AI code today are full IDEs, not just extensions,
Claude Code (neither IDE nor extension) is rapidly gaining ground, its biggest current limitation being cost, which is likely to get resolved sooner rather than later (Gemini Code, anyone?). You're right about right now, but with the pace at which things are moving, the trends are honestly more relevant than the status quo.
Just want to share our thinking on terminal-based tools!
We think in 1-2 years people will write code at a systems level, not a function level, and it's not clear to us that you can do that with text. Text-based tools like Claude Code work in our text-based-code systems today, but I think describing algorithms to a computer in the future might involve more diagrams, and terminal will not be ideal. That's our reasoning against building a tool in the terminal, but it clearly works well today, and is the simplest way for the labs to train/run terminal tool-use agents.
Diagrams are great at providing a simplified view of things but they suck ass when it comes to providing details.
There's a reason why fully creating systems from them died 20 years ago - and it wasn't just because the code gen failed. Finding a bug in your spec when it's a mess of arrows and connections can be nigh impossible.
Go image search "complex unreal blueprint".
This is completely true, and it's a really common objection.
I don't imagine people will want to fully visualize codebases in a giant unified diagram, but I find it hard to imagine that we won't have digests and overviews that at least stray from plaintext in some way.
I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
> I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
Sounds exactly like what DeepWiki is doing from the Devin AI Agent guys: https://deepwiki.com
Terminals aren't too far away from evolving [0] beyond UTF-8 characters. Therefore I suspect IDEs and CLIs will continue their turf wars as always.
> hard to imagine that we won't have digests and overviews
100% agreed here.
Disclosure: I'm the author of the project below.
[0] https://terminal.click
You have lost all connection to reality.
Hey by the way I hear all communication between people is going to shift to pictograms soon. You know -- emoji and hieroglyphs. Text just isn't ideal, you know
Every system can be translated to text, though. If there is one thing LLMs have essentially always been good at, it is processing written language.
> Claude Code (neither IDE nor extension) is rapidly gaining ground
What makes you say that? From what I’m observing it doesn’t seem to be talked much about at all.
Spending too much time on HN and other spaces (including offline) where people talk about what they're doing. Making LLM-based things has also been my job since pretty much the original release of GPT3.5 which kicked off the whole industry, so I have an excuse.
The big giveaway is that everyone who has tried it agrees that it's clearly the best agentic coding tool out there. The very few who change back to whatever they were using before (whether IDE fork, extension or terminal agent), do so because of the costs.
Relevant post on the front page right now: A flat pricing subscription for Claude Code [0]. The comment section supports the above as well.
[0] https://news.ycombinator.com/item?id=43931409
I personally tried it and found it way more confusing to use compared to Cursor with Claude 3.7 Sonnet. The CLI interface seems to lend itself more to «vibe coding», where you never actually work with and look at the code. That is why I think Cursor and IDEs are more popular than CLI-only tools.
Claude Code's announcement earned 2k+ points on HN when it launched (7th most popular HN submission this year).
https://hn.algolia.com/?q=claude+code
Together with 3.7 Sonnet. And the claim was that it is rapidly gaining ground, not that it sparked initial interest. I still don’t see much proof of adoption. This is actually the first I’ve heard about anyone actually actively using it since its launch.
> This is actually the first I've heard about anyone actually actively using it
I've been reaching for Claude Code first for the last couple weeks. They had offered me a $40 credit after I tried it and didn't really use it, maybe 6 weeks ago, but since then I've been using it a lot. I've spent that credit and another $30, and it's REALLY good. One thing I like about Claude Code is you can "/init" and it will create a "CLAUDE.md" that saves off its understanding of the code, and then you can modify it to give it some working knowledge.
I've also tried Codex with OpenAI and o4-mini, and it works very well too, though I have had it crash on me which claude has not.
I did try Codex with Gemini 2.5 Pro Preview, but it is really weird. It seems to not be able to do any editing, it'll say "You need to make these edits to this file (and describe high level fixes) and then come back when you're done and I'll tell you the edits to do to this other file." So that integration doesn't seem to be complete. I had high hopes because of the reviews of the new 2.5 Pro.
I also tried some Claude-like use in the AI panel in Zed yesterday, and made a lot of good progress; it seemed to work pretty well, but then at some point it zeroed out a couple of files. I think I might have reached a token limit; it was saying "110K out of 200K" but then something else said "120K" and I wonder if that confused it. With Codex you can compact the history; I didn't see that in Zed. Then at some point my Zed switched from editing to needing me to accept every change. I used nearly the entire trial Zed allowance yesterday asking it to implement a Galaga-inspired game, with varying success.
I don't know Claude Code, so if it's "neither IDE nor extension", what is it?
It's a CLI tool
The versioning and git branching sounds really neat, I think! Can you say more about that? Curious if you've looked at/are considering using Jujutsu/JJ[0] in addition or instead of git for this, I've played with it some, but been considering trying it more with new AI coding stuff, it feels like it could be a more natural fit than actually creating explicit commits for every change, while still tracking them all? Just a thought!
[0] https://github.com/jj-vcs/jj
Interesting, thanks for sharing! We planned on spinning up a new Git branch and shallow Git clone (or possibly worktree/something more optimized) for each agent, and also adding a small auto-merge-with-LLM flow, although something more granular like this might feel better. If we don't use a versioning tool like JJ at first (may just use Git for simplicity at first), we will certainly consider it later on, or might end up building our own.
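For what it's worth, a minimal sketch of the "one isolated checkout per agent" idea using git worktrees; whether this ends up being worktrees, shallow clones, or jj is an open question above, so this is just the worktree variant.

```python
# Sketch: give each agent its own branch and working directory while sharing one
# object store, then merge the branch back (conflicts could be handed to an LLM).
import subprocess, pathlib

def spawn_agent_workspace(repo: str, agent_id: str) -> pathlib.Path:
    branch = f"agent/{agent_id}"
    workdir = pathlib.Path(repo).parent / f"agent-{agent_id}"
    subprocess.run(["git", "-C", repo, "worktree", "add", "-b", branch, str(workdir)], check=True)
    return workdir

def merge_agent_branch(repo: str, agent_id: str):
    subprocess.run(["git", "-C", repo, "merge", "--no-ff", f"agent/{agent_id}"], check=True)
    subprocess.run(["git", "-C", repo, "worktree", "remove", f"../agent-{agent_id}"], check=True)
```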
I agree the branching sounds super cool!
If you're open to something CLI-based, my project Plandex[1] offers git-based branching (and granular versioning) for AI coding. It also has a sandbox (also built on git) that keeps cumulative changes separate from project files until they're ready to apply.
1 - https://github.com/plandex-ai/plandex
Isn't continue.dev also open source and not using 'their backend' when sending stuff? I haven't used it in a while, but I know it had support for Llama, local models for tab completions, etc.
Continue is doing great work, but they're an extension (plugin)!
What’s wrong with a plugin? I don’t see the benefit of an IDE over a plugin.
The extensions API lets you control the sidebar, but you basically don't have control over anything in the editor. We wouldn't have been able to build our inline edit feature, or our navigation UI if we were an extension.
Continue.dev is an extension and it does inline edits just fine in VS Code and IntelliJ.
Big fan of Continue btw! There's a small difference in how we handle inline edits - if you've used inline edits in Cursor/Windsurf/Void you'll notice that a box appears above the text you are selecting, and you can type inside of it. This isn't possible with VS Code extensions alone (you _have_ to type into the sidebar).
Are inline edits the same as diff edits? In that case I think Cline and Roo can do it as well.
If I understand your question correctly - Cline and Roo both display diffs by using built-in VS Code components, while Cursor/Windsurf/Void have built their own custom UI to display diffs. Very small detail, and just a matter of preference.
It's about whether the tool can edit just a few lines of the file, or whether it needs to stream the whole file every time - in effect, editing the whole file even though the end result may differ by just a few lines.
I think editing just a part of the file is what Roo calls diff editing, and I'm asking if this is what the person above means by line edits.
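A rough illustration of the difference, using a generic SEARCH/REPLACE convention rather than any specific tool's format:

```python
# A "diff edit" replaces only the span the model names; whole-file editing makes
# the model re-emit every line even if only two of them changed.
import pathlib

def apply_diff_edit(path, search_block, replace_block):
    text = pathlib.Path(path).read_text()
    if search_block not in text:
        raise ValueError("search block not found -- ask the model to retry")
    pathlib.Path(path).write_text(text.replace(search_block, replace_block, 1))

def apply_whole_file_edit(path, new_contents):
    pathlib.Path(path).write_text(new_contents)
```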
I think it'd be worthwhile to call out in a FAQ/comparison table specifically how something like an "AI powered IDE" such as Cursor/Void differs from just using an IDE + a full-featured agentic plugin (VS Codium + Cline).
I agree, having used Cline I am not sure what advantages this would offer, but I would like to know (beyond things like “it’s got an open source ide” - Cline has those too specifically because I can use it in my open source ide)
I think it's worth mentioning that the Theia IDE is a fully open source VS Code-compatible IDE (not a fork of VS Code) that's actively adding AI features with a focus on transparency and hackability.
We considered Theia, and even building our own IDE, but obviously VSCode is just the most popular. Theia might be a good play if Microsoft gets more aggressive about VSCode forks, although it's not clear to us that people will be spending their time writing code in 1-2 years. Chances are definitely not 0 that we end up moving away from VSCode as things progress.
It's the most popular because the tech is decades old. You're all rushing to copy obsolete technology. Now we have 10 copies of an obsolete technology.
We used to know better
I mean I guess I should thank the 10 teams who forked VSCode for proving beyond all reasonable doubt that VSCode is architecturally obsolete. I was already trying to make that argument, but the 10 forks do it so much better.
So this is closer to Zed than Cursor/Windsurf/Continue, right?
edit: ahh just saw that it is also a fork of VS Code, so it is indeed OSS Cursor
Yep, Void is a VSCode fork, but we're definitely not wed to VSCode! Building our own IDE/browser-port is not out of the picture. We'll have to see where the next iteration of tool-use agents takes us, but we strongly feel writing typescript/rust/react is not the endgame when describing algorithms to a computer, and a text-based editor might not be ideal in 10 years, or even 2.
OpenAI chose to acquire Windsurf for $3B instead of building something like Void, a very curious decision. Awesome project; will be closely following this.
>> The biggest players in AI code today are full IDEs, not just extensions
Are you sure? I have some expertise with my IDE, some other extension which solve problems for me, a wide range of them, I've learnt shortcuts, troubleshooting, where and who ask for help, but now you're telling me that I am better off leaving all that behind, and it's better for me? ;o
My 2c: I rarely need agent mode. As an older engineer, I usually know exactly what needs to be done and have no problem describing to the LLM what to do to solve what I'm aiming at. Agent mode seems to be more for novice developers who are unsure how tasks need to be broken down and the strategy by which they are then solved.
I’m a senior engineer and I find myself using agents all the time. Working on huge codebases or experimenting with different languages and technologies makes everybody “novice”.
Can you give some examples of how you use it? I'm used to asking for very specific things, but less so full on agent mode.
Agent mode seems to be better at realizing all the places in the code base that need to be updated, particularly if the feature touches 5+ files, whereas the editor starts to struggle with features that touch 2-3 files. "Every 60 ticks, predict which items should get cached based on the user's direction of travel, then fetch, transform, and cache them. When new items need to be drawn, check the cache first and draw from there; otherwise fetch and transform on demand." This touches the core engine, user movement, file operations, graphics, etc., and agent mode seems to have no problem with it at all.
Personally, I’ve found agents to be a great “multitasking” tool.
Let’s say I make a few changes in the code that will require changes or additions to tests. I give the agent the test command I want it to run and the files to read, and let it cycle between running tests and modifying files.
While it’s doing that, I open Slack or do whatever else I need to do.
After a few minutes, I come back, review the agent’s changes, fix anything that needs to be fixed or give it further instructions, and move to the next thing.
I think a good use of time while waiting for an LLM is to ask another LLM for something. Until then Slack will do :)
Same here. It’s fine for me to use the ChatGPT web interface and switch between it and my IDE/editor.
Context switching is not the bottleneck. I actually like to go away from the IDE/keyboard to think through problems in a different environment (so a voice version of chatgpt that I can talk to via my smartwatch while walking and see some answers either on my smartglasses or via sound would be ideal… I don’t really need more screen (monitor) time)
> Same here. It’s fine for me to use the ChatGPT web interface and switch between it and my IDE/editor.
I do this all the time, and I am completely fine with it. Sure, I need to pay more attention, but I think it does more good than harm.
I use ChatGPT's voice mode almost exclusively, via Ray-Ban Meta glasses (especially when outside / cycling).
Sorry to say but this workflow just isn't great unless you're working on something where AI models aren't that helpful -- obscure language/libraries/etc where they hallucinate or write non-idiomatic solutions if left to run much by themselves. In that case, you want the strong human review loop that comes from crafting the context via copy paste and inspecting the results before copying back.
For well trodden paths that AI is good at, you're wasting a ton of time copying context and lint/typechecking/test results and copying back edits. You could probably double your productivity by having an agentic coding workflow in the background doing stuff that's easy while you manually focus on harder problems, or just managing two agents that are working on easy code.
You would like to, or you're actually doing that right now?
Man that workflow is brutal
20-year engineer here. All my life I've dreamed of having something I could ask general questions about a codebase and get back a cohesive, useful answer. And that future is now.
I would put it more generically: I love that one can now ask as many dumb questions as it takes about anything.
With humans there is this point where even the most patient teacher has to move on to other things. Learning is best when one is curious about something, and curiosity is more often specific. (When it's generic, one can just read the manual.)
kind of ironic, because the novices are the ones that absolutely should be doing things by hand to get better at the craft.
The day will come when only a few need to be "better at the craft". Just as with Assembly and even C.
no one writes assembly, but good engineers still understand how things work under the hood
C is still in the top 5 most used languages by any metric.
I don't agree. I use agents all the time. I say exactly what the agent should do, but often changes need to be made in more than one place in the code base. Could I prompt it for every change, one at a time per file? Sure, but it is faster to prompt an agent for it.
I couldn't use AI coding without agentic mode.
At its most basic, agentic mode is necessary for building the proper context. While I might know the solution at a high level, I need the agent to explore the code base to find things I reference and bring them into context before writing code.
Agentic mode is also super helpful for getting LLMs from "99%" correct code to "100%" correct code. I'll ask them to do something to verify their work. This is often when the agent realizes it hallucinated a method name or used a directionally correct, but wrong column name.
My main interest in agent mode is deputizing the C++ compiler to tell the LLM about everything it has hallucinated.
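Something like the following hedged sketch, where llm_fix stands in for whatever model call the agent makes and clang++ is assumed to be on PATH:

```python
# Let the compiler catch the hallucinations: compile, feed the errors back to the
# model, rewrite the file, repeat until clean or out of rounds.
import subprocess, pathlib

def compile_errors(source: str) -> str:
    result = subprocess.run(["clang++", "-fsyntax-only", source],
                            capture_output=True, text=True)
    return result.stderr

def fix_until_it_compiles(source: str, llm_fix, max_rounds=5):
    for _ in range(max_rounds):
        errors = compile_errors(source)
        if not errors:
            return True                      # compiler is satisfied
        code = pathlib.Path(source).read_text()
        pathlib.Path(source).write_text(llm_fix(code, errors))
    return False
```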
"Novice mode" has always been true for the newcomer. When I was new, I really was at the mercy of:
1) Authority (whatever a prominent evangelist developer was peddling)
2) The book I was following as a guide
3) The tutorial I was following as a guide
4) The consensus of the crowd at the time
5) Whatever worked (SO, brute force, whatever library, whatever magic)
It took a long ass time before I got to throw all five of those things out (throw the map away). At the moment, #5 on that list is AI (whatever works). It's a Rite of Passage, and because so much of being a developer involves autodidacticism, this is a valley you must go through. Even so, it's pretty cool when you make it out of that valley (you can do whatever you want without any anxiety about is this the right path?). You are never fearful or lost in the valley(s) for the most part afterward.
If you use AI agents for all your work as a novice do you ever make it out of the valley?
Yeah.
Most people have not deployed enough critical code that was mostly written with AI. It's when that stuff breaks, and they have to debug it with AI, that's when they'll have to contend with the blood, sweat, and tears. That's when the person will swear that they'll never use AI again. The thing is, we can never not use AI ever again. So, this is the trial by fire where many will figure out the depth of the valley and emerge from it with all the lessons. I can only speculate, but I suspect the lessons will be something along the lines of "some things should use less AI than others".
I think it's a cool journey, best of luck to the AI-first crowd, you will learn lessons the rest of us are not brave enough to embark on. I already have a basket of lessons, so I travel differently through the valley (hint: My ship still has a helm).
> that's when they'll have to contend with the blood, sweat, and tears.
Or, most software will become immutable. You'll just replace it.
You'll throw away the mess, and let a newer LLM build a better version in a couple of days. You ask the LLM to write down the specs for the newer version based on the old code.
If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
Slaves to the system.
Do we really want that? To be beholden to the hands of a few?
Hell, you can't even purchase a GPU with high enough VRAM these days for an acceptable amount of money, in part because of geopolitics. I wonder how many more restrictions are to come.
There's a lot of FOMO going around; those honing their programming skills will continue to thrive, and that's a guarantee. Don't become a vassal when you can be a king.
The scenario you paint sounds very implausible for non-trivial applications, but even if it ends up becoming the development paradigm, I doubt anyone will be "left behind" as such. People will have time to re-skill. The question is whether some will ever want to or would prefer to take up woodworking.
Whether one takes up woodworking or not depends on whether development was primarily for profit, with little to no intrinsic enjoyment of the role.
Coding and woodworking are similar from my perspective; they are both creative arts. I like coding in different languages, and woodworking is simply a physical manifestation of the same thing. A world where you only need agents is not a world where nerds will be employed. Traditional nerds can't stand out from the crowd anymore.
This is peak AI; it only goes downhill from here in terms of quality, and the AI-first flows will be replaceable. The offshored teams we have suffered with for years (the Google-first programmers) will be the first to be replaced, and developers will continue, working around the edges. The difference will be that startups won't be able to use technology hoarding to stifle competition, unless they make themselves immune from the AI vacuums.
I can appreciate the comments further up about how AI can help unravel the mysteries of a legacy codebase. Being able to ask questions about code in quick succession means we will feel more confident. AI is lossy and hard to direct, yet always very confident. We have 10k-line functions in our legacy code that nest and nest. How confident are you letting AI refactor this code without oversight and shipping it to a customer? Thus far I'm not; maybe I don't know the best models and tools to use and how to apply them, but if even one of those logic branches gets hallucinated I'm in for a very bumpy ride. Watching non-technical people at my org get frustrated and stuck with it in a loop is a lot more common than the successes, which is the opposite of the experienced engineers who use it as a tool, not a savior. But every situation is different.
If you think your company can be a differentiator in the market because it has access to the same AI tools as every other company? Well, we'll see about that. I believe there has to be more.
I'm an experienced engineer of 30+ years. Technology comes and goes; AI is just another tool in the chest. I use it primarily because I don't have to deal with ads. I also use it to be an electrical engineer, designing circuits in areas I am not familiar with. I can see very simply the novice side of the coin: it feels like you have superpowers because you just don't know enough about the subject to be aware of anything else. It has sped up the learning cycle considerably because of the conversational nature. After a few years of projects, I know how to ask better questions to get better results.
> Or, most software will become immutable. You'll just replace it.
The joys of dependency hell combined with rapid deprecation of the underlying tooling.
> If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
Not even, devoured might be more apt. If I'm manually moving through this valley and a flood is coming through, those who are sticking automatic propellers and navigation systems on their ship are going to be the ones that can surf the flood and come out of the valley. We don't know, this is literally the adventure. I'm personally on the side of a hybrid approach. It's fun as hell, best of luck to everyone.
It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean? These are risks we all take.
Quoting the Admiral from the StarCraft: Brood War cinematic (I'm a learned person):
"... You must go into this with both eyes open"
> It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean?
Not sure if you drew the right conclusion from that one.
Considering that Agent Mode saves me a lot of hassle doing refactoring ("move the handler to file X and review imports", "check where this constant is used and replace it with <another> for <these cases>", etc.), I'd say you are missing the point...
I actually flip things - I do the breakdown myself in a SPEC.md file and then have the agent work through it. Markdown checklists work great, and the agent can usually update/expand them as it goes.
I think this perspective is better characterized as “solo” and not “old”. I don’t think your age is relevant here.
Senior engineers are not necessarily old but have the experience to delegate manageable tasks to peers including juniors and collaborate with stakeholders. They’re part of an organization by definition. They’re senior to their peers in terms of experience or knowledge, not age.
Agentic AIs slot into this pattern easily.
If you are a solo dev you may not find this valuable. If you are a senior then you probably do.
One benefit is when working on multiple code bases where the size of the code base is larger than the time spent working on it, so there is still a knowledge gap. Agents don't guarantee the correctness of a search the way an old search field does, but they offer a much more expressive way to do searches and queries in a code base.
Now that I think about it, I might have only ever used agents for searching and answering questions, not for producing code. Perhaps I don't trust the AI to build a good enough structure, so while I'll use AI, it is one file at a time sort of interaction where I see every change it makes. I should probably try out one of these agent based models for a throw away project just to get more anecdotes to base my opinion on.
Coding agents are the future and it's anyone's game right now.
The main reason I think there is such a proliferation is it's not clear what the best interface to coding agents will be. Is it in Slack and Linear? Is it on the CLI? Is it a web interface with a code editor? Is it VS Code or Zed?
Just like everyone has their favored IDE, in a few years time, I think everyone will have their favored interaction pattern for coding agents.
Product managers might like Devin because they don't need to setup an environment. Software engineers might still prefer Cursor because they want to edit the code and run tests on their own.
Cursor has a concept of a shadow workspace and I think we're going to see this across all coding agents. You kick off an async task in whatever IDE you use and it presents the results of the agent in an easy to review way a bit later.
As for Void, I think being open source is valuable on its own. My understanding is Microsoft could enforce license restrictions at some point down the road to make Cursor difficult to use with certain extensions.
Another YC-backed open source VS Code extension is Continue: https://www.continue.dev/
(Caveat: I am a YC founder building in this space: https://www.engines.dev/)
When can we expect a release from Engines?
> It feels like everyone and their mother is building coding agents these days.
For real. I think it's because code editors seem to be in that perfect intersection of:
- A tool for programmers. Programmers like building for programmers.
- A tool for productivity. Companies will pay for productivity.
- A tool that's clearly AI-able. VC's will invest in AI tools.
- A tool with plenty of open source lift. The clear, common (and extreme?) example of this being forking VSCode.
Add to that the recent purchase of VSCode-fork [1] Windsurf for $3 billion [2] and I suspect we will see many more of these.
[1]: https://windsurf.com/blog/why-we-built-windsurf#:~:text=This...
[2]: https://community.openai.com/t/openai-is-acquiring-windsurf-...
Void has been around since last year.
I'm working on an agnostic unified framework to make contexts transferable between these tools.
This will permit zero-friction, zero-interruption transitions without any code modification.
Should have something to show by next week.
Hit me up if you're interested in working on this problem - I'm tired of cowboying my projects.
I've tried many of the AI coding IDEs; the best ones, like RooCode, are good simply because they don't gimp your context. The modern models are already more than capable enough for many coding tasks; you just need to leave them alone and let them utilize their full context window, and all will go well. If you hear about a bad experience with any of these IDEs, most of the time it's because the tool is limiting context use or mismanaging related functions.
You forgot the best one to compare against - Claude Code.
We think terminal tools like Claude Code are a good way for research teams to experiment with tool use (obviously pure text), but definitely don't see the terminal as the endgame for these tools.
I know some folks like using the terminal, but if you like Claude Code you should consider plugging your API key into Void and using Claude there! Same exact model and provider and price, but with a UI around the tool calls, checkpoints, etc.
I’m a traditional millennial brat - high preference for UI as well.
That is until I started using Claude Code.
It’s not about the terminal. It’s just a better UX in general.
The difference is this one is backed by Y Combinator.
That doesn't really narrow it down much, YC has backed so many AI coding tools that they've started inbreeding. PearAI (YC Fall '24) is a fork of Continue (YC Summer '23).
https://techcrunch.com/2024/09/30/y-combinator-is-being-crit...
Does this mean "open source" is really "market capture before becoming closed-source"?
One of the founders here - Void will always remain open source! There are plenty of examples of an open source alternative finding its own niche (eg Supabase, Mattermost) and we don't see this being any different.
Are any of those examples vc backed?
https://news.ycombinator.com/item?id=23319901
https://news.ycombinator.com/item?id=43763225
I've been at many open source meetups with YC founders and can tell you that this is not the thinking at all. Rather, the emphasis is on finding a good carve-line between the open source offering and the (eventual) paid one, so that both sides are viable and can thrive.
Most common these days is to make the paid product be a hosted version of the open source software, but there are other ways too. Experienced founders emphasize to new startups how important it is to get this right and to keep your open source community happy.
No one I've heard is treating open source like a bait and switch; quite the opposite. What is sought is a win-win where each component (open source and paid) does better because of the other.
Exactly right.
I think there’s a general misconception out there that open sourcing will cannibalize your hosted product business if you make it too easy to run. But in practice, there’s not a lot of overlap between people who want to self-host and people who want cloud. Most people who want cloud still want it even if they can self-host with a single command.
The weird thing is, the biggest reason I don't use Cursor much is because they just distribute this AppImage, which doesn't install or add itself to the Ubuntu app menu. It just sits there, and I have to launch it manually from the terminal,
and then I get greeted with an error message; I have to go Googling, then realize I have to run it with an extra flag. Often I'm too lazy to do all of this and just use the Claude / ChatGPT web version and paste code back and forth to VS Code. The effort required to start Cursor is the reason I don't use it much. VS Code is an actual, bona fide installed app with an icon that sits on my screen; I just click it to launch it. So much easier, even if I have to write code manually.
AppImageLauncher improves the AppImage experience a lot, including making sure they get added to the menu. I'm not sure if it makes launching without the sandbox easier or not.
[flagged]
Not only did you mess up the formatting, but you pasted a very lengthy piece of code generated by an LLM. Perhaps consider using a pastebin in the future, if at all.
Yup - honestly the space is so open right now still, everyone is trying haha. It's gotten quite hard to keep track of different models and their strengths / weaknesses, much less the IDE and editor space! I have no idea which of these AI editors would suit me best, and a new one comes out like every day.
I'm still in vim with copilot and know I'm missing out. Anyway I'm also adding to the problem as I've got my own too (don't we all?!), at https://codeplusequalsai.com. Coded in vim 'cause I can't decide on an editor!
This is cool! I like that you have a visual element for the agent working on multiple tickets at a time.
Thanks! And yeah, it really is satisfying watching the tickets move from column to column "all on their own" as the work gets done!
There's so much happening in this space, but I still haven't seen what would be the killer feature for me: dual-mode operation in IDE and CLI.
In a project where I already have a lot of linting brought into the editor, I want to be able to reuse that linting in a headless mode: start something at the CLI, then hop into the IDE when it says it's done or needs help. I'd be able to see the conversation up to that point and the agent would be able to see my linting errors before I start using it in the IDE. For a large, existing codebase that will require a lot of guardrails for an agent to be successful, it's disheartening to imagine splitting customization efforts between separate CLI and IDE tools.
For me so far, cursor's still the state of the art. But it's hard to go all-in on it if I'll also have to go all-in on a CLI system in parallel. Do any of the tools that are coming out have the kind of dual-mode operation I'm interested in? There's so many it's hard to even evaluate them all.
I posted this the other day, but didn't get a response:
Does anyone think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? For instance, would an AI art tool with sculpting and drawing benefit from being open source?
I've talked with VCs that love open source developer tools, but they seem to hate on the idea of "open creative tools" for designers, illustrators, filmmakers, and other creatives. They say these folks don't benefit from open source. I don't quite get it, because Blender and Krita have millions of users. (ComfyUI is already kind of in that space, it's just not very user-friendly.)
Why do investors seem to want non-developer things to be closed source? Are they right?
I think it's mostly a value capture thing. More money to be made hooking devs in than broke creatives and failing studios (no offense, it just seems like creatives are getting crushed right now). In one case you're building for the tech ecosystem, in the other for the arts. VC will favor tech, higher multiples. Closed source is also more protected from theft etc. in many cases.
But as you point out there are great solutions so it’s clearly not a dead end path.
Zed (https://zed.dev/agentic) also released agentic code edits (similar to Cursor) which I tried and really like.
Its agent is a lot worse than Cursor's in my experience so far. Even tab edits feel worse.
My understanding is that these are not custom models but a combination of prompting and steering. That makes Cursor's performance relative to others pretty surprising to me. Are they just making more requests? I wonder what the secret sauce is.
And it's not yet another editor running in a web browser which is really, really nice.
Clearly. Zed wouldn't have blurry text rendering on some resolutions if they did.
Issue has been open for over a year.
https://github.com/zed-industries/zed/issues/7992
I just wish it had a fallback if Vulkan isn't installed… I would love to run Zed inside a Docker container so that naughty plugins don't misbehave.
Doesn't it need to download a massive nodejs binary to be useful?
For what?
I dunno, you tell me? https://github.com/zed-industries/zed/issues/12589
Right. “To be useful” includes leveraging language servers, which they aren’t going to write from scratch. So what’s your point?
Just for syntax highlighting, no. But for any IDE-like features it needs language servers, and some of them are Node-based.
https://zed.dev/docs/languages
It's fast. Love it!
One thing I noticed is that there's no cost tracking, so it's very hard to predict how much you're spending. This is fine on tools like Cursor that are all inclusive, but is something that is really necessary if you're bringing your own API keys.
Is this feature on the roadmap?
This is a great suggestion. We're actually storing the input/output costs of most models, but aren't computing cost estimates yet. Definitely something to add. My only hesitation is that token-based cost estimates may not be accurate (most models do not provide their tokenizers, so you have to eg. estimate the average number of characters per token in order to compute the cost, and this may vary per model).
It'd probably be useful to just show cost after the fact based on the usage returned from the API. Even if I don't know how much my first request will cost, if I know my last request cost x cents then I can probably have a good idea from there.
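Agreed. Something like the sketch below would already cover the "after the fact" case. It's a rough TypeScript sketch, assuming an OpenAI-compatible usage object (prompt_tokens / completion_tokens) comes back with each response; the prices are placeholders, not real rates.

    // Rough sketch: price a request from the usage object the API already returns,
    // instead of estimating token counts up front. Prices are placeholders.
    type Usage = { prompt_tokens: number; completion_tokens: number };
    type Pricing = { inputPerMTok: number; outputPerMTok: number }; // USD per 1M tokens

    function requestCost(usage: Usage, pricing: Pricing): number {
      return (
        (usage.prompt_tokens / 1_000_000) * pricing.inputPerMTok +
        (usage.completion_tokens / 1_000_000) * pricing.outputPerMTok
      );
    }

    // Example: 12,000 prompt tokens and 800 completion tokens at a hypothetical
    // $3 in / $15 out per million tokens.
    const last = requestCost(
      { prompt_tokens: 12_000, completion_tokens: 800 },
      { inputPerMTok: 3, outputPerMTok: 15 },
    );
    console.log(`last request: ~$${last.toFixed(4)}`); // a running total is just a sum

A per-session total is then just the sum over requests, which is all most people need to avoid surprises.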
This is very cool and I'm always happy to see more competition in this space. That said, two suggestions:
- The logo looks like it was inspired directly from the Cursor logo and modified slightly. I would suggest changing it.
- It might be wise to brand yourself as your own thing, not just an "open source Cursor". I tend to have the expectation that "open source [X]" projects are worse than "[X]". Probably unfair, I know.
Thanks for the suggestions - these issues have been a bit painful for us, and we will probably fix them in the next major update to Void.
Believe it or not, the logo similarity was actually unintentional, though I imagine there was subconscious bias at play (we created ours trying to illustrate "a slice of the Void").
Maybe the icon is a piece of cake with a sphere void in it? Trying to play on how easy it is - ‘it’s a piece of cake’
A minor counterpoint, I personally like the "open source Xyz" because I instantly know what the product is supposed to do. It's also very SEO friendly because you don't know the name of the open source version before you find it, so you can Kagi/Google/DDG "open source Cursor" and get it as a top result, instead of a sea of spammy slime.
> I personally like the "open source Xyz" because I instantly know what the product is supposed to do.
But that assumes that you're already familiar with the non-open-source software referenced. I've never used Cursor so I have no idea what it can or can't do. I'm pretty sure I would never have discovered Inkscape if it had consistently been described as an “open-source Illustrator” as I've simply never used Adobe software.
I mostly use Cursor for the monthly flat pricing which allows me unlimited (slow) calls to most LLMs (Gemini 2.5 Pro, Claude 3.7, etc) without worrying about spending anything more than $20/month.
Is there some benefit from forking vscode instead of creating an extension?
Void dev here! As others have mentioned, VSCode strongly limits the functionality that you can build as an extension. A few things we've built that aren't supported as an extension:
- the Accept|Reject UI and UX
- Cmd+K
- Control over the terminal and tabs
- Custom autocomplete
- Smaller things like the ability to open/close the sidebar, onboarding, etc.
It's been a lot harder to build an IDE than an extension, but we think having full control over the IDE (whether that's VSCode or something else we build in the future) will be important in the long run, especially when the next iteration of tool-use LLMs comes out (having native control over Git, the UI/UX around switching between iterations, etc).
> the Accept|Reject UI and UX
Continue, as a VS Code extension, also seems to manage this.
>Smaller things like ability to open/close the sidebar
Are you sure about this one? I'm sure I have used an extension whose whole purpose was to automatically open or close the sidebar under certain conditions.
As an (ex) VSCode extension developer, VSCode really does lock down what you can do as an extension. It's well intentioned and likely led to the success of VSCode, but it's not great if you want to build entirely new UI interactions. For instance, something like the cmd-k inline generation UI in Cursor is basically impossible as a VSCode extension.
Maybe someone should just fork VS Code in a more liberal way, then everyone can build their extensions on top of that.
The restrictive extension ecosystem was a big part of VSCode's success. You can compare to Atom, which allowed extensions to do whatever they wanted: Atom ended up feeling exceptionally slow and bloated because extensions had full latitude to grind your IDE to a halt.
Yeah, I don't have a problem with that!
But since there seems to be a need for AI-powered forks of VS Code, it could make sense for them all to build off the same fork, rather than making their own.
Isn’t that what they’re doing by building off of vscode?
Yup, just rebased about a week ago
Ask Firefox how that went.
Hint: they dropped XUL because every update broke extensions
That's not comparable though, right? I'm suggesting a third party forks the main product.
Worse. The result is the same: an unmaintainable product, and now you also become increasingly incompatible with the source.
People keep saying that the extensions API is too limited or something, but Cline seems to manage fine with being an extension.
Eclipse Theia can host VSCode extensions, but it also has its own extension mechanism that offers more customization, it could be a viable alternative: https://theia-ide.org/docs/extensions/
You're right that extensions do manage fine - the main differences right now are UX improvements (many of them are mentioned above). I can see the differences compounding at some point which is why we're focused on the full IDE side.
They've been changing that recently
One of the big _disadvantages_ is that it prevents access to the VSCode-licensed plugins, such as the good C# LSP (seems EEE isn't completely dead). That's something to pay attention to if you're considering a fork and use an affected language.
Since these products supposedly make developers 1000x more productive it should be no problem to just re-implement those proprietary MS plugins from scratch. Right? Any volunteers...?
MS will be tuning Copilot to the point it’s the best agent for C#, for sure. It might take a little longer ofc. But Nadella mentioned to Zuck in a fireside chat that they are not happy with C# support in LLMs and that they are working on this.
C# language server, being Roslyn Language Server, is plugin agnostic, it's MIT licensed and is essentially a part of the compiler: https://github.com/dotnet/roslyn/tree/main/src/LanguageServe....
Did you mean to say a debugger? That one has an open alternative (NetCoreDbg) alongside a C# extension fork which uses it (it's also what VS Codium would install). It's also what you'd use via DAP with Neovim, Emacs, etc.
Nope, OmniSharp - which uses that AFAIK, is what I'm referring to. Something being open source doesn't automatically make it good.
Omnisharp is what the base C# extension used previously. It has been replaced by Roslyn LS (although can be switched to back still). You are talking about something you have no up-to-date knowledge of.
I wish all these companies the best and I understand why they’re forking, but personally I really don’t want my main IDE maintained by a startup, especially as a fork. I use Cursor, and I’ve run into a number of bugs at the IDE level that have nothing to do with the AI features. I imagine this is only going to get worse over time.
It gets them more slop funding if they can say they have an “AI IDE”.
Well, you got downvoted, but it's not wrong: there's a difference in funding attractiveness between an extension (Cline) and your "own IDE".
Of course I got downvoted (but it’s gone back to four now) because this is HN, where somehow a group of otherwise seemingly intelligent people are all patting themselves on the back about the latest Y Combinator AI slop funding.
I've just installed it and tried to have it create a hello world using gemma3:27b-it-qat through ollama but it refused to do it claiming it doesn't have access to my filesystem.
Then I opened an existing file and asked it to modify a function to return a fixed value and it did the same.
I'm an absolute newb in this space so if I'm doing something stupid I'd appreciate it if you helped me correct it because I already had the C/C++ extension complain that it can only be used in "proper vscode" (I imported my settings from vscode using the wizard) and when this didn't work either it didn't spark joy as Marie Kondo would say.
Please don't get me wrong, I gave this a try because I like the idea of having a proper local open source IDE where I can run my own models (even if it's slower) and have control over my data. I'm genuinely interested in making this work.
Thanks!
Thanks for writing! Can you try mentioning the file with "@"? Smaller models sometimes don't realize that they should look for files and folders, but "@" always gives the full context of whatever is in the file/folder directly to them.
Small OSS models are going to get better at this when there's more of a focus on tool-use, which we're expecting in the next iteration of models.
That's what happened? It also happens with Cascade Base when using Windsurf; you need a ./ prefix for it to pick up the file.
Something I was thinking — if Microsoft keeps locking things down for forks (which they sorta are), I wonder if the Void devs would ever pivot to forking other editors like Zed, or if they’re just gonna keep charging headfirst into the wave.
May I ask why did you decide against starting with (Eclipse) Theia instead of VSCode?
It's compatible but has better integration and modularity, and doing so might insulate you a bit from your rather large competitor controlling your destiny.
Or is the exit to be bought by Microsoft? By OpenAI? And thus to more closely integrate?
If you're open-source but derivative, can they not simply steal your ideas? Or will your value depend on having a lasting hold on your customers?
I'm really happy there are full-fledged IDE alternatives, but I find the hub-and-spoke model where VSCode/MS is the only decider of integration patterns is a real problem. LSP has been a race to the bottom, feature-wise, though it really simplified IDE support for small languages.
Related:
Show HN: Void, an open-source Cursor/GitHub Copilot alternative - https://news.ycombinator.com/item?id=41563958 - Sept 2024 (154 comments)
Not sure if this feedback is useful but I personally tried Void this morning for about 10 mins on a flutter project (after connecting all the various extensions and keys, which was completely painless).
However, I uninstalled due to the sounds it made! Constant clicking for some (unannounced) background work is a bizarre choice for any serious development environment.
As others have mentioned please add more docs / details to the README
I want to mention my recent frustration with Cursor and why I would love an OSS alternative that gives me control; I feel Cursor has dumped agentic capabilities everywhere, regardless of whether the user wants them or not. When I use the Ask function as opposed to Agent, it still seems to be functioning in an agentic loop. It takes longer to have basic conversations about high-level ideas and really kills my experience.
I hope void doesn’t become an agent dumping ground where this behavior is thrust upon the user as much as possible
Not to say I dislike agent mode, but I like to choose when I use it.
Given that there's a dozen agentic coding IDEs, I only use Cursor because of the few features they have like auto-identification of the next cursor location (I find myself hitting tab-tab-tab-tab a lot, it speeds up repetitive edits). Are there any other IDEs that implement these QOL features, including Void (given it touts itself specifically as a Cursor alternative)?
I think QOL will shift away from your keyboard. Give Claude Code a try and you'll understand what I mean. Developer UX will shift away from traditional IDEs. At this point I could use Notepad for the type of manual work I do versus how I orchestrate Claude Code.
The reason I have never bothered with Claude Code (or even other agentic tools), is that I still code mostly by hand.
When I am using LLMs, I know exactly what the code should be and just am using it as a way to produce it faster (my Cursor rules are extremely extensive and focused on my personal architecture and code style, and I share them across all my personal projects), rather than producing a whole feature. When I try and use just the agent in Cursor, it always needs significant modifications and reorganization to meet my standards, even with the extensive rules I have set up.
Cursor appeals to me because those QOL features don't take away the actual code writing part, but instead augment it and get rid of some of the tedium.
[flagged]
Given Void is backed by Ycombinator, what’s the business plan to start generating revenue?
There is no plan with YC in this space, everything is just basically vibe investing and hoping something sticks.
Continue.dev also received investment from YC. Remember PearAI? Very charismatic founders that just forked Continue.dev and got a YC investment [1].
https://techcrunch.com/2024/12/20/after-causing-outrage-on-t...
This is a good question. Because we're open source, we will always allow you to host models locally for free, or use your own API key. This makes monetization a bit difficult in the short term. As with many devtool companies, the long-term value comes from enterprise sales.
We need an Eval Leaderboard for LLM assisted Agentic IDEs. The space is getting crowded:
New Editors:
- Firebase Studio
- Zed
- OpenHands (OSS Devin Clone)
VS Code Forks:
- Cursor
- Windsurf Editor
- Void
VS Code Extensions:
- Gemini Code Assist
- Continue.dev
- GitHub Copilot Agent Mode
- Cline
- RooCode
- Kilo Code (RooCode + Cline Fork)
- Windsurf Plugin
- Kodu.ai Claude Coder (not claude code!)
Terminal Agents:
- Aider
- Claude Code
- OpenAI codex
Issue Fixing Agents:
- SWE-agent
Missing OpenAI Codex cli
Also missing a class of non-IDE desktop apps like 16x Prompt and Repo Prompt.
Thanks. I added codex.
Though, since I specifically mentioned agentic, I wanted to exclude non-agentic tools like prompt builders and context managers that you linked. :)
Reason being: my idea of agents is that they generalize well enough that workflow-based apps aren't needed anymore.
During the discovery and planning phase, the agents should traverse the code base with a retrieval strategy packaged as a tool (embedded search, code graphs, ...) and then add that new knowledge to the plan before executing the code changes.
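For what it's worth, "packaged as a tool" can be as mundane as exposing the retriever through the usual function-calling schema. A sketch in TypeScript using the OpenAI-style tool definition; the retrieval backends behind it are hypothetical:

    // Sketch: expose codebase retrieval to the model as a single tool using the
    // OpenAI-style function-calling schema. What runs behind it (embeddings,
    // symbol search, a code graph) is up to the host and is only assumed here.
    const searchCodebaseTool = {
      type: "function" as const,
      function: {
        name: "search_codebase",
        description:
          "Look up relevant code before planning an edit. May combine embedding " +
          "search, symbol lookup, and code-graph neighbors.",
        parameters: {
          type: "object",
          properties: {
            query: { type: "string", description: "Natural-language or symbol query" },
            strategy: {
              type: "string",
              enum: ["embeddings", "symbols", "code_graph", "all"],
              description: "Which retrieval strategy to run (or all of them)",
            },
          },
          required: ["query"],
        },
      },
    };

    // The host executes the search when the model calls the tool and returns the
    // hits as the tool result, so the findings land in the plan's context.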
I don't think it's a black and white distinction between agentic and non-agentic tools. Not to mention tools are constantly evolving and changing.
For example, Cursor a year ago was not agentic at all. GitHub Copilot only recently added agentic features.
I also think the end game for an agentic tool would not be an IDE, because IDEs were designed for human workflows, not agents.
I wrote about this topic a while ago and made a classification that's probably a bit outdated, but still relevant: https://prompt.16x.engineer/blog/ai-coding-l1-l5
There is also Trae, a VSCode fork from ByteDance.
JetBrains Junie
I think it's really interesting that Void (and Zed) are both much more tastefully designed than Cursor, Windsurf or VSCode (though I wouldn't have expected VSCode to be well designed)
As a data scientist, my main gripe with all these AI-centric IDEs is that they don’t provide data centric tools for exploring complex data structures inherent to data science. AI cannot tell me about my data, only my code.
I’ll be sticking with VSCode until:
- Notebooks are first class objects. I develop Python packages but notebooks are essential for scratch work in data centric workflows
- I can explore at least 2D data structures interactively (including parquet). The Data Wrangler in VSCode is great for this
I recently saw a framework that interacts directly with notebooks, but I forgot what it was. Every day there is a new thing.
I wonder why most agentic patterns don't use multiple different retrieval strategies simultaneously, and why most of them don't use CodeGraph [1] during the discovery phase. Embeddings aren't enough; agent-induced function/class name search isn't enough.
[1] CodeGraph https://arxiv.org/abs/2408.13863
Side note: is there an AI app that sets up a full initial SaaS app based on prompts? I'm struggling to get something like Cursor to behave correctly.
Not exactly what you're asking but: have you tried some Boilerplate SaaS thing?
What products are in this space?
There are tons. You can look up "saas boilerplate $your_stack".
vibe coding oriented builders, where you draft an app idea and it gives you a prototype?
I'd say Firebase Studio and OpenHands
What we need is not an editor. We need a coding agent server which we can use from any editor we want.
There's aider (https://aider.chat) or Claude Code (https://docs.anthropic.com/en/docs/claude-code/overview) or Codex (https://github.com/openai/codex) or plandex (https://github.com/plandex-ai/plandex) or kwaak (https://github.com/bosun-ai/kwaak)
I'd venture to say there's more of these than there are UI Editors tbh.
A trajectory question: do we still have the debate about whether open-source software takes away SDE jobs or makes the pie grow bigger and creates more jobs? The booming OSS community of the past seems to have created multiple billion-dollar markets. On the other hand, we have a lot less growth than before now, and I was wondering if OSS has started suppressing the demand for SDEs.
How is it that the open source Cursor 'alternative' doesn't have a linux option (either via AppImage, as Cursor offers, or something like a flatpak). I understand that open source does not automatically mean linux, but it is like, weird right?
just looked and it does https://github.com/voideditor/binaries/releases
AppImage, .deb, .tar.gz
Aw sweet, I only checked their website download page and missed that. Thank you!
No, not really. It's not really your place to dictate what someone enjoys working on.
That's a bit aggressive. Who hurt you?
Projects like this are great because open source versions need to figure out the right way to do things, rather than the hacky, closed, proprietary alternatives that pop up first and are just trying to consume as many users as possible to build a moat as quickly as they can.
In that case, a shitty, closed system is good actually, because it's another thing your users will need to "give up" if they move to an alternative. By contrast, an open IDE like Void will hopefully make headway on an open interface between IDEs and LLM agents, in such a way that it can be adopted by neovim people like me, or anyone else for that matter.
Been following this project from very early on. It's awesome to see how much ground you covered in just a few months!
I've got a great setup going with Emacs and Aidermacs[1]. I just can't stand using VS Code, it's impossible to configure to my liking.
[1]: https://github.com/MatthewZMD/aidermacs
what sorts of things are hard to configure?
Emacs' configurability is hard to describe to anyone who hasn't immersed themselves in that sort of environment. There's a small portion of the program written in C, but the bulk of it is written in elisp. When you evaluate elisp code, you're not in some sandboxed extension system - you're at the same level as Emacs itself. This allows you to modify nearly any aspect of Emacs.
It'd be a security nightmare if it was more popular, but fortunately the community hovers around being big enough for serious work to be done but small enough that it's not worth writing malware for.
I don't know if it's a security nightmare any more than other editors that have "plugins" (or the like).
One advantage for Emacs is that it's both easy and common to read the code of the plugins you are using. I can't tell you the last time I looked at the source code of a plugin for VS Code or any other editor. The last time I looked at the code for a plugin in Emacs was today.
That last line was totally unexpected, yet deeply familiar. Took over a minute to recover from that rofl.
I don't think it's a security nightmare per-se. Most of the time, you're not installing a lot of packages (the built-in are extensive) and most of these are small and commonly used.
It's like saying the AUR is a security nightmare. You're just expected to be an adult and vet what you're using.
I'm not sure I agree with the number and size of packages people install (unless you're comparing them to, say, org-mode), but that's not really what I'm talking about.
Emacs runs all elisp code as if it's part of Emacs. Think about what Emacs is capable of, and compare that to what a browser allows its extensions to do. No widely used software works like that because it's way too easy to abuse. Emacs gets away with it because it's not widely used.
I don't know the first thing about VSCode but I'm willing to bet there are strict limits to what its plugins are allowed to do.
I don't know if that's changed since last I wrote an extension for a web browser, but the API is pretty open for the current context (tab) that it's executing in. As long as it's part of the API, the action is doable. Same with VSCode or Sublime. Sandboxed plugins would be pretty useless.
I guess it's hard to switch from a working setup that you've invested time in.
Especially since you might not be familiar with the new one.
Personally, I'm trying out things in VS Code, just to see how they work. But when I need to work, I do it in Emacs, since I know it better.
Also, with VS Code, just while trying it out, simple things like cut & paste would stop working (choosing them from the menu, they would work, but trying to cut & paste with the key shortcuts and the mouse, wouldn't). You'd have to refresh the whole view or restart it, for cut & paste to become available again.
My setup: vim -> ctrl + z -> claude -> ctrl + c -> fg
It's a shame vim is so stinky, because after 15 years of using it I now find myself using VSCode. I always liked vim because editing is efficient. Now I don't write as much as supervise a lot of the boilerplate code.
Over the years I have gotten better with vim, added phpactor and other tooling, but frankly I don't have time to futz and it's not as polished. With VSCode I can just work. I don't love everything about it, but it works well enough with copilot that I forget the benefits of vim.
I get your experience, but for me using vim is perfect for code exploration. The only needed plugins are fzf.vim and vinegar. The first for fuzzy navigation and the second for quickly browsing the current directory.
The LSP experience with VSCode may be superior, but if I truly needed that, I would get an IDE and have proper IntelliSense. The LSP in Vim and Emacs is more than enough for my basic requirements, which are auto-imports, basic linting, and autocomplete to avoid misspellings. VSCode lacks Vim's agility and Emacs's powerful text tooling, and does far worse on integration.
good luck copy and pasting with vim with tmux in the mix
Skill issue on your part
:w !clip.exe
it works?
On a tangent, I get the feeling that the more senior you are, the less likely you are to end up using one of these VIDEs. If you do use any coding assistants at all, it will mostly be for the auto-complete feature - no 'agent mode' malarkey.
Would you say this is true?
Maybe it's just me, but the auto-complete is very distracting and something I avoid. Most of the time I'm fighting it, deleting or denying its suggestions, and it throws me out of flow.
From what I've seen, most senior/staff-level engineers are working for big corps which have limited contracts with providers like Github Copilot, which until recently only gave access to autocomplete.
I prefer the web-based interface. It feels like my choice to reference a tool. It's easy to run multiple chats at once, running prompts against multiple models.
That's very interesting. This is certainly what I was doing before Copilot. Now I let it autocomplete but only sometimes when it makes sense. I guess I am used to the keybinds so that I can undo if I don't like it.
When I was reading your comment I thought that there is a space for an out-of-flow coding assistant, i.e. rather than deploy an entire IDE with extension, the assistant can be just a floating window (I guess chatgpt does that) and is able to dive in and out or just suggest as you type along.
When browsing a GitHub repo, there's an option for "assistive chat" with copilot. -- I've found this a useful interface to get the LLM to answer quick questions about the repository without having to dig through myself.
Beyond autocomplete, I've found the LLM to be useful in some cases: sometimes you'll want to make edits which are quite automatic, but can't quite be covered by refactoring tools, and are a bit more involved than search/replace. LLMs can handle that quite well.
For unfamiliar domains, it's also useful to be able to ask an LLM to troubleshoot / identify problems in code.
Can you use OpenRouter with this?
Yes, you can bring OpenRouter or any other provider and connect directly! (We don't route your messages through a backend like others).
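For anyone wondering what "connect directly" amounts to in practice: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a direct call is roughly the sketch below. This is not Void's actual code; the model slug and env var name are just examples.

    // Minimal sketch of calling OpenRouter directly. It speaks the OpenAI-compatible
    // chat completions API; the model slug and env var name are only examples.
    const resp = await fetch("https://openrouter.ai/api/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "anthropic/claude-3.5-sonnet",
        messages: [{ role: "user", content: "Summarize what this function does: ..." }],
      }),
    });
    const data = await resp.json();
    console.log(data.choices[0].message.content);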
Yes, at onboarding it asks for Gemini and OpenRouter keys.
and pay 5% commission? no thanks.
There's also https://aihubmix.com/ that does it at cost
Also litellm
Doesn't seem to be at cost, there's a markup on the input and output tokens. At least for their Anthropic endpoints.
The Google ones appear to be at cost. Maybe they do markup on specific ones.
If you're all in on Claude then yeah, just go direct. Going through a proxy is silly.
Well it's not silly if the proxy has higher rate limits.
Early when sonnet 3.5 was the best coding model lots of people used them because of the rate limits on anthropic's own API. So that's a plus. There's also the ease of use, one key for every model out there, and you get to choose providers for things like deepseek / qwen / llama if those suit your needs.
That doesn’t sound bad, for extra redundancy?
Can you tell me what's the difference between this and Continue?
This is really cool and checks my privacy boxes, great name too. I will be testing it out and will consider contributing.
One thing I'd really like to have is a manual order for folders or even files in the explorer view.
I subscribed to the mailing list of Void long ago to be notified once the alpha opens, but I've never received anything. I forgot about it until today.
We've been holding off on this until Void is out of Beta.
Any particular reason why they forked VSCode and not Theia?
Nobody uses Theia, relatively speaking.
This sounds like exactly the kind of thing Theia would be useful for. It's easier than a straight up fork of VSCode.
Really interesting from an 'in a bubble' point of view. I've been using Void for the past few weeks as a replacement for Bolt, Lovable, Tempo and the rest, which is nothing like the use cases mentioned in this thread. Just shows how we're each focused on different parts of an environment? Of course I'm not a programmer, I'm just a slash-and-hack vibe coder. :)
For the record, I really like Void. It's great at utilising local models, which no one else does. Although I'd love to know which are the best Ollama local coding models; I've failed with a few, so for the moment I'm sticking to Sonnet 3.7 and GPT-4.1, with o3 as the 'big daddy'. :)
I'm also a fan because it's open source, which is really needed in this space I feel. One question for the devs, what do you think about this? https://blog.kilocode.ai/p/vs-code-forks-are-facing-a-grim
Disappointing name! You are colliding with https://voidlinux.org/ among probably many other much more significant pieces of software.
Would be helpful for you to be in homebrew.
If this isn’t as good as or better than Avante.nvim, I’m going to be quite sad.
The irony of an open source alternative to a fork of an open source project is hopefully not lost here
fwiw here's a more structured/organized version of this thread: https://extraakt.com/extraakts/void-ai-coding-tool-discussio...
"Structured," eh?
Anyway, I didn't know what your service was trying to do so I clicked on the homepage, clicked Sources to see what else was there, it cited <https://extraakt.com/extraakts?source=reddit#:~:text=Open-so...> but the hyperlink took me back to the HN thing, which defeats the purpose of having a source picker
If this supports connecting with locally hosted models this is actually a HUGE deal
We gotta stop making full vscodes, and just make extensions... Bleh
I bet OpenAI feels kinda silly now that they just paid $3B for Windsurf when they could have backed an OSS one for much less.
Windsurf probably has recurring revenue.
The branding looks quite strange and conflicts with voidzero.dev.
Oh wow this is nice, will try it out.
Ycombinator backed too I guess Vibe coding is here to stay
> Ycombinator backed too
Oh wow, I didn't even realize. Substantially less appealing of a project to me now.
I just gave Zed a try after trying it a few months ago, and it's come a long way; not a bad non-Cursor chat AI IDE. Its tracking mode is pretty cool.
Make it disable all the telemetry by default
Nice - also open competition is always good for users!
And it shows just how much moat each AI wrapper has.
Congrats void!!
Here have 10B$
Open source and not available on Linux?
Yeah this surprised me as well lol
one of these is gonna have malware and we'll wonder how we never saw it
Has anyone had success running it on NixOS? I have an account with deepinfra which I'd like to try with this.
Sadly when I try to add a model, I get the error:
> Error while encrypting the text provided to safeStorage.encryptString. Encryption is not available
vscode and Cursor work perfectly fine this way:
> nix-shell -p appimage-run
> [nix-shell:~/Downloads]$ appimage-run Cursor-0.49.6-x86_64.AppImage
I don't know anything about the project, I use Zed editor, but I think the logo is really cool.
https://github.com/yetone/avante.nvim/ is another choice.
BYOK
We live in the age of dev tools
Mandatory reminder that "agentic coding" works way worse than just using the LLM directly, steering it as needed, filling the gaps, and so forth. The value is in the semantic capabilities of the LLM itself: wrapping it will make it more convenient to use, but always less powerful.
I beg to disagree, Salvatore... Have a go at VS Code with Agent mode turned on (you'll need a paid plan to use Claude and/or Gemini, I think). It gets me out of vim, so yeah, it's that good. :)
Tip: Write a SPEC.md file first. Ask the LLM to ask _you_ about what might be missing from the spec, and once you "agree" ask it to update the SPEC.md or create a TODO.md.
Then iterate on the code. Ask it to implement one of the features, hand-tune and ask it to check things against the SPEC.md and update both files as needed.
Works great for me, especially when refactoring--the SPEC.md grounds Claude enough for it not to go off-road. Gemini is a bit more random...
Interacting with the LLM directly is more work. What I mean is that a wrapper, in the best conditions, will not damage the quality of the LLM itself too much. In the chat, if you are a very experienced coder, you continuously steer the model away from suboptimal choices; a fix here, a fix there, and you continuously avoid local minima. After a few iterations you find that your whole project is a lot better designed than it would otherwise have been.
Oh, sure. I don't rely on the LLM to write all the code. I do "trust" it to refactor things to a shape I want, and to automate chores like moving chunks of code around or build a quick wrapper for a library.
One thing I particularly don't like about LLMs is that their Python code feels like Java cosplay (full of pointless classes and methods), so my SPEC.md usually has a section on code style right at the start :)
> Welcome to Void.
Flashbacks to that pretty good Voyager episode.
I've been using Claude Coder, and I like that experience way more than these AI IDEs.
Another one? People saw that 3B windsurf money.
Here comes another. Everyone saw that 3B Windsurf money and production go brrr
Void actually launched before the Windsurf IDE existed!
http://news.ycombinator.com/item?id=42127882
https://news.ycombinator.com/item?id=41563958
VSCode death by a thousand forks.
they always start as open source to bait users. how long until this one also turns into BaitWare? I hope it won't since it's backed by Y Combinator and has an Apache 2 license.
(Edit: the parent comment was edited to add "I hope it won't since it's backed by Y Combinator and has an Apache 2 license." - that's a good redirection, and I probably wouldn't have posted a mod reply if the comment had had that originally.)
(Btw if your comment already has replies, it is good to add "Edit:" or something if you're changing it in a way that will alter the context of replies.)
---
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Don't be curmudgeonly."
https://news.ycombinator.com/newsguidelines.html
They first need to substantially grow the user base, as we saw with OpenWebUI; only then do they make an Enterprise offering and switch the license from one day to the next.
Wait open web-ui has changed license?
Yes, they've modified the licence to require preserving their branding. I guess it's an anti-fork measure, you'd have to infringe either on their licence or on their trademark.
https://news.ycombinator.com/item?id=43901575
>how long until this one also turns into BaitWare?
>VSCode Fork.
Already did. Can't wait to hear their super special very important reason why this can't exist as an extension.
"Don't be snarky."
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
The best reason I've seen mentioned by the founder in this thread is showing/hiding panels, and the onboarding flow. Those are things you can't do with a plugin. I personally also like Cursor's diff view way better than Continue's, and maybe that's because a fork gives more control there.
no linux version? pfft. rookies
If I move off Cursor, it's def not going to be to another vs-code derivative. Zed has it right - build it from the ground up, otherwise, MS is going to kneecap you at some point.
Zed didn't build from the ground up though. I mean, they did for a lot of stuff, but crucially they decided to rely on the LSP ecosystem so most of the investment in improving Zed is also a direct investment in improving VSCode.
If you can't invest in yourself without making the same size investment in your competitor, you probably have no path to actually win out over that competitor.
The threat is Msft cutting you off from the ecosystem. That means growing an ecosystem and not merely the editor.
Great to see more people thinking this way, finally. Would be even better to see the same change wrt typescript, another MS trojan horse.
100%. Microsoft can't interfere with Zed.
Additionally, Zed is written in Rust and has robust hardware-accelerated rendering. This has a tangible feel that other editors do not. It feels just so smooth, unlike clunky heavyweight JetBrains products. And it feels solid and sturdy, unlike VS Code buffers, which feel like creaky webviews.
I created an OSS ai coding platform as well: https://brokk.ai
But it's a different take, Brokk is built to let humans supervise AI more effectively rather than optimizing for humans reading and writing code by hand. So it's not a VS Code fork, it's not really an IDE in the traditional sense at all.
Intro video with demo here: https://www.youtube.com/watch?v=Pw92v-uN5xI
Can it run in a GitHub action ?
What I want to be able to do is:
1. Create a branch called TaskForLLM_123
2. Add a file with text instructions called Instructions_TaskForLLM_123.txt
3. Have a GitHub Action read the branch, perform the task, and then submit a PR (a rough sketch of this step is below).
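A sketch of the glue that step 3 could run inside the Action after checking out the branch. The agent CLI here is a placeholder (aider, Claude Code, whatever you use); only the git and gh calls are real commands, and the file naming just follows the convention above.

    // Hypothetical glue script for step 3, run by a GitHub Action after checkout.
    // Reads the instruction file for the current TaskForLLM_* branch, hands it to
    // some coding agent CLI (placeholder), commits the result, and opens a PR.
    import { execSync } from "node:child_process";
    import { readFileSync } from "node:fs";

    const branch = execSync("git rev-parse --abbrev-ref HEAD").toString().trim();
    const instructionsFile = `Instructions_${branch}.txt`;
    const instructions = readFileSync(instructionsFile, "utf8");
    console.log(`Running task for ${branch}:\n${instructions}`);

    // Placeholder: swap in whatever agent you actually use.
    execSync(`your-agent-cli --task-file ${instructionsFile}`, { stdio: "inherit" });

    execSync(`git add -A && git commit -m "Agent changes for ${branch}"`);
    execSync(`git push origin ${branch}`);
    execSync(`gh pr create --title "${branch}" --body-file ${instructionsFile}`, {
      stdio: "inherit",
    });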
I’ve seen people do this with Claude Code to great success (not in a GH Action). Even multiple sessions concurrently. Token budget is the limit, obviously.
worktree + pr support is coming soon, in the meantime you gotta do it manually
Watched your Youtube. I love this - will try it out and give it to our team. This is effectively the "full mode" version of the mode I currently use Cursor for.
Sweet! I'd love to hear how it goes, hmu on https://discord.gg/QjhQDK8kAj
Yet another vscode fork…
[flagged]
Would you please stop posting like this?
We're trying for thoughtful, respectful discussion of people's work on this site. Snarky, nasty oneliners destroy that.
We detached this subthread from https://news.ycombinator.com/item?id=43928512.
Apologies for the snark, you're correct.
But I do stand by the point. We are seeing umpteen of these things launched every week now, all with the exact same goal in mind; monetizing a thin layer of abstraction between code repos and model providers, to lock enterprises in and sell out as quickly as possible. None of them are proposing anything new or unique above the half dozen open source extensions out there that have gained real community support and are pushing the capabilities forward. Anyone who actually uses agentic coding tools professionally knows that Windsurf is a joke compared to Cline, and that there is no good reason whatsoever for them to have forked. This just poisons the well further for folks who haven't used one yet.
Yes, the high-order bit is to avoid snark, so thanks about that. And it's clear that you know about this space and have good information and thoughts to contribute—great!
I would still push back on this:
> all with the exact same goal in mind
It seems to me that you're assuming too much about other people's intentions, jumping beyond what you can possibly know. When people do that to reduce things to a cynical endstate before they've gotten off the ground, that's not good for discussion or community. This is part of the reason why we have guidelines like these in https://news.ycombinator.com/newsguidelines.html:
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
I'll take my paddling. Thanks for the work you do here btw, truthfully.
[dead]
The time to sell a VSCode fork for 3B was a week ago. If someone wants to move off of VSCode, why would they move to a fork of it instead of to Zed, JetBrains, or a return to the terminal?
Next big sale is going to be something like "Chrome Fork + AI + integrated inter-app MCP". Brave is eh, Arc is being left to die on its own, and Firefox is... doing nothing.
Why no linux build? It is just vscode in ts right? And it is an electron app right?
We do have a linux build! (the link is at the bottom of the download page). Some systems are a bit finicky so we give more options on setting it up.
I gave it a go. Was having issues installing the AppImage, which is my preferred method, but extracting the .tar.gz (or the .deb) eventually worked.
There's a link for Linux at the bottom of the download page, which directs to the releases in GitHub
Presumably you could raise a PR...?
https://github.com/voideditor/void#readme