Cursor 1.7
cursor.com | 112 points by mustaphah | 4 hours ago
I got into Cursor a little late, went really heavy on it, and see myself using it less and less as I go back to VSCode.
1) The most useful thing about Cursor was always state management of agent edits: being able to roll back to previous states with the click of a button, reapply changes, preview edits, etc. But weirdly, it seems like they never recognized this differentiator, and indeed it remains a bit buggy, and some crucial things (like mass-reapply after a rollback) never got implemented.
2) Adding autocomplete to the prompt box gives me suspicion they somehow still do not understand best practices in using AI to write code. It is more crucial than ever to be clear in your mind what you want to do in a codebase, so that you can recognize when AI is deviating from that path. Giving the LLM more and earlier opportunities to create deviation is a terrible idea.
3) Claude Code was fine in CLI and has a nearly-identical extension pane now too. For the same price, I seem to get just as much usage, in addition to a Claude subscription.
I think Cursor will lose because models were never their advantage and they do not seem to really be thought leaders on LLM-driven software development.
(I work at Cursor)
1. Checkpoints/rollbacks are still a focus for us, though they're less used by those working with git. Could you share the bug you saw?
2. Autocomplete for prompts was something we were skeptical of as well, but found it really useful internally to save time completing filenames of open code files, or tabbing to automatically include a recently opened file into the context. Goal here is to save you keystrokes. It doesn't use an LLM to generate the autocomplete.
3. A lot of folks don't want to juggle three AI subscriptions for coding and have found the Cursor sub where they can use GPT, Claude, Gemini, Grok models to be a nice balance. YMMV of course!
Since we have cursor people joining, let me bring up my constant problems around applying code changes. For background, I mostly work with "chat":
1. The apply button does not appear. This used to be mostly a problem with Gemini 2.5 Pro and GPT-5 but now sometimes happens with all models. Very annoying because I have to apply manually
2. Cursor doesn't recognize which file to apply changes to and just uses the currently open file. Also very annoying, and once the changes have been applied to the wrong file there's no way to redirect them to the file I actually wanted.
> (I work at Cursor)
I find the amount of credits included in the pro subscription per month totally insufficient. Maybe it lasts 1-2 weeks.
Today I got a message telling me I exhausted my subscription when the web dashboard was showing 450/500. Is there a team level constraint on top of individual ones?
Addressing 2) first: That's good, I totally misunderstood then, and guess I'll need to try it to understand what's new since I thought that kind of tabbing had been there a while.
Back to 1): The type of bug I see most often is where conversation history seems incomplete, and I have trouble rolling back to or even finding a previous point that I am certain existed.
Git shares some features, but I think Git was not made for the type of conversational rapid prototyping LLMs enable. I don't want to be making commits for every edit in some kind of parallel git state. Cursor's rollback and branching conversations make it easy to back up if a given chat goes down the wrong path. Reapply is tedious since it has to be done one edit at a time; it would be nice if you could roll forward.
I haven't put much thought into what else would be useful, but in general the most value I get from Cursor is simplifying the complex state of branching conversations.
FWIW, my workflow with git is to stage the changes I want to keep after every prompt. Then I can discard the working area after a bad prompt, or stage individual changes before discarding the rest. Works really nicely for me.
Yeah I've used staging/stashing similarly, but it feels like working around Cursor rather than with it
I tried vanilla vs code again three weeks ago. The tab complete was so slow and gave such poor results that I had to crawl back to Cursor after not even a full week. Cursor is sooo fast and useful in comparison.
> Claude Code was fine in CLI and has a nearly-identical extension pane now too.
One of us is wrong here. Last I checked, the extension pane was a command line that doesn't use macOS keybindings, reimplements common controls, uses monospaced text for prose, etc.
I don't particularly mind the last two, but the fact that Cmd-A on my Mac highlights all the text in the Claude Code interface, rather than just the text in the text box, is annoying.
Cursor seems to give me access to a lot of models for a single fee. I would love to just pay for Claude and maybe ChatGPT or Grok, but it seems like that's more expensive than Cursor.
I totally agree on point 1. Being able to make piecewise, limited updates with AI was a sweet spot, but they keep pushing towards "AI changes hundreds of lines across dozens of files" type edits. I bought into Cursor heavily from the off, and I've seen it create auth bypasses, duplicate component libraries and break ORM models. I know what I want to do, I just want it to happen faster and in a way I can control and direct, and that's not the direction Cursor seems to be going.
I’ve actually gone back to neovim, copying in snippets from ChatGPT. I don’t think I’ve given up anything in speed.
I do find the agents useful sometimes, but Cursor is most useful when it gives me visibility and granular control over what the agents are doing. Trying to follow a verbose narrative through a narrow chat pane is not it.
What I hate about Cursor is that - even when I have credits left - it still hangs for a full minute before starting to respond. Not always, but often enough. You don't know what you're buying, you might get fast response, you might get a tortoise.
VSCode has a bit of a history now of quickly obsoleting competitors who innovate in this space. It already has good options for code completion and AI chatbots, with more features on the horizon. I'm not sure what Cursor's moat is. Seems to me like Microsoft could easily implement any new feature Cursor comes up with.
For me, the best kind of "moat" (tbh I hate that word, since it implies deliberately scheming to engineer some kind of user lock-in, which is inherently user-hostile) would be staying aggressively at the forefront of DX. More than feature churn, keeping the product polished and seamless and keeping a smile on my face as I work is the best kind of "moat."
It requires constant attention and vigilance, but that's better for everyone than having some kind of "moat" that lets them start coasting or worse— lets them start diverting focus to features that are relevant for their enterprise sales team but not for developers using the software.
Companies really should have to stay competitive on features and developer happiness. A moat by definition is anti-competitive.
Cursor was good for a little while until VSCode opened up the APIs for AI editing. Now Copilot is really good and other extensions (specifically Kilo Code) are doing things so much better!
I am seeing a lot of folks talking about maintaining a good "Agent Loop" for doing larger tasks. It seems like Kilo Code has figured it out completely for me. Using the Orchestrator mode I'm able to accomplish really big and complex tasks without having to design an agent loop or hand-craft context. It switches between modes and accomplishes the tasks. My AGENTS.md file is really minimal, like "write test for changes and make small commits".
I feel like I've hit a sweet spot for my use case, but am so behind the times. I've been a developer for 20 years and I'm not interested in vibe coding or letting an agent run wild on my full code base.
Instead, I'll ask Cursor to refactor code that I know is inefficient. Abstract repetitive code into functions or includes. Recommend (but not make) changes to larger code blocks or modules to make them better. Occasionally, I'll have it author new functionality.
What I find is, Cursor's autocomplete pairs really well with the agent's context. So, even if I only ask it for suggestions and tell it to not make the change, when I start implementing those changes myself (either some or all), the shared context kicks in and autocomplete starts providing suggestions in the direction of the recommendation.
However, at any time I can change course and Cursor picks up very quickly on my new direction and the autocomplete shifts with me.
It's so powerful when I'm leading it to where I know I want to go, while having enormous amounts of training data at the ready to guide me toward best practices or common patterns.
I don't run any .md files though. I wonder what I'm missing out on.
Abstraction for abstraction's sake is usually bad. What you should aim for is aligning it to the domain so that feature change requests are proportional to the work that needs to be done. Small changes, small PRs.
Did something change with Kiro, or was I just using it wrong? I tried to have it make a simple MCP server based on docs, and it seriously spent 6 hours without making a basic MVP. It looked like the most impressive planner and executor while working, but it just made a mess.
Cursor will soon be irrelevant. The one thing it excels at is autocomplete, and that's ironically the one feature they bought out and integrated (Supermaven) instead of developing themselves, which sums it up quite well. It's still making good money from having been first to market, but its market share has been in freefall for ages, and the decline is only accelerating.
The agentic side is nothing special and it's expensive for what you get. Even if you're the exact target audience - don't want CLI, want multiple frontier models to choose from for a fixed monthly price - Augment is both more competent and ends up cheaper.
Then for everyone else who is fine with a single model, Claude Code and now Codex are obviously better choices. Or, for those who want cheaper and faster via open-weights models, there's Opencode and Kilo.
The mystery is that the other VC backed ones seemingly don't care or just don't put enough resources into cracking the autocomplete code, as many are still staying with Cursor purely for that - or were until CC became mainstream. Windsurf was making strides but now that's dead.
> and that's ironically the one feature they bought out and integrated (Supermaven) instead of developing themselves
What? Cursor bought Supermaven last November and I have been using their much superior (compared to GH Copilot) completion since maybe early last year so it does not add up.
Do not underestimate enterprise customers who buy Cursor for all their employees. Cursor will become legacy tech soon, yes. But it will be slow death, not a crash.
I use Cursor because I found their autocomplete to be the best option at the time. That seemed to be the consensus at one point too from bits of research I did.
Do people think there are better autocomplete options available now? Is it a case of just using a particular model for autocomplete in whatever IDE you want to use?
Right. Yesterday I tried a simple task that just adds Required[] notation to all class fields. After making the change on one field, Cursor lets me press tab to update all the other fields. VSCode doesn't understand what I was trying to do after the first operation, which is surprisingly bad (no improvement after months). Also, I'm not in favor of the conversational experience of Claude Code or other CLIs for such a trivial task. I'd be happy to know what else can provide a better user experience than Cursor.
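For what it's worth, the kind of repetitive edit I mean looks roughly like this, assuming the Required[] notation refers to Python's typing.Required on TypedDict fields (the class and field names here are made up for illustration):

    # Hypothetical example: after converting the first field by hand,
    # a good tab model should offer the same transformation for the rest.
    # typing.Required needs Python 3.11+ (or typing_extensions).
    from typing import TypedDict, Required

    class UserPayload(TypedDict, total=False):
        name: Required[str]    # edited manually
        email: Required[str]   # suggested via tab
        age: Required[int]     # suggested via tab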
Disclaimer: I get enterprise level subscriptions to these services via my employer. I personally don't pay for them and never consider their cost, if that matters.
Same! Just tried Copilot heavily for a few days and the autocomplete is terribly slow and clunky.
Overall I do like VSCode better, but Cursor's blazing-fast and intelligent autocomplete is awesome; I'll probably stick with Cursor.
Btw, I find the review / agent code stuff pretty bad on both. No idea how people get them working well.
I agree, they bought out Supermaven which was amazing, but now Supermaven is dead and I want something in Rider.
I don't think Supermaven is dead...
Even though it is also part of Cursor, you could subscribe to the $10/month Pro plan and use it in Jetbrains IDEs like Rider.
Today I found myself wondering why GitHub Copilot is so limited, without any autocomplete whatsoever in the agent input.
For some reason, CLIs feel better as coding agent UIs. I loved Cursor at first, but now with Claude Code, it feels like Cursor's UI gets in the way.
The reason is abundantly clear. Cursor was just a GPT wrapper with a nice UI/UX (which was very nice when it came out); it has some models of its own, like autocomplete, but it's still a wrapper. OpenAI and Anthropic build and train models specifically to work via CLI driven processes, which is why they are so much better now. Cursor is basically dead, as I'm sure they realized they get much better performance with the CLI/agentic approach.
> OpenAI and Anthropic build and train models specifically to work via CLI driven processes,
Cursor agents open terminals just fine in VSCode, and that's a major part of how Cursor works.
I personally coded in the VSCode text editor prior to Cursor (left Vim a while ago) and prefer to stay in the context of a desktop text editor. I find it's easier to see what's changing in real time, with a file list, file tabs, top-level and inline undo buttons, etc.
I've even cut tabbing out to a separate terminal by about 50%: I learned to use VSCode terminals to run tests and git commands, which works well once you learn the shortcuts and integrate it with some VSCode test-runner extensions. Plus, Cursor added LLM autocomplete to terminal commands, which is great. I don't need a separate CLI tool or Bash/zsh script in the terminal to recall commands whose arguments I've forgotten.
(I work at Cursor)
We also have a CLI, if you prefer coding in the terminal. We've seen this useful for folks using JetBrains or other IDEs: https://cursor.com/cli
I'd love in-terminal autocomplete support with Vim or Helix. I can't stand VSCode but cursor autocomplete is 95% of my cursor usage.
I could be wrong about this, but it feels like Cursor is less and less compelling with better models and better CLI tools popping up. Are the plan limits generous enough that it's worth a spin?
Again, I haven't used Cursor in a while, I'm mostly posting this hoping for Cunningham's Law to take effect :)
Cursor is your best option if you want to switch models frequently, run multiple agents in parallel, and also have the best tab complete out there. And you're still getting extra vc-funded tokens. You get ~$40 worth of tokens at API costs for the $20 plan.
Idk, seems worth it to me. If you're shelling out for one of the $200 plans maybe it's not as worth it, but it just seems like the best all-in-one AI product out there.
Except for the autocomplete, it's not the best option even for the user you're describing.
I find Cursor at the same level as Claude code, with some strengths and some weaknesses. Cursor is nice when I want to start multiple parallel agents, while browsing files, monitoring the progress, and switching models as needed. It’s just a simple, zero config environment i can just start using intuitively.
Claude Code is more reliable and generally better at using MCP for tool calls, like pulling docs from Context7. So if I had only one prompt and it HAD to make something work, Claude Code would be my bet.
Personally I like jumping between models and IDEs, if only to mix it up. And you get a reminder of different ways of doing stuff.
I tried Claude Code once and half an hour later it printed $10 cost. I thought I was using the pro subscription, not the API. This makes using CC dangerous, so I am avoiding it.
I'm currently flying, and using Cursor. I have my model set to Sonnet-4, and it keeps bugging me that my usage is going to end on 10/21, 10/19, 10/13, 10/08, after just a couple hours of VERY slow LLM usage.
I wouldn't even bother with it, but the MCP coding tool I built uses Claude Desktop and is Windows-only, and my laptop is macOS. So I'm using Cursor, and it is WAY WORSE than my simplest of MCP servers (which literally just does dotnet commands, filesystem commands, and GitHub commands).
I think having something that is so general like cursor causes the editor to try too many things that are outside what you actually want.
I fought for 2 hours and 45 minutes while Sonnet-4 (which is what my MCP uses) kept inventing worse ways to implement OpenAI Responses using the OpenAI-dotnet library. Even switching to GPT-5 didn't help. Adding the documentation didn't help. I went to claude in my browser, pasted the documentation, and my class I wanted extended to use Responses, and it finished it in 5 minutes.
The Cursor "special sauce" seems to be a hindrance nowadays. But beggars can't be choosers, as they say.
As everyone with half a brain predicted, their pricing was never meant to last. Their "limit" (base plan) is now just $20 in API credits, at slightly higher than provider token price. Sometimes they let you go a little over, but I'm not sure if that's still true.
I wish Cursor would let you see how much usage in dollar terms you've accumulated for the month. It's really hard to see: the dashboard shows individual charges and token counts, but there's no cumulative total. I haven't been able to find a way to see how much of my included usage has been consumed besides downloading the CSV and manually summing. They just give you a very unhelpful "You will use your included credits by X date".
I suppose this is by design so you don't know how much you have left and will need to buy more credits.
(I work at Cursor)
We added usage visibility in the IDE with v1.4: https://cursor.com/changelog/1-4#usage-and-pricing-visibilit.... By default, it only shows when you are close to your limits. You can toggle it to always display in your settings, if you prefer.
You mean this page? https://cursor.com/dashboard?tab=usage
It doesn't show cumulative does it? For me it just shows my plan name and itemized token usage but no "You used x out of y of your included credits"
It shows me $XX/$YYY at the very top, on the right side. /shrug
That's on-demand usage, not your plan usage. You get Y credits every month before you start using on-demand usage; that $XX/$YYY is how much of your on-demand usage limit you've used.
OK I'd love to know more about how they implemented this: https://cursor.com/changelog/1-7#sandboxed-terminals
"Commands now execute in a secure, sandboxed environment. If you’re on allowlist mode, non-allowlisted commands will automatically run in a sandbox with read/write access to your workspace and no internet access."
I only use VSCode Copilot in Agent mode with either Claude Sonnet 4 or GPT-5. Am I missing out on anything?
I'd love to hear from folks who mainly use Claude Code on why they prefer it and how they compare. It seems to be the most popular option here in HN, or at least the most frequently mentioned, and I never quite got why.
I always preferred the deep IDE integration that Cursor offers. I do use AI extensively for coding, but as a tool in the toolbox, it's not always the best in every context, and I see myself often switching between vibe coding and regular coding, with various levels of hand-holding. And I do also like having access to other AI providers, I have used various Claude models quite a lot, but they are not the be-all-end-all. I often got better results with o3 and now GPT-5 Thinking, even if they are slower, it's good to be able to switch and test.
I always felt that the UX of tools like Claude Code encourages you to blindly do everything through AI; it's not as seamless to dig in and take more control when it makes sense to do so. That being said, they are very similar now, they all constantly copy each other. I suppose for many it's just inertia as well, simply about which one they tried first and what they are subscribed to; to an extent that is the case for me too.
It's primarily the simplicity with which I can work on multiple things. Claude code is also very good with using tools and stuff like that in the background so I just use a browser MCP and it does stuff by itself. I hook it up to staging bigquery and it uses test data. I don't need to see all these things. I want to look at a diff, polish it up in my IDE, and then git commit. The intermediate stuff is not that interesting to me.
This suddenly reminded me that I have a Cursor subscription so I'm going to drop it.
But of course if someone says that Cursor's flow suddenly 2x'd in speed or quality, I would switch to it. I do like having the agent tool be model hotpluggable so we're not stuck on someone's model because their agent is better, but in the end CC is good at both things and codex is similar enough that I'm fine with it. But I have little loyalty here.
I don't think we are in a phase where we can confidently state that there's a correct answer on how to do development, productivity self reports are notoriously unreliable.
At least personally, the reason why I prefer CLI tools like Claude and Codex is precisely that they feel like yet another tool in my toolbox, more so than with AI integrated in the editor. As a matter of fact I dislike almost all AI integrations and Claude Code was when AI really "clicked" for me. I'd rather start a session on a fresh branch, work on something else while I wait for the task to be done, and then look at the diff with git difftool or IDE-integrated equivalent. I'd argue you have just as much control with this workflow!
A final note on the models: I'm a fan of Claude models, but I have to begrudgingly admit that gpt-5-codex high is very good. I wouldn't have subscribed just for the gpt-5 family, but Codex is worth it.
Maximalists who find value in "deep IDE integration" and go on about it also enjoy meetings.
I think personally I really like Claude, but our company has standardized on Cursor. Both are very good. I do like the tab completion. The "accept/undo" flow of Cursor is really annoying for me. I get why it's there, but it just seems like a secondary layer on top of Git. I usually get everything into a completely committed state so I can already see all my changes through the standard git management features of "VSCode".
I think Claude's latest VSCode plugin is really great, and it does make me question why Cursor decided to fork instead of make a plugin. I'd rather have it be a plugin so I don't have to wipe out my entire Python extension stack.
I've been using this editor more and to be honest, I love it so much. Yet to try Claude code though.
Since finding the Claude Code extension that runs in VS Code/Cursor, I use Cursor less and less. With git and Claude Code, rolling back and forth is a breeze. Cursor is cooked, as the cool kids say nowadays. They need to adapt and find a moat.
None of my experiences with cursor lately would ever give me confidence for letting it do a task that took long enough for it to be backgrounded.
Caught Claude 4.5 via Cursor yesterday trying to set a password to “password” on an outward facing EC2 service.
Curious what your case was for using Claude to set passwords on EC2 instances. Terraform, CDK, something else?
I really don't understand Cursor's 30-billion-dollar valuation (half of Anthropic's). I use it; it's not a bad tool by any means, but it's so buggy from version to version. The latest bug I had: it completely stopped keeping my zsh state in the terminal, and I had to downgrade. And honestly, I'm not sure what secret sauce is worth 30 billion dollars. The agent loop? Others can do that. The autocomplete?
Yeah I have to imagine it's because of user base, there's no real moat in the technology
Not only that, but OpenAI and Anthropic have their own foundation models/agents trained to work via CLI, which will basically always be better than Cursor's GPT-wrapper approach.
Was spending a lot with Cursor switching between Sonnet and Opus 4.1, like $1,500 to $2k a month. Was doing a lot of tabs in parallel, of course. Output was like 5k lines on a good day. (Lines aren't the best measurement, but they're a yardstick against feature testing and rework.)
Now with gpt-5-codex and the Codex VS Code extension, I'm getting through up to 20k lines of changes in a day, again with lots of parallel jobs; but Codex allows for less rework.
The job of the "engineer" has changed a lot. At 5k lines I was not reviewing every detail, but it was possible to skim over what had changed. At 20k it's more about looking at logs, performance, architecture, and observing features; less code is reviewed.
Maybe soon just looking at outcomes. Things are moving quickly.
Sounds like a different use case than Cursor. Editing that many files/lines probably scales better with a CLI tool. Cursor makes more sense for day-to-day maintenance and heavy hand-holding feature development coding.
If I was building a new project from scratch I'd probably use a CLI tool to manage a longer TODO easier. But working on existing legacy code I find an IDE integration is more flexible.
Nice to see image files being read without having to paste them, and team rules. Cursor has been extremely helpful the last few months but increasingly more expensive. I spent almost $300 last month and had a lot of frustrating experiences, so now I'm transitioning to Claude Code in VS Code.
$300!? You could literally switch to any of the dozen competitors and it'd be cheaper than that for at least the same quality, good god.
Like many others I was very pro cursor a year or so ago, but unfortunately since then 3 significant things have severely impacted its appeal:
VS Code accepted the challenge and upped its game.
Claude Code changed the game.
Cursor's own heavy value decrease (always part of the strategy but poorly communicated and managed) hit Cursor users hard when the cheap premium tokens honeymoon ended in recent months.
Existing users are disappointed, potential new users no longer see it as the clear class leader, because it isn't.
with Codex and Claude Code there is no reason to use Cursor
Anyone have good recommendations for plugins integrating things like LM Studio or Ollama into Visual Studio or Jetbrains IDEs? I'd like to do more local AI processing on code bases instead of always relying on outside providers, but a lot of these things like Copilot and Cursor seem so well integrated into the IDE.
Copilot in VSCode supports local models through Ollama as well. Not sure about Copilot in Visual Studio. One of the most annoying things is that VS is always behind VSCode in terms of Copilot features.
JetBrains native AI assistant supports Ollama out of the box. No need for a 3rd party plugin anymore.
See https://www.jetbrains.com/help/ai-assistant/use-custom-model...
Off topic, but does anyone understand why Apple's reader mode is so bad? This post is an example of reader mode not displaying section titles. I see this pretty frequently, even in my own blog, and haven't been able to figure out how to beat its flawed logic.
Why waste precious milliseconds typing complete sentences to your AI coding assistant? With autocomplete in the prompt box, we've solved the most pressing problem facing developers today: prompt fatigue.
Gone are the days of exhausting yourself by typing full requests like "refactor this function to use async/await." Now, simply type "refac—" and let our AI predict that you want an AI to refactor your code.
It's AI all the way down, baby.
You write like that's a bad thing. What's with the negativity here..
>> What's with the negativity here
The builders are quietly learning the tools, adopting new practices and building stuff. Everyone else is busy criticizing the tech for its shortcomings and imperfections.
Look, I use AI regularly. I value AI.
It's not a criticism of AI, broadly, it's commentary on a feature designed to make engineers (and increasingly non-engineers) even lazier about one of the main points of leverage in making AI useful.
Autocomplete is one of Cursor's most popular features, and is cited as the only reason some people continue to use it. And you're mocking the Cursor team for adding it to the one place where devs still type a lot of text, and making a value judgment by calling it lazy.
It’s obviously farcical.
Anyone seriously using these tools knows that context engineering and detailed specific prompting is the way to be effective with agent coding.
Just take it to the extreme and you'll see: what if you autocomplete from a single word? A single character?
The system you're using is increasingly generating some random output instead of what you were either a) trying to do, or b) told to do.
It's funny because it's like, “How can we make vibe coding even worse?”
“…I know, let's just generate random code from random prompts”
There have been multiple recent posts about how to direct agents using a combination of planning step, context summary/packing, etc to craft detailed prompts that agents can effectively action on large code bases.
…or yeah, just hit tab and go make a coffee. Yolo.
This could have been a killer feature about using a research step to enhance a user prompt and turn it into a super prompt; but it isn't.
What's wrong with autocompleting the prompt? There's entropy even in the English language, and especially in the prompts we feed to LLMs. If I write something like “fix the ab..” and it autocompletes to AbstractBeanFactory based on the context, isn't it useful?
> adding it to the one place where devs still type a lot of text
Because that's where the text the devs type still matters most.
Do I care significantly about this feature's existence, and find it an affront to humanity? No.
But, people who find themselves using auto-complete to make even their prompts for them will absolutely be disintermediated, so I think it wise to ensure people understand that by making funny jokes about it.
You come across as smug but there really is value in this. Let’s get rid of autocorrect in ChatGPT while we are at it? Same logic right?
[flagged]
It's funny to imagine this AI based autocomplete prompting when the interface isn't a keyboard but a brain chip. Effectively mind control.
It's already been here for a long time actually. Think google search auto completion of prompts. You're looking for something that might have biases on either side, and you are only shown autocomplete entries for a specific bias.
You’re absolutely correct! This comment was more negative than it could be. Would you like me to rewrite it to demonstrate more positivity?
Swipe right if you vibe with the AI suggestion, swipe left if not.
With Meta's wristband, you can save some finger and arm movement as well.
No joke—out of all tech products announced in the last ~year, that wristband is what excites me the most.
> It's AI all the way down, baby.
This brings up an interesting point that's often missed, IMO. LLMs are one of the few things that work on many layers, such that once you have a layer that works, you can always add another abstraction layer on top. So yes, you could very well have a prompt that "builds prompts" that "builds prompts" that ... So something like "do x with best practices in mind" can turn into something pretty complex and "correct" down the line of a few prompt loops.
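As a toy example of that layering, a prompt-expansion loop might look something like this (hypothetical sketch; the helper name, model choice, and wording are arbitrary, and the OpenAI Python SDK is used only as an example):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def expand_prompt(instruction: str, layers: int = 2) -> str:
        # Each pass asks the model to turn a terse instruction into a more
        # detailed prompt, which then feeds the next pass.
        prompt = instruction
        for _ in range(layers):
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # arbitrary choice for the sketch
                messages=[{
                    "role": "user",
                    "content": "Rewrite the following instruction as a detailed, "
                               "unambiguous coding prompt with acceptance criteria:\n\n"
                               + prompt,
                }],
            )
            prompt = response.choices[0].message.content
        return prompt

    print(expand_prompt("do x with best practices in mind"))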
I hate writing prompt starters for AI, I wish I had a tool that automatically started sentences so that my AI could autocomplete it
lowkey typing is so cumbersome though they should make an ai model that can read my thoughts and generate a prompt from them so i don't have to anymore
You are thinking too small. AI should be able to determine what my thoughts should be and execute them so I don't have to spend my precious time actually thinking.
[dead]