Using AI
These are some practices I've observed to work well when using AI in my own projects, in certain contexts.
The best way to let an LLM perform actions is to limit what it can do in the first place. This is essentially what MCP is, but since I dislike confusing terms: it amounts to giving the model a JSON schema specification and letting a platform interpret that schema, transforming it into some action. JSON schemas are not always feasible: they are nested and long, which eats up token usage. This is where I, in a personal use case, decided to go for a small DSL, with a simple transpiler to interpret it and emit Python code or error messages. A much more effective approach than asking the model to generate raw Python.
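To sketch the idea, here is a minimal line-based DSL transpiler. The grammar (whitespace-separated verbs with key=value arguments) and the command names are hypothetical examples, not the DSL I actually use; the point is that unknown verbs or arguments become error messages the model can read and retry on, instead of arbitrary Python:

```python
# Whitelist of allowed verbs and the argument names each accepts.
# These commands are illustrative placeholders.
ALLOWED = {"fetch": {"url"}, "save": {"path", "data"}}

def transpile(dsl: str) -> str:
    """Turn DSL lines like 'fetch url=x' into Python call expressions,
    or raise ValueError with a message suitable for feeding back to the model."""
    out = []
    for lineno, line in enumerate(dsl.strip().splitlines(), 1):
        verb, *pairs = line.split()
        if verb not in ALLOWED:
            raise ValueError(f"line {lineno}: unknown command {verb!r}")
        args = dict(p.split("=", 1) for p in pairs)
        unknown = set(args) - ALLOWED[verb]
        if unknown:
            raise ValueError(f"line {lineno}: bad args {sorted(unknown)}")
        # Emit a call to a (platform-provided) function of the same name.
        kwargs = ", ".join(f"{k}={v!r}" for k, v in sorted(args.items()))
        out.append(f"{verb}({kwargs})")
    return "\n".join(out)
```

So `transpile("fetch url=x")` yields `fetch(url='x')`, while `transpile("delete path=x")` raises an error naming the unknown command. The model only ever sees the DSL; the whitelist is what limits what it can do.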
Being detail oriented. This is hard, as it takes a significant amount of energy, but it is crucial. Every button's alignment, every component's behaviour, and how the model writes code itself should be done your way. Dictate everything: every library, every folder structure, every design pattern. Ambiguity is fine for unserious projects, where you do not care much or simply want to explore, but as soon as you set your mind to completing something, your opinions and decisions should be sprinkled throughout the codebase.
This is a classic, but higher-quality input gives you higher-quality output. It applies to how you prompt (e.g. asking it to write a types file first, or to use Zod every time), to which files and images you give it, and, in the case of RAG, to how you structure your data (whatever can be put into relational/NoSQL databases should go there, and the rest into a graph database; this is obviously not for one-off use cases). This is also where serious use of a CLAUDE.md or AGENTS.md file comes in, and creating the scaffolding for a project yourself goes a long way toward telling the model how you want your project to be structured.
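As an illustration, a CLAUDE.md (or AGENTS.md) can pin down exactly the kind of decisions mentioned above. The rules below are hypothetical examples, not a recommended set:

```
# Project conventions
- TypeScript strict mode; validate all runtime input with Zod.
- All shared types live in src/types.ts; do not define types inline.
- Folder structure: src/routes, src/components, src/lib. Do not add
  new top-level folders without asking.
- Prefer small, pure functions; no classes unless a library requires them.
```

Each line removes one ambiguity the model would otherwise resolve on its own.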