The best engineers I know delete more code than they write.

Junior engineers add features. Senior engineers remove complexity.

Every line of code you write is a liability. It needs to be maintained. It can break. It adds cognitive load to anyone who reads it later.

The best pull requests I've seen in the last year? Half of them deleted more than they added. Someone refactored three classes into one. Someone replaced 200 lines of custom logic with a library function. Someone removed an entire abstraction layer that wasn't pulling its weight.

Deletion is a skill. You have to know what's safe to remove. You have to understand the system well enough to see what's redundant, over-engineered, or just wrong.

Next time you open a file, ask: what can I remove? The best code is the code you don't write.
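A hypothetical before/after in Python of the "replaced custom logic with a library function" idea. Both functions and the input list are invented for illustration; the point is that the one-liner leans on stdlib behavior (`dict` preserving insertion order) instead of hand-rolled bookkeeping.

```python
# Hypothetical "before": hand-rolled order-preserving de-duplication.
def unique_before(items):
    seen = []
    result = []
    for item in items:
        found = False
        for s in seen:
            if s == item:
                found = True
                break
        if not found:
            seen.append(item)
            result.append(item)
    return result

# "After": same behavior from the standard library, one line.
# dict keys are unique and preserve insertion order (Python 3.7+).
def unique_after(items):
    return list(dict.fromkeys(items))
```

Same output, a fraction of the surface area to maintain, and nothing left for a future reader to misread.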
Best Programming Practices for Clean Code
Explore top LinkedIn content from expert professionals.
-
After 2,000+ hours using Claude Code across real production codebases, I can tell you the thing that separates reliable from unreliable isn't the model, the prompt, or even the task complexity. It's context management.

About 80% of the coding agent failures I see trace back to poor context: either too much noise, the wrong information loaded at the wrong time, or context that's drifted from the actual state of the codebase. Even with a 1M token window, Chroma's research shows that performance degrades as context grows. More tokens is not always better.

I built the WISC framework (inspired by Anthropic's research) to handle this systematically. Four strategy areas:

W - Write (externalize your agent's memory)
- Git log as long-term memory with standardized commit messages
- Plan in one session, implement in a fresh one
- Progress files and handoffs for cross-session state

I - Isolate (keep your main context clean)
- Subagents for research (90.2% improvement per Anthropic's data)
- Scout pattern to preview docs before committing them to main context

S - Select (just in time, not just in case)
- Global rules (always loaded)
- On-demand context for specific code areas
- Skills with progressive disclosure
- Prime commands for live codebase exploration

C - Compress (only when you have to)
- Handoffs for custom session summaries
- /compact with targeted summarization instructions

These work on any codebase, not just greenfield side projects. I've applied this on enterprise codebases spanning multiple repositories, and the reliability improvement is consistent.

I also just published a YouTube video going over the WISC framework in a lot more detail. It's packed with value! Check it out here: https://lnkd.in/ggxxepik
-
The 10 Rules NASA Swears By to Write Bulletproof Code:

1. Restrict to simple control flow
↳ No goto, setjmp, longjmp, or recursion. Keep it linear and predictable. This ensures your code is easily verifiable and avoids infinite loops or unpredictable behavior.

2. Fixed loop bounds
↳ Every loop must have a statically provable upper bound. No infinite loops unless explicitly required (e.g., schedulers). This prevents runaway code and ensures bounded execution.

3. No dynamic memory allocation after initialization
↳ Say goodbye to malloc and free. Use pre-allocated memory only. This eliminates memory leaks, fragmentation, and unpredictable behavior.

4. Keep functions short
↳ No function should exceed 60 lines. Each function should be a single, logical unit that's easy to understand and verify.

5. Assertion density: 2 per function
↳ Use assertions to catch anomalous conditions. They must be side-effect-free and trigger explicit recovery actions. This is your safety net for unexpected errors.

6. Declare data at the smallest scope
↳ Minimize variable scope to prevent misuse and simplify debugging. This enforces data hiding and reduces the risk of corruption.

7. Check all function returns and parameters
↳ Never ignore return values or skip parameter validation. This ensures error propagation and prevents silent failures.

8. Limit the preprocessor
↳ Use the preprocessor only for includes and simple macros. Avoid token pasting, recursion, and excessive conditional compilation. Keep your code clear and analyzable.

9. Restrict pointer use
↳ No more than one level of dereferencing. No function pointers. This reduces complexity and makes your code easier to analyze.

10. Compile with all warnings enabled
↳ Your code must compile with zero warnings under the most pedantic settings. Use static analyzers daily to catch issues early.

Some of these rules can seem hard to follow, but you can't allow room for error when lives are at stake. Which ones are you still applying?
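The original rules target C, but a few translate to any language. Here's an illustrative Python sketch of three of them — fixed loop bounds, parameter validation, and checked return values. The function, the bound, and the data are all invented for illustration, not NASA code.

```python
MAX_ITERATIONS = 1000  # fixed loop bound: a statically known upper limit

def find_sensor_reading(readings, target):
    # Validate parameters instead of trusting the caller
    # (side-effect-free assertions, per the assertion-density rule).
    assert readings is not None, "readings must not be None"
    assert len(readings) <= MAX_ITERATIONS, "input exceeds fixed bound"

    # The loop can never run more than MAX_ITERATIONS times,
    # no matter what the caller passes in.
    for i in range(min(len(readings), MAX_ITERATIONS)):
        if readings[i] == target:
            return i
    return -1  # callers must check this sentinel, never ignore it

# Checking the return value: a -1 ignored here would be a silent failure.
index = find_sensor_reading([10, 20, 30], 30)
assert index != -1, "reading not found; trigger explicit recovery"
```

In C you'd get the same guarantees with a `#define` bound, `assert()` from `<assert.h>`, and a compiler set to treat warnings as errors.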
#softwareengineering #systemdesign

👉🏻 Join 46,001+ software engineers getting curated system design deep dives, trends, and tools (it's free): https://lnkd.in/dCuS8YAt
-
When working with multiple LLM providers, managing prompts, and handling complex data flows, structure isn't a luxury; it's a necessity.

A well-organized architecture enables:
→ Collaboration between ML engineers and developers
→ Rapid experimentation with reproducibility
→ Consistent error handling, rate limiting, and logging
→ Clear separation of configuration (YAML) and logic (code)

Key Components That Drive Success

It's not just about folder layout; it's how components interact and scale together:
→ Centralized configuration using YAML files
→ A dedicated prompt engineering module with templates and few-shot examples
→ Properly sandboxed model clients with standardized interfaces
→ Utilities for caching, observability, and structured logging
→ Modular handlers for managing API calls and workflows

This setup can save teams countless hours in debugging, onboarding, and scaling real-world GenAI systems, whether you're building RAG pipelines, fine-tuning models, or developing agent-based architectures.

What's your go-to project structure when working with LLMs or Generative AI systems? Let's share ideas and learn from each other.
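A minimal sketch of the config/logic separation for prompts, using only the stdlib. The dict below stands in for a YAML file (e.g. one loaded with PyYAML); the `summarize` key, template text, and defaults are all invented for illustration. The point is that prompt wording lives in configuration while the rendering logic stays generic.

```python
from string import Template

# Stand-in for a prompts.yaml file; in a real project this dict would
# be loaded from disk so prompt changes need no code change.
PROMPT_CONFIG = {
    "summarize": {
        "template": "Summarize the following text in $max_words words:\n$text",
        "defaults": {"max_words": 50},
    },
}

def render_prompt(name, **overrides):
    """Look up a template by name and fill it; defaults can be overridden."""
    entry = PROMPT_CONFIG[name]
    params = {**entry["defaults"], **overrides}
    return Template(entry["template"]).substitute(params)

prompt = render_prompt("summarize", text="LLM apps need structure.")
```

The same lookup-then-render function serves every prompt in the config, which is what makes A/B testing and versioning prompts tractable later.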
-
Last night, I was chatting in the hotel bar with a bunch of conference speakers at Goto-CPH about how evil PR-driven code reviews are (we were all in agreement), and Martin Fowler brought up an interesting point: the best time to review your code is when you use it. That is, continuous review is better than what amounts to a waterfall review phase.

For one thing, the reviewer has a vested interest in assuring that the code they're about to use is high quality. Furthermore, you are reviewing the code in a real-world context, not in isolation, so you are better able to see if the code is suitable for its intended purpose. Continuous review, of course, also leads to a culture of continuous refactoring. You review everything you look at, and when you find issues, you fix them.

My experience is that PR-driven reviews rarely find real bugs. They don't improve quality in ways that matter. They DO create bottlenecks, dependencies, and context-swap overhead, however, and all of that pushes out delivery time and increases the cost of development with no balancing benefit.

I will grant that two or more sets of eyes on the code leads to better code, but in my experience, the best time to do that is when the code is being written, not after the fact. Work in a pair, or better yet, a mob/ensemble. One of the teams at Hunter Industries, which mob/ensemble programs 100% of the time on 100% of the code, went a year and a half with no bugs reported against their code, with zero productivity hit. (Quite the contrary: they work very fast.) Bugs are so rare across all the teams, in fact, that they don't bother to track them. When a bug comes up, they fix it, right then and there.

If you're working in a regulatory environment, the Driver signs the code, and then any Navigator can sign off on the review, all as part of the commit/push process, so that's a non-issue.

There's also a myth that it's best if the reviewer is not familiar with the code. I *really* don't buy that. An isolated reviewer doesn't understand the context. They don't know why design decisions were made. They have to waste a vast amount of time coming up to speed. They are also often not in a position to know whether the code will actually work. Consequently, they usually focus on trivia like formatting. That benefits nobody.
-
We brought down our API response time from 100+ ms to under 10 ms by making a simple perspective shift.

It all started with a performance bug we couldn't trace for weeks. We had built a system that worked well during testing. But under load, when we started handling millions of requests, it slowed to a crawl. Response times jumped from 50 ms to over 300 ms. And when you're operating at scale, even a few milliseconds can hurt user experience.

We checked everything: query optimization, load balancers, autoscaling. Nothing helped consistently.

Then one day, I sat down to look at our database access patterns. That's when it hit me. We were updating records. A lot. Millions of times a day. Updates are among the biggest resource hogs in a database, especially when you're touching the same record over and over. They lock rows, consume I/O, and eat CPU cycles.

So we flipped our mindset: design for inserts, not updates. We moved to an asynchronous model, using message queues like Kafka and RabbitMQ. Instead of blocking the API until everything completed, we queued requests and processed them in the background.

It slashed our API latency by 90%. We were over the moon. Because when you're serving millions of users, saving a few milliseconds isn't just a performance stat, it's a competitive edge.

Good code works. Great systems scale.

#engineering #technology #coding
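The pattern above can be sketched in a few lines of Python. This is a toy, not their system: an in-process `queue.Queue` stands in for Kafka/RabbitMQ, a list stands in for an append-only table, and all names are invented. The handler just enqueues an event and returns; a background worker applies it later, and current state is derived by folding over the events.

```python
import queue
import threading

events = queue.Queue()  # stand-in for Kafka/RabbitMQ
event_log = []          # stand-in for an append-only (insert-only) table

def handle_request(user_id, delta):
    events.put({"user_id": user_id, "delta": delta})  # fast: just an insert
    return "accepted"  # API responds before the write is applied

def worker():
    while True:
        event = events.get()
        if event is None:  # shutdown sentinel
            break
        event_log.append(event)  # insert-only: no row locks from updates
        events.task_done()

t = threading.Thread(target=worker)
t.start()
handle_request("u1", +5)
handle_request("u1", -2)
events.put(None)
t.join()

# Derive state by folding over events instead of mutating one row.
balance = sum(e["delta"] for e in event_log)
```

The API's latency is now the cost of an enqueue, and the contended row update disappears entirely; the trade-off is eventual consistency, since reads may lag the most recent writes.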
-
No, you won't be vibe coding your way to production. Not if you prioritise quality, safety, security, and long-term maintainability at scale.

Recently coined by former OpenAI co-founder Andrej Karpathy, "vibe coding" describes an AI-coding approach where developers focus on iterative prompt refinement to generate desired output, with minimal concern for the LLM-generated code implementation.

At Canva, our assessment, based on extensive and ongoing evaluation of AI coding assistants, is that these tools must be carefully supervised by skilled engineers, particularly for production tasks. Engineers need to guide, assess, correct, and ultimately own the output as if they had written every line themselves. Our experimentation consistently reveals errors in tool-generated code ranging from superficial (style inconsistencies) to dangerous (incorrect, insecure, or non-performant code).

Our engineering culture is built on code ownership and peer review. Rather than challenging these principles, our adoption of AI coding assistants has reinforced their importance. We've implemented a strict "human in the loop" approach that maintains rigorous peer review and meaningful code ownership of AI-generated code.

Vibe coding presents significant risks for production engineering:
- Short-term: introduction of defects and security vulnerabilities
- Medium to long-term: compromised maintainability, increased technical debt, and reduced system understandability

From a cultural perspective, vibe coding directly undermines peer review processes. Generating vast amounts of code from single prompts effectively DoS-attacks reviewers, overwhelming their capacity for meaningful assessment.

Currently we see one narrow use case where vibe coding is exciting: spikes, proofs of concept, and prototypes. These are always throwaway code. LLM-assisted generation offers enormous value in rapidly testing and validating ideas with implementations we will ultimately discard.
With rapidly expanding LLM capabilities and context windows, we continuously reassess our trust in LLM output. However, we maintain that skilled engineers play a critical role in guiding, assessing, and owning tool output as an immutable principle of sound software engineering.
-
A Few Lessons from Deploying and Using LLMs in Production

Deploying LLMs can feel like hiring a hyperactive genius intern: they dazzle users while potentially draining your API budget. Here are some insights I've gathered:

1. "Cheap" is a lie you tell yourself: Cloud costs per call may seem low, but the overall expense of an LLM-based system can skyrocket. Fixes:
- Cache repetitive queries: users ask the same thing at least 100x/day.
- Gatekeep: use cheap classifiers (e.g., BERT) to filter "easy" requests. Let LLMs handle only the complex 10% and your existing systems handle the remaining 90%.
- Quantize your models: shrink LLMs to run on cheaper hardware without massive accuracy drops.
- Asynchronously build your caches: pre-generate common responses before they're requested, or gracefully fail the first time a query arrives and cache the answer for the next time.

2. Guard against model hallucinations: Sometimes models express answers with such confidence that distinguishing fact from fiction becomes challenging, even for human reviewers. Fixes:
- Use RAG: just a fancy way of saying you provide the model the knowledge it requires in the prompt itself, by querying some database based on semantic matches with the query.
- Guardrails: validate outputs using regex or cross-encoders to establish a clear decision boundary between the query and the LLM's response.

3. The best LLM is often a discriminative model: You don't always need a full LLM. Consider knowledge distillation: use a large LLM to label your data, then train a smaller, discriminative model that performs similarly at a much lower cost.

4. It's not about the model, it's about the data it was trained on: A smaller LLM might struggle with specialized domain data; that's normal. Fine-tune your model on your specific dataset, starting with parameter-efficient methods (like LoRA or Adapters) and using synthetic data generation to bootstrap training.

5. Prompts are the new features: Treat prompts like features in your system. Version them, run A/B tests, and continuously refine using online experiments. Consider bandit algorithms to automatically promote the best-performing variants.

What do you think? Have I missed anything? I'd love to hear your "I survived LLM prod" stories in the comments!
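The cache-plus-gatekeeper combination from lesson 1 can be sketched with the stdlib alone. Everything here is a stand-in: `classify_easy` fakes the cheap classifier with a word count, and both handlers return canned strings instead of calling a real model or API.

```python
from functools import lru_cache

def classify_easy(query):
    # Stand-in for a cheap discriminative gate (the post suggests BERT);
    # here short queries are "easy" purely for illustration.
    return len(query.split()) <= 3

def cheap_handler(query):
    return f"canned answer for: {query}"

def expensive_llm(query):
    return f"LLM answer for: {query}"  # the costly call you want to avoid

@lru_cache(maxsize=1024)  # repeated queries hit the cache, not the API
def answer(query):
    if classify_easy(query):
        return cheap_handler(query)  # the easy ~90% never touches the LLM
    return expensive_llm(query)      # only the complex tail pays LLM cost

first = answer("reset password")
second = answer("explain our refund policy for annual plans")
cached = answer("reset password")  # second call: served from the cache
```

In production the cache would live in something shared like Redis rather than one process's memory, and you'd normalize queries before using them as cache keys, but the cost structure is the same: pay the LLM only for queries that are both novel and hard.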
-
Coding the correct, optimized approach and keeping the code bug-free is important, but so is making your code understandable and readable. All of us prefer reading well-written, clean code, right? Here are a few tips that can help you make your code more elegant and readable:

✅ Use meaningful names: Choose descriptive names for variables, functions, and classes so that it becomes easier for others to understand their purpose.

✅ Thoughtful comments: Wherever necessary, add comments to provide context or explain complex logic. Ideally, your code should be self-explanatory, and excessive comments should be minimised.

✅ Proper indentation and formatting: This one is very important, and I've seen interviewers emphasize it when they're assessing you.

✅ Reusable code: Remove duplicate code by creating reusable functions or using abstraction techniques. This reduces maintenance effort and maintains consistency.

✅ Write modular code: Break your code into smaller, independent modules, where each module has a clear purpose and is responsible for a specific task.

✅ Avoid long code lines: Long lines force the reader's eyes back and forth horizontally. Break them up with sensible wrapping and indentation.

Keep coding. All the best! ❤️
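A hypothetical before/after in Python showing a few of these tips at once (meaningful names, a reusable helper, modular structure). The discount scenario, names, and numbers are all invented for illustration; both versions compute the same thing.

```python
# Before: terse names, magic numbers, one blob of logic.
def f(l):
    t = 0
    for x in l:
        if x[1] > 100:
            t += x[1] * 0.1
    return t

# After: descriptive names, named constants, a small reusable helper.
DISCOUNT_RATE = 0.1
DISCOUNT_THRESHOLD = 100

def qualifies_for_discount(price):
    return price > DISCOUNT_THRESHOLD

def total_discount(order_items):
    """Sum the discount over all (name, price) items above the threshold."""
    return sum(price * DISCOUNT_RATE
               for _, price in order_items
               if qualifies_for_discount(price))

items = [("keyboard", 80), ("monitor", 250), ("laptop", 1200)]
```

Notice the second version needs no comments to explain *what* it does; the names carry that, which is the "self-explanatory code" ideal from the comments tip.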
-
High-quality code makes your work short-lived. Poorly written code ensures the company will always need your help. 😜

Funny, yet many people still follow this mindset. Here's the hard truth: across my career, from freshers to senior leaders, I've seen professionals who deliberately complicate work, avoid documentation, refuse to share knowledge, and quietly build a dependency around themselves. It's not incompetence; it's strategy. A strategy that slows teams down, breeds silos, and creates a dangerous single point of failure. And while it may offer short-term "job security," it kills long-term team health, innovation, and trust.

For leaders, these situations are the most challenging because the person often looks productive on the surface. But behind the scenes, the team becomes fragile and delivery risks multiply. In engineering, we avoid single points of failure in systems. We should avoid them in people too.

💡 Hard-Hitting Tips for Leaders to Fix This

1️⃣ Make knowledge sharing non-negotiable: Mandate documentation, code reviews, and walkthroughs. If knowledge lives only in someone's head, that's a risk, not a strength.

2️⃣ Remove dependency incentives: Reward collaboration, not silo-building. Make team outcomes matter more than individual heroics.

3️⃣ Rotate responsibilities: Let others touch the "critical" areas. If someone resists, that's a red flag, not loyalty.

4️⃣ Build a culture where transparency is expected: Open communication, shared ownership, and regular alignments reduce the power of hidden information.

5️⃣ Address the behaviour early: Silence is approval. The longer you let it grow, the harder it becomes to fix.

6️⃣ Make it safe for others to speak: Often the team knows who the blocker is, but they need psychological safety to raise concerns.

7️⃣ Lead by example: Leaders who share knowledge freely create teams that do the same.

Healthy teams grow when knowledge flows. Strong leaders rise when they dismantle silos.
And real progress happens only when success is shared — not hoarded. #Leadership #TeamWork #EngineeringCulture #TechLeadership #TeamDynamics #OrgCulture #KnowledgeSharing #GrowthMindset #PeopleManagement #LeadershipTips #CriticalResource #SoftwareEngineering #MunnaPrawin #BUMI #SmartLife