There is a strange thing that I noticed as I progressed from a junior engineer to mid-career to senior. When you’re a new engineer, still learning the basics, you are given a task or you have an idea, and you proceed to implement exactly that idea. Very simple, point A to point B, whatever gets the job done. You hate your code later but you got it to work. Then you learn some stuff– you learn about object-oriented design & algorithms & design patterns & frameworks & abstractions & higher-order functions & monoids & whatever else you found on Hacker News. And you’ll start new projects, and you’ll think you should use some of those things, and you may notice that you don’t finish things as quickly, or not at all. Someday you find yourself on a team of engineers & it will seem like the whole team is getting less done than you got done on your own as a beginner who didn’t know anything.
The thing that gets lost at this stage is that, with a new task of any complexity, you still just need to code the shortest path from point A to point B. At this point it can help to explicitly ignore thinking about the best abstraction to use or The Right Way of Doing Things! That can come later. If it’s a true greenfield project you are “prototyping”; if it’s part of an existing project you are making a “tracer bullet”. The concept is the same– do the minimum work to get from point A to point B. Point A is the current state of the code, point B is code that fulfills the feature requirements. It might be ugly code, but if it fulfills the requirements you have succeeded. Do not factor your code too early!
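To make the “shortest path” idea concrete, here is a minimal sketch of what a tracer-bullet version of a hypothetical feature might look like– say the requirement is “export user records as CSV”. The function name, field names, and data shape are all invented for illustration; the point is that the first version just does the job directly, with no exporter interface or format-plugin abstraction:

```python
import csv
import io

def export_users_csv(users):
    """Shortest-path version: take the list of dicts we already have
    and write it straight to CSV. No exporter interface, no pluggable
    formats -- that abstraction can come later, if it's ever needed."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name", "email"])
    writer.writeheader()
    for user in users:
        # Copy only the fields we care about; ignore anything extra.
        writer.writerow({k: user[k] for k in ("id", "name", "email")})
    return buf.getvalue()

users = [{"id": 1, "name": "Ada", "email": "ada@example.com"}]
print(export_users_csv(users))
```

If the requirements later grow to JSON or XLSX export, *that* is the moment to sculpt an abstraction– and by then you’ll know what shape it should take.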
This may seem obvious but it can get lost, especially when working on a sizeable team– there is always someone suggesting to pause and spend days setting up a CI/CD pipeline, or use a cool new library they just found, or refactor some module so it’s more DRY/decoupled. These people are not strictly wrong, but they’re not completely right. Do the requirements dictate any of those things? Have you solved the hardest part of the task at hand? If not, don’t do that other crap.
The reason you should do things in this order is that you might refactor the module or set up the CI environment, and then learn that your whole approach to the problem was wrong. You have to throw it out & start over, & a lot of that other work is now wasted. This is why work needs to be done holistically at the start– the pressure to keep people occupied & parallelize work will cause you to prematurely divide the problem along arbitrary boundaries. The proper abstractions that best delineate roles do not necessarily emerge on the first try; it’s better to sculpt them into place over repeated encounters with the problem space. This is why you want to be on a small team that takes ownership of its entire piece of the problem space– the same people need to have repeated encounters with the problem in order for the right patterns to emerge.
Now this can lead to a problem: You have a crap piece of code on a feature branch that accomplishes exactly the minimum set of requirements at hand. Maybe you demoed it to Product. Can you ship it? No. Now you have to allocate time to refactor the ugly parts, write tests, etc. This will sculpt your sketchy prototype/tracer bullet into something shippable. This can be tricky to negotiate when interfacing with product management; the right approach depends on the relationship. Make your PM aware of the undiscovered parts of the problem, explain exactly what you’re trying to learn, or hide the task in some kind of opaque “prototyping” bucket. You can’t allow a product team with tight deadlines to kill or forever sweep under the rug essential aspects of the SDLC– if it’s software that’s going to ship, it needs tests, it needs CI/CD, & you might need to refactor a few things so the code makes sense to future you. It probably shouldn’t land on
main without those things, but you shouldn’t be doing all of your development as tho it has to land on main.
Like everything else, this is a judgement call. It’s possible nothing about the problem at hand is that complicated: you already know what the most logical division of labor is, so the boundaries you draw are not arbitrary. In that case you can parallelize, you can work prescriptively. But anytime you’re attempting a problem that is new and unexplored, if the probability of failure is greater than 20% or so, code the shortest path first.