Effective TypeScript Principles in 2025
Published: Mar 16, 2025
Last updated: Mar 16, 2025
Some guidelines for how I want to write TypeScript in 2025. Feel free to take it or leave it. Always remember that opinions have trade-offs and come from experiences that may have blinded me to better alternatives.
"First, your refactoring was not part of our negotiations nor our agreement so I must do nothing. And secondly, you must be a lead developer for changes to the developer's code to apply and you're not. And thirdly, the code is more what you'd call 'rules' than actual guidelines. Welcome aboard Corporate Development, Miss Turner" -- Barbossa, Project Lead, 2003.
We will be covering a few topics, each building into the next:
- The prerequisites
- Code volume
- Control flow
- State management
- The principles
- Composition over inheritance
- Parse, don't validate
- Never throw errors
- Metadata
- Define your source of truth
- Let controllers tell you everything
- Don't emulate network infrastructure
- Don't let AI take the driver's seat
- Generate as much code as possible
- Write to refactor programmatically
- Don't go overkill on abstraction layers
I won't be covering teamwork and only briefly touching upon inter-service communication at points.
The prerequisites
This section will briefly cover three elements of the development process that come from an older Enterprise Architecture Patterns course that I took back in 2021, but have found to be of constant importance.
These cornerstones have a direct correlation to complexity and brain overload. They are:
- Code volume.
- Control flow.
- State management.
Any mishandling of these topics will lead development towards "developer purgatory" and, as a bonus side-effect, make future devs die inside.
Note that there won't be many code examples for this topic.
Code volume
Code volume relates to the amount of code that you have to manage. In layperson spiel: fewer lines of code to manage is better.
I always find this concept to be at odds with Clean Code and Gang of Four (GoF) design patterns. My reason for this is that Clean Code and GoF patterns become the scapegoat to justify over-bloat and unnecessary abstractions, or premature abstractions that don't keep the necessary separation of concerns for the domain.
Over-bloat and unnecessary abstractions may be easier to imagine. If you're working in a codebase where you need to jump to ten different definitions in order to understand the inheritance chain or follow the path of the business logic, then you have probably over-engineered the shit out of it. Principles like "composition over inheritance" and "parse, don't validate" can help mitigate volume creep (which I touch on in their own sections), but there are some general guiding principles that I recommend to get around this:
- Service layers that handle the business logic should be as "shallow" as possible. What I mean by this is that service layers should not compose other service layers, and layers that are required here should themselves aim to stay as shallow as possible.
- Functional core, imperative shell. The idea here is that the "deepest" layers invoked should be as pure as possible. Side-effects like logging should be the responsibility of the imperative shell (in our case, we aim for that to be the service layer, or injected at the service layer). This also becomes a godsend for testing, since pure functions have a controlled domain and range.
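A minimal sketch of the split (the `OrderService`, `logger`, and `saveOrder` names here are hypothetical stand-ins, not from any particular codebase):

```typescript
// Functional core: pure, deterministic, easy to test exhaustively.
function applyDiscount(totalCents: number, discountPercent: number): number {
  return Math.round(totalCents * (1 - discountPercent / 100));
}

// Imperative shell: owns side-effects like logging and persistence.
// Dependencies are injected, keeping the service layer shallow.
class OrderService {
  constructor(
    private logger: { info: (msg: string) => void },
    private saveOrder: (totalCents: number) => Promise<void>
  ) {}

  async checkout(totalCents: number, discountPercent: number): Promise<number> {
    const total = applyDiscount(totalCents, discountPercent);
    this.logger.info(`Checkout total: ${total}`);
    await this.saveOrder(total);
    return total;
  }
}
```

Tests for `applyDiscount` need no mocks at all; only the thin shell needs injected fakes.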
As for the premature abstractions, this one might be harder to understand at first. Dan Abramov's "Goodbye Clean Code" does a good job of outlining the core idea that combining similar interfaces to reduce the code volume is not always the best idea.
But what gives? Isn't the idea that less code is better? You need to understand when contradictions to this guideline are necessary. I'll touch later on approaches to mitigate this problem with code generation.
The important thing to remember here is that the code volume cornerstone applies to code that you have to manage as a developer, not the size of the codebase itself.
Control flow
Control flow speaks to the branches in logic. Choices in how you apply control flow, how much of it there is, and where logic "branches" can make or break your day.
We'll be touching on this topic with the following sections:
- Let controllers tell you everything.
- Don't go overkill on abstraction layers.
- Never throw errors.
- Metadata.
State management
State management refers to how applications track and maintain data that changes over time. This "state" includes things like user inputs, API responses, UI configurations, and application settings.
Mismanagement here can create some deeply-knotted problems:
- Unpredictable Behavior: When state can be modified from multiple places without clear patterns, applications become unpredictable. Developers can't easily reason about what will happen when code executes.
- Debugging Nightmares: Without clear state flows, finding the root cause of bugs becomes extremely difficult. A bug might manifest in one component but originate from state modifications elsewhere.
- Technical Debt Accumulation: Poor state management compounds over time through things like duplicated state, stale state and side-effects.
- Readability and Maintainability Issues: New developers struggle to understand applications.
- Performance Problems: unnecessary re-renders, memory leaks, redundant network requests.
Although this post won't spend too much time on state management, it is also partly related to these topics:
- Parse, don't validate.
- Never throw errors.
- Metadata.
- Define your source of truth.
The principles
With the foundational prerequisites out of the way, the rest of the post will walk through some example scenarios. You should keep those three cornerstones in mind while reading through each guideline.
Composition over inheritance
Composition over inheritance is a design principle that suggests you should prefer building complex functionality by combining simpler objects (composition) rather than through class inheritance hierarchies.
Some of the benefits:
- Easier to change behavior at runtime
- Looser coupling between components
- Simpler to test isolated components
Don't:
```typescript
abstract class Pizza {
  prepare() {
    console.log("Preparing something hidden away from concrete classes");
  }

  // This method needs to be implemented by subclasses
  abstract addToppings(): void;
}

// Concrete subclasses
class PineapplePizza extends Pizza {
  addToppings(): void {
    console.log("Adding pineapple chunks");
  }
}

const p = new PineapplePizza();
p.prepare();
p.addToppings();
```
Do:
```typescript
interface Pizza {
  prepare(): void;
  addToppings(): void;
}

class PineapplePizza implements Pizza {
  prepare() {
    console.log("Preparing pineapple pizza");
  }

  addToppings(): void {
    console.log("Adding pineapple chunks");
  }
}

const p = new PineapplePizza();
p.prepare();
p.addToppings();
```
One thing about this approach: use composition over inheritance for enforcing behaviors.
When it comes to properties on the class, don't rely on implementing an interface to enforce those properties. Implementing interfaces comes with the caveat that you cannot implement non-public properties. In those scenarios, use dependency injection.
Don't:
```typescript
interface Pizza {
  id: string;
  name: string;
  prepare(): void;
  addToppings(): void;
}

class PineapplePizza implements Pizza {
  // Can't be private
  public id: string;
  public name: string;

  constructor(id: string, name: string) {
    this.id = id;
    this.name = name;
  }

  prepare() {
    console.log("Preparing pineapple pizza");
  }

  addToppings(): void {
    console.log("Adding pineapple chunks");
  }
}

const p = new PineapplePizza("1", "pineapple");
```
Do:
```typescript
interface Pizza {
  prepare(): void;
  addToppings(): void;
}

interface PizzaProps {
  id: string;
  name: string;
}

class PineapplePizza implements Pizza {
  private id: string;
  private name: string;

  constructor(props: PizzaProps) {
    this.id = props.id;
    this.name = props.name;
  }

  prepare() {
    console.log("Preparing pineapple pizza");
  }

  addToppings(): void {
    console.log("Adding pineapple chunks");
  }
}

const p = new PineapplePizza({ id: "1", name: "pineapple" });
```
As a natural extension of this when it comes to interfaces, avoid extending interfaces with other interfaces.
Parse, don't validate
Popularized by the article of the same name by Alexis King.
The core idea behind "parse, don't validate" is to transform untyped or less-typed data into well-typed data early in your program flow, rather than checking validity throughout your code. With this approach:
- You parse input data once, at the boundary of your system.
- This parsing step both validates and transforms the data.
- After parsing, you work with fully-typed, validated data throughout the rest of your code.
**Boundary** refers to both entry points and exit points for the system. It's at the "edge" of how data enters the system (think of things like from requests to your server and responses from your server's requests).
You rely on the parser to turn unknown data into known data, and in regards to TypeScript, that known type then improves our confidence for the rest of the code, as we can hand the remaining heavy lifting to static analysis with the type checker.
Keeping the validation at the boundary also improves developer experience by ensuring that validation logic isn't littered throughout our control flow, which helps to keep our control flow cornerstone healthy.
Don't:
```typescript
interface Request {
  body: unknown;
}

class Service {
  action(body: unknown) {
    // First check if body is a non-null object
    if (body !== null && typeof body === "object") {
      // Now TypeScript knows body is an object, so we can use the 'in' operator
      if ("name" in body) {
        // Do something with body.name
        // Note: At this point, TypeScript knows body has a 'name' property,
        // but doesn't know its type. If needed, we could add further type narrowing:
        const typedBody = body as { name: string };
        // Now we can use typedBody.name as a string
      }
    }
  }
}

class Controller {
  private service: Service;

  constructor(service: Service) {
    this.service = service;
  }

  get(req: Request) {
    this.service.action(req.body);
    // ... omitted
  }
}
```
Here, validation happens away from the boundary and we are passing unknown data types around.
Do:
```typescript
import { z } from "zod";

interface Request {
  body: unknown;
}

// Define the schema for our request body
const UserSchema = z.object({
  name: z.string(),
  email: z.string().email(),
  age: z.number().int().positive().optional(),
});

// Derive TypeScript type from the schema
type User = z.infer<typeof UserSchema>;

class Service {
  action(user: User) {
    // No need to validate
  }
}

class Controller {
  private service: Service;

  constructor(service: Service) {
    this.service = service;
  }

  get(req: Request) {
    // Parse and validate at the boundary
    const userResult = UserSchema.safeParse(req.body);

    if (!userResult.success) {
      // Handle error case
      return;
    }

    // Pass the parsed and typed data to the service
    this.service.action(userResult.data);
    // ... omitted
  }
}
```
In the example above, we are using Zod to demonstrate an example of parsing into a well-known type.
We also could have done this parsing in the `Service` class for the **don't** example, but that would still have violated this guideline by happening away from the boundary.
Never throw errors
Errors fall into two categories: expected and unexpected.
In the scenario of an expected error, we want to take control of how we respond to that error and keep our system fault tolerant. Where a downstream server may be unavailable, a particular response like a 504 Gateway Timeout may knowingly be handled with retries, while for 400 errors we knowingly respond to our own request straight away with actionable details instead of retrying. All of these scenarios fall under the domain that I consider expected errors.
In contrast, errors that are unexpected as far as the application is concerned fall into the unexpected category, also known as defects. In these cases, an exception is generally raised and managed outside of the scope of the controller (for example, by catch-all middleware).
To elaborate a bit more, "unexpected as far as the application is concerned" does imply that it's a possible error scenario that you may expect to happen, but haven't implemented anything to handle it and don't mind that it's caught and tracked by a catch-all implementation. As for the "catch-all" middleware example, I will elaborate more on that in the "Let controllers tell you everything" section.
For more on this, I find it best summarized by [the EffectTS article "Two Types of Errors"](https://effect.website/docs/error-management/two-error-types).
Below is an example of this in action with the neverthrow library.
Don't:
```typescript
class Data {
  readonly _tag = "Data";
  private myData: string;

  constructor(myData: string) {
    this.myData = myData;
  }
}

class Service {
  create() {
    // Stand-in
    const isErr = false;

    if (isErr) {
      throw new Error("ErrorOne");
    }

    if (isErr) {
      throw new Error("ErrorTwo");
    }

    // Stand-in for applying actual work
    return new Data("Yay");
  }
}

class Controller {
  private service: Service;

  constructor(service: Service) {
    this.service = service;
  }

  create() {
    try {
      const createResult = this.service.create();

      return {
        status: 200,
        message: "Success",
      };
    } catch (err) {
      // Some catch error handling
      // ... omitted

      // If we can't match the expected cases here, throw to the catch-all handler.
      throw err;
    }
  }
}
```
In the above case, we are throwing errors as stand-ins for what could be handled as expected errors. A non-exhaustive list of problems with this:
- If we want to catch and run business logic on this, we need to do so at the controller. In cases that don't match, if we don't apply our observability tools here, then we need to re-throw. This creates some havoc on our control flow cornerstone and may also bloat our error logs and tools with known errors that we didn't handle because static analysis couldn't help us.
- Thrown errors aren't visible in the types. If we use IntelliSense on the `Service.create` method, all we see is that the return type is `Data`. A developer cannot grok from our types what can go wrong in an expected way.
In my experience, this approach also doesn't really hold up in practice. Not all thrown errors are caught and managed correctly, so you end up with hard-to-follow try-catch behavior littered throughout the implementation.
Do:
```typescript
import { ok, err } from "neverthrow";

class ExpectedErrorOne {
  readonly _tag = "ExpectedErrorOne";
}

class ExpectedErrorTwo {
  readonly _tag = "ExpectedErrorTwo";
}

class Data {
  readonly _tag = "Data";
  private myData: string;

  constructor(myData: string) {
    this.myData = myData;
  }
}

class Service {
  create() {
    // Stand-in
    const isErr = false;

    if (isErr) {
      return err(new ExpectedErrorOne());
    }

    if (isErr) {
      return err(new ExpectedErrorTwo());
    }

    // Stand-in for applying actual work
    return ok(new Data("Yay"));
  }
}

class Controller {
  private service: Service;

  constructor(service: Service) {
    this.service = service;
  }

  create() {
    const createResult = this.service.create();

    return createResult.match(
      // Success case
      (data) => ({
        status: 200,
        message: "Success",
      }),
      // Error case - pattern match on _tag
      (error) => {
        switch (error._tag) {
          case "ExpectedErrorOne":
            return {
              status: 400,
              message: "Bad request",
            };
          case "ExpectedErrorTwo":
            return {
              status: 422,
              message: "Unprocessable entity",
            };
          default:
            // Optional exhaustiveness check - will error if we add a new
            // response type and forget to handle it.
            const _exhaustiveCheck: never = error;
        }
      }
    );
  }
}
```
In the above, I've made use of the neverthrow library to demonstrate an approach of returning known errors.
Some of the benefits of this:
- The return type is fully typed, so we can see exactly what can go wrong: `const createResult: Err<never, ExpectedErrorOne> | Err<never, ExpectedErrorTwo> | Ok<Data, never>`.
- There are no try-catch clauses. In the case where an error is thrown from something unexpected, we consider this a defect and should have systems in place to capture that error and inform the developers (not shown here).
- Our controller can have an easier time managing responses at the boundary, while our developers working on this can learn a lot about this endpoint and possible responses without diving into the business logic.
If you look at the `Data` and expected error classes, you'll notice the `_tag` property (which I've adopted from EffectTS). I'll talk more about this in the Metadata section.
I should finish here by saying that "never" here is a bit strong. I've recently heard an engineering manager use the quote "use exceptions for exceptional circumstances", and I find that to be a useful quote around throwing errors in TypeScript. Do so sparingly and with good reason.
Metadata
There should be an easy way to delineate data which is important for internal affairs.
As mentioned in the previous section, I use the underscore prefix `_` as an easy way to delineate properties that are important for internal logic, or that represent data that only has importance to internal debugging and observability tools.
I opted for this style of delineation after spending some time working with EffectTS, but you certainly do not have to follow my conventions. The important part is that there are conventions and that they are not easily confused with other data.
Don't:
```typescript
class InvalidDatabaseObject {
  readonly tag = "InvalidDatabaseObject";
  readonly errorId = 123;
  readonly internalMessage = "To resolve, check a...b...c...";
  readonly statusCode = 422;
  readonly message = "We are unable to process your request at this time";
}

class Controller {
  create() {
    // Assume this created object was the response
    const result = new InvalidDatabaseObject();

    if (result.tag === "InvalidDatabaseObject") {
      // Implement any observability we need based on error properties
      logger.error("Failed to do something", {
        tag: result.tag,
        errorId: result.errorId,
        internalMessage: result.internalMessage,
      });
    }

    return {
      tag: result.tag,
      status: result.statusCode,
      message: result.message,
    };
  }
}
```
In the above, it's hard for future developers to understand which properties are used for internal affairs.
Do:
```typescript
class InvalidDatabaseObject {
  readonly _tag = "InvalidDatabaseObject";
  readonly _errorId = 123;
  readonly _internalMessage = "To resolve, check a...b...c...";
  readonly statusCode = 422;
  readonly message = "We are unable to process your request at this time";
}

class Controller {
  create() {
    // Assume this created object was the response
    const result = new InvalidDatabaseObject();

    if (result._tag === "InvalidDatabaseObject") {
      // Implement any observability we need based on error properties
      logger.error("Failed to do something", {
        _tag: result._tag,
        _errorId: result._errorId,
        _internalMessage: result._internalMessage,
      });
    }

    return {
      _tag: result._tag,
      status: result.statusCode,
      message: result.message,
    };
  }
}
```
In the above code, we can now visually delineate what is useful for internal development from what belongs to things like response objects. Note that we also return metadata with the response via `_tag`. This enables other systems to make use of the same unified control flow. For client applications, it enables customizing behavior and messages beyond simple status codes.
I normally take this approach in conjunction with the control flow helpers we've explored for the two types of errors, but let me reiterate that the important part here is having consistent delineation defined within the engineering cohort. It may be more appropriate to separate responses and objects into something like `metadata` and `data` properties on the object (similar to how AWS does this), or something else completely different.
Please note that if you nail this stuff down, it also becomes a miracle for microservices. Whenever you write decoupled, asynchronously processed code that has an internal, well-defined rule set, you'll find that handling these responses across services, clients and communication lines like websockets becomes much nicer to reason about and manage.
Define your source of truth
If your types come from remote sources, don't hand-define TypeScript types and interfaces and set those as your source of truth. They can and will fall out of sync.
Don't:
```typescript
interface User {
  name: string;
}

// Assume getUserFromRemote is defined elsewhere
async function getUser(): Promise<User> {
  const response: { data: unknown } = await getUserFromRemote();
  return response.data as unknown as User;
}
```
Do:
```typescript
import { z } from "zod";

const UserSchema = z.object({
  name: z.string(),
});

type User = z.infer<typeof UserSchema>;

// Assume getUserFromRemote is defined elsewhere
async function getUser(): Promise<User> {
  const response: { data: unknown } = await getUserFromRemote();
  const userResult = UserSchema.parse(response.data);
  return userResult;
}
```
Secondly, don't mix up representations for different concerns. For example, if you are writing schemas to represent query params, have one for the stringified version and another to represent the coercion. This is one of those times where code volume will increase, but it also opens you up to generating that code so you don't have to manage it in the first place.
Don't:
```typescript
const getUsersQueryParams = z
  .object({
    name: z.string(),
    age: z.coerce.number(),
  })
  .partial();
```
Do:
```typescript
const getUserQueryParams = z
  .object({
    name: z.string(),
    age: z.string(),
  })
  .partial();

const getUserCoercedQueryParams = z
  .object({
    name: z.string(),
    age: z.coerce.number(),
  })
  .partial();

// In use
const result = getUserQueryParams
  .pipe(getUserCoercedQueryParams)
  .parse({ name: "Daniel", age: "22" });
```
This certainly violates the code volume principle, but it is surprising how often I've required these values to have separation for different purposes. This do/don't falls under the category of "code generation": well-defined schemas will make generating this separation very easy.
Let controllers tell you everything
I've already touched on this principle with "never throw errors", but I will just reiterate the guidelines for this one:
- Controllers should control all the request received/response flows.
- As a developer, from the controller code, you should be able to see and understand route-specific middleware applied, expected incoming data (params, body, headers) and all possible responses (with the possible exception of defects, if you handle those at a catch-all middleware handler).
Don't:
```typescript
class Data {
  readonly _tag = "Data";
  private myData: string;

  constructor(myData: string) {
    this.myData = myData;
  }
}

class Service {
  create() {
    // Stand-in
    const isErr = false;

    if (isErr) {
      throw new Error("ErrorOne");
    }

    if (isErr) {
      throw new Error("ErrorTwo");
    }

    // Stand-in for applying actual work
    return new Data("Yay");
  }
}

class Controller {
  private service: Service;

  constructor(service: Service) {
    this.service = service;
  }

  create() {
    const createResult = this.service.create();

    return {
      status: 200,
      message: "Success",
    };
    // NO ERROR HANDLING, all sent to the catch-all. BAD.
  }
}
```
Do:
```typescript
import { ok, err } from "neverthrow";

class ExpectedErrorOne {
  readonly _tag = "ExpectedErrorOne";
}

class ExpectedErrorTwo {
  readonly _tag = "ExpectedErrorTwo";
}

class Data {
  readonly _tag = "Data";
  private myData: string;

  constructor(myData: string) {
    this.myData = myData;
  }
}

class Service {
  create() {
    // Stand-in
    const isErr = false;

    if (isErr) {
      return err(new ExpectedErrorOne());
    }

    if (isErr) {
      return err(new ExpectedErrorTwo());
    }

    // Stand-in for applying actual work
    return ok(new Data("Yay"));
  }
}

class Controller {
  private service: Service;

  constructor(service: Service) {
    this.service = service;
  }

  create() {
    const createResult = this.service.create();

    return createResult.match(
      // Success case
      (data) => ({
        status: 200,
        message: "Success",
      }),
      // Error case - pattern match on _tag
      (error) => {
        switch (error._tag) {
          case "ExpectedErrorOne":
            return {
              status: 400,
              message: "Bad request",
            };
          case "ExpectedErrorTwo":
            return {
              status: 422,
              message: "Unprocessable entity",
            };
          default:
            // Optional exhaustiveness check - will error if we add a new
            // response type and forget to handle it.
            const _exhaustiveCheck: never = error;
        }
      }
    );
  }
}
```
I actually haven't included the example to properly demonstrate this. It's not inclusive of handling incoming request information. I will update this when I get the chance with something from one of my actual projects.
In regards to the catch-all middleware handler, do not use that as a point to add control flow for different error handling responses outside of the controller.
Don't:
```typescript
import { Hono } from "hono";
import { HTTPException } from "hono/http-exception";

const app = new Hono();

// Error handling middleware
app.use("*", async (c, next) => {
  try {
    await next();
  } catch (error: any) {
    console.error("Error caught:", error);

    // Handle Hono's built-in HTTPException
    if (error instanceof HTTPException) {
      return error.getResponse();
    }

    // Custom error with status code
    if ("status" in error && typeof error.status === "number") {
      return c.json(
        {
          message: error.message || "An error occurred",
          code: error.code || "UNKNOWN_ERROR",
        },
        error.status
      );
    }

    // Authentication errors
    if (
      error.name === "AuthenticationError" ||
      error.code === "UNAUTHENTICATED"
    ) {
      return c.json(
        {
          message: "Authentication failed",
          details: error.message,
        },
        401
      );
    }

    // Validation errors with details
    if (error.name === "ValidationError" || "validationErrors" in error) {
      return c.json(
        {
          message: "Validation failed",
          errors: error.validationErrors || error.details || [error.message],
        },
        400
      );
    }

    // Database errors
    if ("code" in error && error.code?.startsWith("DB_")) {
      // Log database errors but don't expose details to client
      console.error("Database error:", error);
      return c.json({ message: "Database operation failed" }, 500);
    }

    // Default case - generic 500 error
    return c.json({ message: "Internal server error" }, 500);
  }
});

export default app;
```
Do:
```typescript
import { Hono } from "hono";

const app = new Hono();

// Error handling middleware
app.use("*", async (c, next) => {
  try {
    await next();
  } catch (error) {
    // Handle observability here. Log errors, capture with third-party tools
    // like Sentry, etc.

    // Default case - generic 500 error
    return c.json({ message: "Internal server error" }, 500);
  }
});

export default app;
```
In this example, our catch-all is purely for handling defects and recovering with valid responses to the user. Use the catch-all with no control flow and only as a mechanism to notify the developers.
Don't let AI take the driver's seat
Don't rely on AI to give you the best developer experience. While these tools are getting better, without guidelines they have overcomplicated and bloated answers to queries that I've made in the past.
AI is great to talk shop with and pass some ideas by, but don't let it take control of your codebase (at least not as it is at the time of writing). For any suggestions that you adopt, scrutinize them and do your own investigations and spikes.
Generate as much code as possible
Such an underrated time-saver. If there are explicit, repetitive patterns that you can identify, spend the time to write a script to run the generation. Most of the time, I end up git-ignoring these files and generating them for local development and as part of the CI pipeline.
An example of this normally comes with generating SDKs for codebases with OpenAPI specs. Thanks to the spec definition, you can reliably write code to parse this information and generate whatever you need.
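As a hand-wavy sketch of the idea (the schema shape here is a simplified, hypothetical stand-in for a real OpenAPI document, which is far richer):

```typescript
// Minimal sketch: emit a TypeScript interface from a simplified schema.
// A real generator would read the spec from disk and handle nesting,
// optionality, refs, etc.
type SchemaProp = { type: "string" | "number" | "boolean" };
type Schema = { name: string; properties: Record<string, SchemaProp> };

function generateInterface(schema: Schema): string {
  const fields = Object.entries(schema.properties)
    .map(([key, prop]) => `  ${key}: ${prop.type};`)
    .join("\n");
  return `interface ${schema.name} {\n${fields}\n}`;
}

const userSchema: Schema = {
  name: "User",
  properties: { name: { type: "string" }, age: { type: "number" } },
};

// Typically written to a git-ignored file during local dev and in CI.
console.log(generateInterface(userSchema));
```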
This is also a place where AI with guardrails can be super useful, but do take care. I wouldn't make use of AI generation as part of a pipeline, so I would be committing that code (and likely not running the AI script much).
I also like to use template libraries here personally. They act as both a way to easily generate code and a guideline for anyone joining the codebase. Whenever I change repetitive standards, it's normally the templates that I am confirming my ideas against.
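To make the template idea concrete, here is a tiny, hypothetical helper (the `controllerTemplate` name and skeleton are illustrative only; a real setup would use a templating library and template files):

```typescript
// Given a resource name, emit a controller skeleton that encodes
// the team's conventions. Doubles as documentation of those conventions.
function controllerTemplate(resource: string): string {
  const className = resource[0].toUpperCase() + resource.slice(1);
  return [
    `class ${className}Controller {`,
    `  // TODO: inject ${className}Service via the constructor`,
    `  create() {}`,
    `  get() {}`,
    `}`,
  ].join("\n");
}

console.log(controllerTemplate("user"));
```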
Write to refactor programmatically
The more you contain the complexity in your managed code, the easier it is to write codemods for when standards change.
Codemods enable you to programmatically change the code within your files. Personally, I find ts-morph the most accessible option, but there are plenty of great AST parsers and codemod tools out there.
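As a tiny illustration, here is a codemod sketch using the raw TypeScript compiler API (which ts-morph wraps with a friendlier interface). The `renameIdentifier` helper is hypothetical and naively renames every matching identifier; a real codemod would respect scope:

```typescript
import * as ts from "typescript";

// Parse source text, rename every identifier `oldName` to `newName`,
// and print the transformed file back out.
function renameIdentifier(source: string, oldName: string, newName: string): string {
  const sourceFile = ts.createSourceFile("file.ts", source, ts.ScriptTarget.Latest, true);

  const transformer: ts.TransformerFactory<ts.SourceFile> = (context) => {
    const visit = (node: ts.Node): ts.Node => {
      if (ts.isIdentifier(node) && node.text === oldName) {
        return ts.factory.createIdentifier(newName);
      }
      return ts.visitEachChild(node, visit, context);
    };
    return (node) => ts.visitNode(node, visit) as ts.SourceFile;
  };

  const result = ts.transform(sourceFile, [transformer]);
  const printer = ts.createPrinter();
  return printer.printFile(result.transformed[0]);
}
```

The flatter and more consistent your code is, the simpler visitors like `visit` stay.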
As a bonus, if you like "code-as-documentation" in regards to generating diagrams and scripts, then these same tools can be your friend. Just a fair warning that the more complicated your code is to programmatically traverse, the less these things become an option.
Don't go overkill on abstraction layers
"All problems in computer science can be solved by another level of indirection" -- The "fundamental theorem of software engineering".
Some guidelines:
- Sparingly use abstract classes and inheritance.
- Sparingly extend interfaces. The same rules can apply here for composition over inheritance.
- When using generics, do whatever it takes to have generics infer the types instead of supplying them where possible.
- Invert control back to the invocation layer where possible.
- Use dependency injection.
- Keep your internal codebase "shallow".
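To illustrate the generics point, a minimal sketch of letting inference do the work:

```typescript
// Prefer inference: the compiler derives T from the argument.
function first<T>(items: T[]): T | undefined {
  return items[0];
}

const a = first([1, 2, 3]); // inferred as number | undefined
const b = first(["x", "y"]); // inferred as string | undefined

// Works, but adds noise and can drift from the actual data:
const c = first<number>([1, 2, 3]);
```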
For the example around interfaces, we can demonstrate what I mean with this very contrived useless code:
Don't:
```typescript
interface Retryable {
  redrive: () => void;
}

interface Queue extends Retryable {
  process: () => Promise<void>;
}

class DataQueue implements Queue {
  private dlq: string[] = [];
  private q: string[] = [];

  redrive() {
    this.q = [...this.q, ...this.dlq];
    this.dlq = [];
  }

  async process() {
    await Promise.all(this.q.map((el) => console.log(el)));
  }

  // Rest omitted
}
```
In this example, we have `Queue` extending `Retryable`, instead of composing them together.
Do:
```typescript
interface Retryable {
  redrive: () => void;
}

interface Queue {
  process: () => Promise<void>;
}

class DataQueue implements Retryable, Queue {
  private dlq: string[] = [];
  private q: string[] = [];

  redrive() {
    this.q = [...this.q, ...this.dlq];
    this.dlq = [];
  }

  async process() {
    await Promise.all(this.q.map((el) => console.log(el)));
  }

  // Rest omitted
}
```
In the above, there is a separation of concerns for the interfaces, and understanding the definition of `redrive` doesn't require us to jump through an inheritance chain.
"All problems in computer science can be solved by another level of indirection. Except for the problem of indirection."
Conclusion
This wraps up my opinionated list of things, at least for this first iteration. I've been sitting here typing away for close to four hours now, so the quality may be hitting diminishing returns.
I want to finish with a couple of "don't forgets":
- These are guidelines. They serve me well currently, but may be out-of-date by the end of the year. If you don't agree or have a strong opinion on any of them, I would welcome your critiques.
- Even the best developers have moments where they commit cardinal sins with their code. It's a team effort for this to work, so ensure that your design patterns and ideas are reviewed by your team members, as they will always have better perspective on your code than you do.
- This is purely about my more recent experiences with TypeScript codebases. What I am speaking to does not necessarily apply to other languages, whose trade-offs may actively encourage contradicting these guidelines.
If I get another moment where I feel like furiously typing away on the keyboard, I would like to talk about team processes, microservices and/or release management.
Photo credit: martinsanchez