TypeScript Meta-programming and Proxies
We all want our code to read like a sentence. There is a certain beauty in an API that just flows, where the intent is obvious and the boilerplate is non-existent.
In languages like Rust, we have powerful tools to achieve this. We have Macros. Macros allow us to write code that writes other code at compile time. It's meta-programming in its truest sense. You can define a derive macro and suddenly your struct has JSON serialization, database mapping, and a CLI interface, all without you writing a single extra line of logic.
// In Rust, this one line generates a ton of code for us
#[derive(Debug, Serialize, Deserialize)]
struct User {
    id: u32,
    name: String,
}
But we aren't in Rust. We are in TypeScript. And TypeScript, for all its structural typing glory, does not have macros. We can't hook into the compilation process to generate code on the fly (well, not easily). So when we want to build complex abstractions, we often find ourselves stuck between two bad options: writing verbose boilerplate or setting up fragile code-generation scripts.
But JavaScript gives us something else. It gives us a different kind of superpower. It gives us Proxies.
Enter the Proxy
If you haven't used a Proxy before, it's exactly what it sounds like. It's a wrapper around an object that intercepts operations. You can intercept a property read, a function call, an assignment—pretty much anything.
It allows for what I like to call "Runtime Meta-programming." Instead of generating code at compile time, we can define behavior for properties that don't even exist yet. We can make an object that replies "Yes!" to any question you ask it, or an array that logs every time you try to touch its elements.
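Here is a minimal sketch of that "always yes" object, just to show the shape of the API (the example is mine, not from Werkbank):

const yesMachine = new Proxy({}, {
  // The 'get' trap fires for every property read, even for properties that don't exist
  get(_target, prop) {
    return `Yes, "${String(prop)}" sounds great!`;
  },
});

console.log((yesMachine as any).coffee);      // Yes, "coffee" sounds great!
console.log((yesMachine as any).refactoring); // Yes, "refactoring" sounds great!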
Now, I can hear the performance purists sharpening their pitchforks. "Proxies are slow! Runtime interception is expensive!" And you're right. Compared to a raw property access, a Proxy is slower. But we have to ask: what are we optimizing for? In many cases, especially when dealing with I/O, network requests, or IPC (Inter-Process Communication), the bottleneck is not the property access. It's the operation itself. The microsecond cost of a Proxy is negligible compared to the millisecond latency of a network round-trip.
For the developer experience (DX) gains? It's often a trade worth making.
The Problem: A Magic RPC Client
I ran into this while building the RPC system for Werkbank. I wanted to communicate between the main thread and a Web Worker. Usually, this involves a lot of postMessage calls and event listeners. It's messy, it's verbose, and it breaks the flow we desire. Most importantly, it's not type-safe. I wanted an API that felt local. I wanted to be able to write this:
// I want this to call the worker, wait for the result, and return it.
const user = await client.users.getUser(123);
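For contrast, here is roughly what the manual version of that one call looks like. This is a sketch; the message shape and the getUser helper are hypothetical, not Werkbank's actual protocol:

function getUser(worker: Worker, id: number): Promise<User> {
  return new Promise((resolve, reject) => {
    const requestId = crypto.randomUUID();

    const onMessage = (event: MessageEvent) => {
      if (event.data?.id !== requestId) return; // Not our reply
      worker.removeEventListener("message", onMessage);
      if (event.data.error) reject(event.data.error);
      else resolve(event.data.result);
    };

    worker.addEventListener("message", onMessage);
    worker.postMessage({ id: requestId, path: ["users", "getUser"], args: [id] });
  });
}

Now multiply that by every function the worker exposes.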
But I didn't want to manually write a getUser method on the client for every single function in my worker. That's the boilerplate trap. And I didn't want to run a script to generate a client file every time I changed the worker code. That's the code-gen trap.
I wanted it to just work. I wanted the client to magically know that client.users.getUser maps to a specific function in the worker, without me ever explicitly defining it.
The Solution: Recursive Proxies
This is where the Proxy shines. We can create a "Recursive Proxy"—a proxy that, when you access a property on it, returns another proxy.
Here is the core of the implementation from werkbank/src/rpc/client/proxy.ts. I've simplified it slightly for clarity, but the recursive structure is the real deal.
Note: I'm using RxJS here for handling the event streams because it makes filtering, mapping, timeouts, and cancellation easier.
export function createRpcProxy<Config>(
  postMessage: PostMessage,
  incoming$: Observable<Event<unknown>>,
): RpcClient<Config> {
  // We keep a cache of proxies to avoid creating new ones for the same path repeatedly.
  // This helps with memory and ensures object identity stability for the same path.
  let proxies = new Map<string, any>();

  function createProxyHandler(scope: Array<string> = []) {
    let handler: ProxyHandler<any> = {
      get(_target, prop) {
        let key = prop.toString();

        // Important: Don't proxy 'then' if you don't want the proxy itself to be awaitable.
        // In our case, only the function call returns a Promise.
        if (key === 'then') return undefined;

        let proxyPath = [...scope, key].join(".");

        // Return the cached proxy if it exists
        let currentProxy = proxies.get(proxyPath);
        if (currentProxy) {
          return currentProxy;
        }

        // This is the function that will eventually be called,
        // e.g. client.users.getUser(123)
        let fn: ProxyFn = (...rawArgs: Array<unknown>) => {
          let id = crypto.randomUUID(); // Use standard UUID generation

          // We automatically scan arguments for Transferable objects (like ArrayBuffers)
          // to ensure zero-copy transfer to the worker. This traverses the args recursively.
          let { args, transfer } = extractTransferables(rawArgs);

          // Send the request to the worker
          postMessage(
            REQUEST({
              id,
              args,
              path: [...scope, key], // e.g. ['users', 'getUser']
            }),
            transfer,
          );

          // ... logic to wait for the response (we'll get to this) ...
        };

        // Return a new Proxy for the next level of the chain.
        // We wrap 'fn' so that if the user calls it, our 'fn' executes.
        // If they access a property on it, the 'get' trap fires again.
        let proxy = new Proxy(fn, createProxyHandler([...scope, key]));
        proxies.set(proxyPath, proxy);
        return proxy;
      },
    };

    return handler;
  }

  // We cast the result to RpcClient<Config> to make TypeScript happy.
  // The runtime object is just a Proxy, but we promise the compiler it behaves like Config.
  return new Proxy({}, createProxyHandler([])) as RpcClient<Config>;
}
Let's break down what happens when we call client.users.getUser(123):
1. client.users: The first proxy intercepts the get for "users". It doesn't find it on the target object (which is empty), so it returns a new Proxy, remembering the path ['users'].
2. .getUser: That new proxy intercepts the get for "getUser". It returns another new Proxy, now with the path ['users', 'getUser'].
3. (123): Finally, we call the function. The proxy wraps our fn, so we can intercept the call. Inside that function, we take the accumulated path (['users', 'getUser']) and the arguments ([123]), and we fire off a postMessage.
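A nice side effect of the proxy cache: looking up the same path twice hands you back the exact same object, so identity checks stay predictable. A small illustration, assuming the client from above:

// Each path maps to one cached proxy, so repeated lookups are stable
console.log(client.users === client.users);                 // true
console.log(client.users.getUser === client.users.getUser); // true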
Bridging the Gap: From postMessage to Promise
The trickiest part of any RPC system is mapping the asynchronous response back to the original request. postMessage is fire-and-forget. It returns void. But our client function needs to return a Promise.
We solve this with a unique ID and RxJS streams. One of the huge benefits of using RxJS here is how easily we can add robustness, like timeouts and cancellation.
// Inside the 'fn' above...
let response$ = incoming$.pipe(
  // Only listen for a REPLY matching our specific request ID
  filter((e) => REPLY.match(e) && e.payload.id === id),
  take(1), // We only expect one response
  // Add a timeout! If the worker doesn't reply in 5s, fail.
  timeout(5000),
  map((e) => {
    if (e.payload.reject) {
      throw e.payload.reject;
    }
    return e.payload.resolve;
  }),
  // If the consumer unsubscribes (e.g. React useEffect cleanup),
  // we tell the worker to cancel the task.
  finalize(() => {
    postMessage(UNSUBSCRIBE({ id }), []);
  }),
);

// Convert the Observable to a Promise so the user can 'await' it
return firstValueFrom(response$);
Awaiting that promise suspends the calling function until the worker processes the task and sends back a message with the matching ID, or until the timeout fires. The main thread itself keeps running; only the async function doing the await is paused.
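For completeness, here is a sketch of the other end of the bridge. This is illustrative rather than the actual Werkbank worker code; I'm assuming the REQUEST/REPLY payload shapes used by the client above and a plain api object living inside the worker's global scope:

// Inside the Web Worker
const api = {
  users: {
    getUser(id: number): User {
      return { id, name: "Ada" };
    },
  },
};

self.addEventListener("message", async (event: MessageEvent) => {
  const { id, path, args } = event.data.payload;
  try {
    // Walk ['users', 'getUser'] down the api object to find the target function
    const fn = path.reduce((obj: any, key: string) => obj[key], api);
    const resolve = await fn(...args);
    self.postMessage({ type: "REPLY", payload: { id, resolve } });
  } catch (reject) {
    self.postMessage({ type: "REPLY", payload: { id, reject } });
  }
});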
But is it Type Safe?
I promised type safety. And this is where TypeScript's generics come in. Notice the <Config> generic in createRpcProxy<Config>.
We can define our worker's API as a standard TypeScript interface. Note that on the worker side, these functions might return direct values, but on the client side, we need them to be Promises.
We achieve this with a recursive mapped type. We use Awaited<R> to ensure we don't accidentally create Promise<Promise<T>> if the worker function is already async.
// Recursively traverse the type T.
// If it's a function, wrap the return type in a Promise.
// If it's an object, recurse into it.
type Promisify<T> = {
  [K in keyof T]: T[K] extends (...args: infer A) => infer R
    ? (...args: A) => Promise<Awaited<R>>
    : Promisify<T[K]>;
};

// Define the shape of your worker API
interface WorkerApi {
  users: {
    getUser(id: number): User; // Worker returns User synchronously
    createUser(name: string): User;
  };
}

// The RpcClient type magically wraps everything in Promises
const client = createRpcProxy<WorkerApi>(postMessage, incoming$);

// Now client.users.getUser returns Promise<User>
const user = await client.users.getUser(123);
Now, when you type client., TypeScript knows exactly what properties exist. It knows client.users exists. It knows client.users.getUser takes a number and returns a Promise<User>.
If you try to type client.users.deleteUser(123), TypeScript will yell at you. Even though the runtime proxy would happily accept that call and send a message to the worker (which would then fail), the compiler stops you before you even run the code.
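To make that concrete, both of these fail at compile time, long before any message leaves the main thread:

// @ts-expect-error: 'deleteUser' does not exist on WorkerApi['users']
await client.users.deleteUser(123);

// @ts-expect-error: getUser expects a number, not a string
await client.users.getUser("123");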
Trade-offs
No solution is perfect, and Proxies are no exception. Here is what you are trading:
- Pros:
  - Zero Boilerplate: No manual client definitions.
  - Type Safety: Full TypeScript support with generics.
  - Refactoring Safety: Renaming a method in the worker interface immediately flags errors in the client code.
  - Flexibility: The API can evolve on the worker side without breaking the client (as long as types match).
- Cons:
  - Runtime Overhead: Proxies are slower than direct property access (though usually negligible for RPC).
  - Debugging: console.log(client) just shows a Proxy object, which can be confusing.
  - Browser Support: Proxies cannot be polyfilled. They are supported in all modern browsers, but if you need to support very old environments, this won't work.
Conclusion
We might not have macros in TypeScript, but we have something that fits the dynamic nature of JavaScript perfectly. Proxies allow us to build "Magic" APIs—interfaces that adapt and respond to how we use them, rather than how they were defined. They bridge the gap between the static world of types and the dynamic world of runtime execution. And sometimes, that's even better than a macro.