<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
<channel>
<title>Martin Masevski Articles</title>
<link>https://martinmasevski.dev/</link>
<description>Latest frontend engineering articles by Martin Masevski</description>
<language>en</language>
<lastBuildDate>Mon, 11 May 2026 12:10:58 GMT</lastBuildDate>
<atom:link href="https://martinmasevski.dev/feed.xml" rel="self" type="application/rss+xml" />
<item>
<title>Building a local-first browser TTS studio with Kokoro</title>
<link>https://martinmasevski.dev/blog/building-local-first-browser-tts-studio-kokoro</link>
<guid isPermaLink="true">https://martinmasevski.dev/blog/building-local-first-browser-tts-studio-kokoro</guid>
<pubDate>Sat, 28 Mar 2026 00:00:00 GMT</pubDate>
<author>Martin Masevski</author>
<description>How I built LocalVoice Studio to generate speech in the browser, and what AI-assisted development still needed to make it shippable.</description>
<category>ai</category><category>tts</category><category>local-first</category><category>accessibility</category><category>testing</category>
<content:encoded><![CDATA[<p><img src="https://martinmasevski.dev/images/blog/building-local-first-browser-tts-studio-kokoro.png" alt="A cyberpunk illustration of Martin in a neon-lit recording studio operating a futuristic audio console. Glowing soundwaves flow from a &apos;Kokoro ONNX&apos; hopper, surrounded by Vue.js holograms and an &apos;AI Speed&apos; rocket anchored by chains labeled &apos;Linting&apos; and &apos;Accessibility&apos;"></p><pre>I wanted audio versions of my blog posts. Just a play button for an article, with speech that sounded natural enough that people would actually use it. I could have solved that by paying for a hosted text-to-speech (TTS) service and moving on, but I wanted more control than that.

## Hosted TTS was the baseline, not the answer

I tried [ElevenLabs](https://elevenlabs.io/) first, and to be fair, it worked. The output quality was good, setup was easy, and it immediately proved that article audio was worth doing.

It still felt wrong for what I actually needed.

I did not want a big platform with a long feature list, pricing I had to keep thinking about, and my content flowing through someone else&apos;s product just to narrate a few articles. My use case was much smaller and much more specific: take text I already own, turn it into audio, keep the workflow simple, and keep the whole thing under my control.

That pushed me toward open models. I wanted something I could inspect, adapt, and run locally. Once I found the [Kokoro ONNX model on Hugging Face](https://huggingface.co/onnx-community/Kokoro-82M-v1.0-ONNX), the project stopped looking like a vague idea and started looking like an engineering problem I actually wanted to solve.

## From Python proof of concept to LocalVoice Studio

I started with the smallest possible test: a Python script in the terminal. Paste in some text, run synthesis, listen back, repeat. That first proof of concept was rough, but the voice quality was good enough that I immediately stopped thinking in terms of &quot;can this work?&quot; and started thinking in terms of &quot;how far can I push this in the browser?&quot;

That turned into [LocalVoice Studio](https://localvoice-studio.netlify.app), a static frontend app built with Vue, TypeScript, ONNX Runtime Web, and `kokoro-js`. No backend. No cloud inference. No telemetry. Speech generation happens in the browser and, after the initial model download, the app is meant to keep working offline too. That distinction matters. &quot;Local-first&quot; is easy to say, but I wanted to be clear about what it means in practice. The model download is a one-time cost, but the app&apos;s behavior after that is what defines the experience.

At that point the goal changed a little. I was no longer building a thin article player. I was building a small studio for producing article audio, one where I could give the audio a personal touch and play with the voice settings. I wanted it to feel like a creative tool, not just a synthesis utility.

## The difference between a demo and a tool

The model itself was only part of the work. The harder part was making browser TTS feel stable enough that I would trust it for real writing.

Inference runs in a dedicated worker because I had no interest in freezing the UI every time the model initialized or generated audio. The app prefers WebGPU when the browser can support it, but it can retry on WASM if GPU startup fails. That fallback path was one of the most important decisions in the project. Browser AI is full of features that look impressive on one machine and fall apart on the next. If runtime selection is brittle, the app is brittle.
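
The fallback logic can be sketched roughly like this. The `init` callback is a hypothetical stand-in for whatever actually loads the model (kokoro-js / ONNX Runtime Web); the names here are illustrative, not LocalVoice Studio's actual API:

```typescript
type Device = "webgpu" | "wasm"

// Try the fast path first; if GPU startup throws, retry on the portable one.
async function pickDevice(
  init: (device: Device) => Promise<void>,
  hasWebGPU: boolean,
): Promise<Device> {
  if (hasWebGPU) {
    try {
      await init("webgpu")
      return "webgpu"
    } catch {
      // GPU startup failed (missing adapter, driver quirks): fall through to WASM
    }
  }
  await init("wasm") // baseline path; let errors here surface to the caller
  return "wasm"
}
```

The important property is that a WebGPU failure is recoverable, not fatal: the user gets slower synthesis instead of a broken app.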

I also did not want voice settings to feel like blind guesses. So the app generates previews, keeps local history, stores presets in the browser, supports voice blending, and lets you control pronunciation more directly with inline markup for pauses, stress, and phoneme overrides. That is the part I enjoyed most as a web developer. Once the raw speech quality is good enough, the real product work shifts to feedback loops. Can I preview a change quickly? Can I recover a setting I liked? Can I export something useful without extra cleanup?
To me, delivering a good user experience is more important than putting &quot;AI-powered&quot; on the landing page.
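
The preset part of that loop can be sketched as a small persistence layer. This is a hypothetical store, not LocalVoice Studio's actual code: the shape of `VoicePreset` and the storage key are made up for illustration. In the browser, `localStorage` satisfies the `KVStore` interface directly:

```typescript
interface KVStore {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
}

interface VoicePreset {
  name: string
  speed: number
  voices: Record<string, number> // voice id -> blend weight
}

const PRESETS_KEY = "voice-presets"

function loadPresets(store: KVStore): VoicePreset[] {
  try {
    return JSON.parse(store.getItem(PRESETS_KEY) ?? "[]") as VoicePreset[]
  } catch {
    return [] // corrupted storage: start fresh rather than crash
  }
}

function savePreset(store: KVStore, preset: VoicePreset): void {
  // Overwrite any preset with the same name, keep the rest
  const rest = loadPresets(store).filter((p) => p.name !== preset.name)
  store.setItem(PRESETS_KEY, JSON.stringify([...rest, preset]))
}
```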

That is also why I kept the output practical. You can generate speech, play it back, download it as WAV, clear caches, and keep moving. It needed to feel like a local tool I could come back to, not a one-shot demo I would show once and forget.
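
The WAV part is worth sketching, because it is the standard RIFF layout rather than anything exotic. This is a generic mono 16-bit PCM encoder under that assumption, not LocalVoice Studio's actual export code:

```typescript
// Encode Float32 samples (range [-1, 1]) as a mono 16-bit PCM WAV file.
function encodeWav(samples: Float32Array, sampleRate: number): ArrayBuffer {
  const headerSize = 44
  const dataSize = samples.length * 2 // 2 bytes per 16-bit sample
  const buffer = new ArrayBuffer(headerSize + dataSize)
  const view = new DataView(buffer)

  const writeAscii = (offset: number, text: string) => {
    for (let i = 0; i < text.length; i++) view.setUint8(offset + i, text.charCodeAt(i))
  }

  writeAscii(0, "RIFF")
  view.setUint32(4, 36 + dataSize, true) // RIFF chunk size
  writeAscii(8, "WAVE")
  writeAscii(12, "fmt ")
  view.setUint32(16, 16, true) // fmt chunk size
  view.setUint16(20, 1, true) // audio format: PCM
  view.setUint16(22, 1, true) // channels: mono
  view.setUint32(24, sampleRate, true)
  view.setUint32(28, sampleRate * 2, true) // byte rate
  view.setUint16(32, 2, true) // block align
  view.setUint16(34, 16, true) // bits per sample
  writeAscii(36, "data")
  view.setUint32(40, dataSize, true)

  // Clamp each float to [-1, 1] and scale to signed 16-bit
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]))
    view.setInt16(headerSize + i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true)
  }
  return buffer
}
```

From there it is a `Blob` and an `<a download>` link, which is about as boring as a download path should be.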

## AI made it fast - guardrails made it shippable

This project was also an experiment in AI-assisted development. I used Codex in VS Code, Gemini through Antigravity, and the usual modern tooling around a Vue codebase. The speed is real. You can move from rough idea to working interface much faster than I could have a year ago.

The part people keep underselling is how quickly that speed turns into drift if you do not put boundaries around it.

I had to be explicit about the shape of the project: static frontend only, worker-driven inference, typed state, clear constraints around privacy, and no backend escape hatches. I also had to keep the boring guardrails in place from the start: linting, formatting, Vitest, Playwright, accessibility checks, and runtime recovery when WebGPU failed. That was the QA lead part of my brain taking over. I do not really care that an agent can scaffold ten components in a minute if the result becomes impossible to trust once real users touch it.

AI is great at accelerating implementation inside a box. It is much worse at deciding where the box should be, how strict it needs to stay, and what quality bar the project has to clear before it deserves to ship. That part is still on us.

## What I took away

I did not build this because I think every problem needs a local AI app. I built it because this one did. Article audio is a narrow problem, and a browser-first, open-source TTS tool turned out to be a much better fit for it than another subscription.

The bigger lesson for me is that local-first AI gets interesting when you stop treating the model as the product. The product is everything around it: fallback behavior, caching, previews, export, tests, and enough constraints that the code does not dissolve into vibes halfway through the build.

If you want to study or reuse the approach, the [repo is here](https://github.com/Archetipo95/localvoice-studio). If you are building something similar, I would start with the smallest proof of concept you can get running, then spend more time than you think on guardrails. The demo is the easy part. Making it hold up is the work.</pre>]]></content:encoded>
</item>
<item>
<title>Debugging remote timezone issues with Chrome DevTools MCP</title>
<link>https://martinmasevski.dev/blog/debugging-timezone-chrome-devtools-mcp</link>
<guid isPermaLink="true">https://martinmasevski.dev/blog/debugging-timezone-chrome-devtools-mcp</guid>
<pubDate>Sat, 14 Mar 2026 00:00:00 GMT</pubDate>
<author>Martin Masevski</author>
<description>How a timezone bug turned a calendar date into a wrong day, and how Chrome DevTools MCP helped debug it without leaving VS Code.</description>
<category>mcp</category><category>debugging</category><category>ai</category><category>devtools</category><category>vitest</category>
<content:encoded><![CDATA[<p><img src="https://martinmasevski.dev/images/blog/debugging-timezone-chrome-devtools-mcp.png" alt="An editorial cartoon of a developer caught between two giant glowing clocks showing different times, surrounded by neon-lit debugging tools and timezone labels."></p><pre>A US-based client opened a ticket: the travel itinerary showed **February 26th** instead of **February 27th** for a Rome-to-Milan transfer. One day off. In travel, that&apos;s a stranded passenger.

I debugged the whole thing from VS Code using [Chrome DevTools MCP](https://github.com/ChromeDevTools/chrome-devtools-mcp), which lets an AI agent control Chrome&apos;s debugging tools (network inspection, console, DOM) through the [Model Context Protocol](https://modelcontextprotocol.io/). Instead of manually toggling Chrome&apos;s timezone settings and reloading four times, I had the agent run timezone simulations in the browser console, inspect the network payload, and verify the fix. One editor session, start to finish.

---

## The bug

The client in New York saw this:

| Field    | Expected          | What they saw     |
| -------- | ----------------- | ----------------- |
| Pick up  | 27/02/2026, Rome  | 26/02/2026, Rome  |
| Drop off | 27/02/2026, Milan | 26/02/2026, Milan |

The dates were exactly one day off, both of them. Not random corruption, not a calculation error. A one-day shift that&apos;s consistent across fields smells like a timezone conversion that shouldn&apos;t be there. From my machine in Rome (UTC+1), everything looked correct, which made it worse. I needed to see what the client was actually seeing.

---

## Reproducing the bug with MCP

The traditional approach to timezone bugs: open Chrome DevTools, change the timezone in sensor overrides, reload, inspect. Repeat for each timezone. Slow, tedious, easy to mess up.

Instead, I had the MCP agent run a simulation directly in the browser console, testing how the frontend&apos;s date code behaves across four timezones at once:

```typescript
const serverDate = &quot;2026-02-27&quot;
const parsed = new Date(serverDate)

const timezones = [&quot;Europe/Rome&quot;, &quot;America/New_York&quot;, &quot;America/Los_Angeles&quot;, &quot;Asia/Tokyo&quot;]

const results: Record&lt;string, string&gt; = {}
timezones.forEach((tz) =&gt; {
  results[tz] = parsed.toLocaleDateString(&quot;it-IT&quot;, { timeZone: tz })
})
```

| Timezone            | Displayed date | Correct? |
| ------------------- | -------------- | -------- |
| Europe/Rome         | 27/02/2026     | ✓        |
| America/New_York    | 26/02/2026     | ✗        |
| America/Los_Angeles | 26/02/2026     | ✗        |
| Asia/Tokyo          | 27/02/2026     | ✓        |

Bug confirmed. Any negative UTC offset shows the previous day.

## Tracing the data

MCP&apos;s network inspection showed what the server was sending:

```json
{
  &quot;start_date&quot;: &quot;2026-02-27&quot;,
  &quot;end_date&quot;: &quot;2026-02-27&quot;,
  &quot;origin&quot;: { &quot;name&quot;: &quot;Rome&quot; },
  &quot;destination&quot;: { &quot;name&quot;: &quot;Milan&quot; }
}
```

Plain `YYYY-MM-DD` strings. No time component, no timezone offset. Just a calendar date. The server was fine.

## The root cause

The Vue component that renders the date:

```typescript
const formattedPickupDateTime = computed(() =&gt; {
  const date = new Date(props.pickupDate)
  const formatted = date.toLocaleDateString(&quot;it-IT&quot;)
  return { date: formatted, time } // `time` is computed elsewhere in the component
})
```

There it is. `new Date(&apos;2026-02-27&apos;)` parses the string as UTC midnight. Then `toLocaleDateString` converts to the user&apos;s local timezone.

In Rome (UTC+1): midnight UTC becomes 1:00 AM on the 27th. Correct.
In New York (UTC-5): midnight UTC becomes 7:00 PM on the 26th. Wrong.

The `Date` constructor was the trap. The string `&quot;2026-02-27&quot;` looks harmless, but once it enters JavaScript&apos;s Date system, it picks up timezone semantics nobody intended.

---

## The fix

The server sends calendar dates: a day on a calendar, not a moment in time. The fix was to stop treating them as moments:

```typescript
function dateToString(dateString: string): string {
  if (!dateString) return &quot;&quot;
  const cleanDateString = dateString.substring(0, 10)
  const [year, month, day] = cleanDateString.split(&quot;-&quot;)
  return `${day}/${month}/${year}`
}
```

Updated component:

```typescript
const formattedPickupDateTime = computed(() =&gt; {
  const date = dateToString(props.pickupDate)
  return { date, time }
})
```

No `new Date()`. The string `&quot;2026-02-27&quot;` gets split on hyphens and rearranged into `&quot;27/02/2026&quot;`. Same output in Tokyo, Rome, and New York.

After deploying, I used MCP again: the same simulation that found the bug now confirmed the fix. Four timezones, all showing 27/02/2026, no Chrome settings changed.

---

## Locking it down with tests

If anyone refactors `dateToString` to use `Date` objects later, these tests should catch it:

```typescript
import { describe, it, expect } from &quot;vitest&quot;
import dateToString from &quot;@/utils/dateToString&quot;

describe(&quot;dateToString&quot;, () =&gt; {
  it(&quot;reformats YYYY-MM-DD to DD/MM/YYYY&quot;, () =&gt; {
    expect(dateToString(&quot;2026-02-27&quot;)).toBe(&quot;27/02/2026&quot;)
  })

  it(&quot;returns empty string for falsy input&quot;, () =&gt; {
    expect(dateToString(&quot;&quot;)).toBe(&quot;&quot;)
  })

  it(&quot;handles date strings with time components&quot;, () =&gt; {
    expect(dateToString(&quot;2026-02-27T23:00:00Z&quot;)).toBe(&quot;27/02/2026&quot;)
    expect(dateToString(&quot;2026-02-27T00:00:00+01:00&quot;)).toBe(&quot;27/02/2026&quot;)
  })
})
```

The tests above verify correctness, but they won&apos;t catch a timezone regression. If someone refactors `dateToString` to use `new Date()`, the tests still pass on any machine where the local timezone has a positive UTC offset. They&apos;d only fail in CI if the runner happens to be in a negative-offset timezone.

To actually catch that, the standard way in a Node.js environment like [Vitest](https://vitest.dev/api/vi.html#vi-setsystemtime) is to set the `TZ` environment variable. In modern Node.js versions, you can even switch timezones dynamically between tests:

```typescript
import { it, expect } from &quot;vitest&quot;

it.each([
  { tz: &quot;America/New_York&quot;, expected: &quot;26/02/2026&quot; },
  { tz: &quot;Europe/Rome&quot;, expected: &quot;27/02/2026&quot; },
  { tz: &quot;Asia/Tokyo&quot;, expected: &quot;27/02/2026&quot; },
] as const)(&quot;produces $expected when TZ=$tz&quot;, ({ tz, expected }) =&gt; {
  process.env.TZ = tz
  expect(dateToString(&quot;2026-02-27&quot;)).toBe(expected)
})
```

_(Note: While Node.js 22+ handles `process.env.TZ` changes dynamically, in older versions the timezone was cached after the first `Date` was created. If you are on an older stack, you might need to run separate test processes or set the timezone at the very top of the file.)_

---

## Why this catches everyone

`new Date(&apos;2026-02-27&apos;)` parses as UTC midnight: `2026-02-27T00:00:00.000Z`. This is per [ECMA-262 §21.4.3.2](https://tc39.es/ecma262/#sec-date): date-only strings are interpreted as UTC.

Meanwhile, `new Date(2026, 1, 27)` creates a date in the _local_ timezone. Two constructors, two different timezone behaviors. I&apos;ve known this for years and still almost missed it.
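
The split behavior is easy to demonstrate. The first check holds in every timezone, because date-only strings are pinned to UTC; the second shows the two constructors differing by exactly the local UTC offset on that date:

```typescript
// Date-only string: parsed as UTC midnight, regardless of local timezone
const fromString = new Date("2026-02-27")
console.log(fromString.toISOString()) // "2026-02-27T00:00:00.000Z"

// Numeric constructor: local midnight (month is 0-indexed, so 1 = February)
const fromParts = new Date(2026, 1, 27)

// The gap between the two is the local UTC offset on that date, in minutes
const offsetMs = fromParts.getTimezoneOffset() * 60_000
console.log(fromParts.getTime() - fromString.getTime() === offsetMs) // true
```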

The original code was written and tested in Rome. UTC midnight is 1:00 AM in Rome, still on the 27th, so the bug was invisible during development. It only surfaces in negative-UTC timezones, where midnight UTC falls on the previous calendar day. I develop in Rome. Of course I didn&apos;t see it.

---

## What I took away

**If you&apos;re displaying a calendar date, don&apos;t put it through `Date`.** Split the string, rearrange, render. But document why. The next developer will look at `dateToString` and think &quot;why didn&apos;t they just use `toLocaleDateString`?&quot; The comment is part of the answer. The `TZ`-based tests are the rest: they show exactly what breaks if someone swaps in `new Date()`.

**Timezone bugs are invisible from positive-UTC locations.** UTC midnight is still the right calendar day in Western Europe and East Asia. The bug only shows up in the Americas.

**Chrome DevTools MCP collapses the debugging loop.** The traditional timezone-toggle routine takes minutes of clicking around per cycle. The agent tested four timezones in one console evaluation, then the same simulation verified the fix. The debugging context stayed in one place instead of splitting across browser tabs, terminals, and my head.

---

_Debugged with [Chrome DevTools MCP](https://github.com/ChromeDevTools/chrome-devtools-mcp), Vue 3 and Nuxt 4, and a healthy suspicion of `new Date()`._</pre>]]></content:encoded>
</item>
</channel>
</rss>