<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Danny</title><link>https://blog.dmcc.io/journal/</link><description>Notes on work, life, and the tools in between.</description><image><url>https://blog.dmcc.io/img/logo-circle-512.png</url><title>Danny</title><link>https://blog.dmcc.io/</link></image><generator>Hugo -- gohugo.io</generator><language>en-us</language><atom:link href="https://blog.dmcc.io/journal/index.xml" rel="self" type="application/rss+xml"/><item><title>Obsidian &amp; Claude: a match made in heaven</title><link>https://blog.dmcc.io/journal/obsidian-claude-personal-assistant/</link><pubDate>Wed, 18 Mar 2026 00:10:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/obsidian-claude-personal-assistant/#2026-03-19</guid><description>&lt;p>Before I get into this: there are probably a hundred ways to achieve what I&amp;rsquo;m about to describe, and most of them are equally valid. My setup didn&amp;rsquo;t arrive fully formed. It evolved over several months of trying things, breaking things, and slowly landing somewhere I&amp;rsquo;m actually happy with. If your approach is different, that&amp;rsquo;s fine. This is just what works for me.&lt;/p>
&lt;p>I wrote recently about &lt;a href="https://blog.dmcc.io/journal/ai-has-fixed-my-productivity/">how AI has fixed my productivity&lt;/a>. This post is about how, specifically.&lt;/p>
&lt;p>For a while, I&amp;rsquo;ve been using Claude for note-taking assistance, drafting, and the odd bit of code review. It was useful but fundamentally shallow: every conversation started from scratch, with no idea who I am, what I&amp;rsquo;m working on, or what happened in yesterday&amp;rsquo;s meetings. I&amp;rsquo;d spend more time explaining context than actually getting anything done.&lt;/p>
&lt;p>That&amp;rsquo;s the problem this setup solves. It&amp;rsquo;s not one clever hack but a few things working together: a structured Obsidian vault, reliable sync, a headless server, and Claude with enough connections that it can actually understand the shape of my day.&lt;/p>
&lt;p>&lt;img src="https://blog.dmcc.io/img/journal/obsidian-claude.png" alt="Obsidian vault open alongside Claude Code in the terminal">&lt;/p>
&lt;h2 id="starting-with-the-structure">Starting with the structure&lt;/h2>
&lt;p>The vault layout came almost entirely from a &lt;a href="https://www.thethinkers.club/p/how-i-structure-obsidian-and-claude">post by James Bedford on The Thinkers Club&lt;/a>. His five-folder structure is simple enough to stick to long term:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Polaris&lt;/strong>: current goals, priorities, and what&amp;rsquo;s top of mind&lt;/li>
&lt;li>&lt;strong>Logs&lt;/strong>: daily, weekly, monthly, and quarterly notes&lt;/li>
&lt;li>&lt;strong>Commonplace&lt;/strong>: individual thought notes and knowledge&lt;/li>
&lt;li>&lt;strong>Outputs&lt;/strong>: shared writing, meeting transcripts, anything AI-generated&lt;/li>
&lt;li>&lt;strong>Utilities&lt;/strong>: templates and reference material&lt;/li>
&lt;/ul>
&lt;p>The key principle I took from his approach: keep AI-generated content out of the main knowledge graph. Granola meeting notes, AI summaries, exported content: all of it lives in Outputs rather than mixed into Commonplace. The knowledge graph stays clean because it only contains things I actually wrote.&lt;/p>
&lt;p>Tags do a lot of the navigation work. I use nested hierarchies like &lt;code>#work/project-name&lt;/code> and &lt;code>#tech/networking&lt;/code>, which are portable (they survive app changes), queryable, and work well with Claude for targeted analysis.&lt;/p>
&lt;h2 id="the-sync-question">The sync question&lt;/h2>
&lt;p>Running Obsidian across multiple devices — laptop, phone, and a Linux server — means you need a sync strategy. I spent some time looking at the options: git-based sync, Syncthing, iCloud, various self-hosted approaches. Each had friction. Git sync works beautifully until you make a quick edit on your phone and forget to commit. Syncthing is fine but needs babysitting.&lt;/p>
&lt;p>I ended up on &lt;a href="https://obsidian.md/sync">Obsidian Sync&lt;/a>. It&amp;rsquo;s not free and it&amp;rsquo;s not self-hosted, but it just works. Notes appear on every device within a few seconds. The mobile client is properly maintained. There are no edge cases to think about. For a system that&amp;rsquo;s supposed to reduce friction, adding sync friction felt like the wrong trade.&lt;/p>
&lt;p>The one nuance: Obsidian Sync and automated writes can conflict if writes aren&amp;rsquo;t handled carefully. For a while I had Claude editing vault files directly via the filesystem, which worked fine, until it didn&amp;rsquo;t. Sync errors started cropping up: partial writes propagating to other devices, files getting into odd states. Switching to the MCP approach (more on that below) fixed this entirely, because the server uses atomic writes: write to a temp file first, then rename it into place. Sync never sees a half-written file.&lt;/p>
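&lt;p>The atomic-write pattern is simple enough to sketch in a few lines of shell. This is an illustration of the technique, not the MCP server&amp;rsquo;s actual code: write the complete content to a temporary file in the same directory, then rename it over the target. Because a same-filesystem rename is atomic, a sync client only ever sees the old file or the new one, never a half-written state.&lt;/p>

```shell
# Sketch of the atomic-write technique (illustrative, not the server's code).
vault="./vault"
note="$vault/Daily.md"
mkdir -p "$vault"

# 1. Write the full content to a temp file beside the target, so the
#    rename below stays on the same filesystem (and stays atomic).
tmp="$(mktemp "$vault/.Daily.md.XXXXXX")"
printf '# Daily note\nUpdated safely.\n' > "$tmp"

# 2. Rename into place. Readers (including the sync client) never
#    observe a partially written file.
mv -f "$tmp" "$note"
```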
&lt;h2 id="headless-obsidian-on-linux">Headless Obsidian on Linux&lt;/h2>
&lt;p>Obsidian ships an &lt;a href="https://help.obsidian.md/headless">official headless mode&lt;/a>, designed specifically for running on servers without a display. I&amp;rsquo;m running it on a cloud-hosted Ubuntu server on Hetzner: Obsidian headless starts on boot, connects to Obsidian Sync, and keeps the vault directory current. No desktop, no GUI, just the sync process doing its job in the background.&lt;/p>
&lt;p>The vault ends up as a folder of markdown files on disk, always up to date, ready for the MCP server to read and write.&lt;/p>
&lt;h2 id="the-web-mcp">The web MCP&lt;/h2>
&lt;p>Most Obsidian MCP servers are local stdio servers: they work when Claude Code is running on the same machine as your vault. That&amp;rsquo;s a reasonable setup on a single machine, but it means Claude.ai on the web and Claude on your phone can&amp;rsquo;t reach your vault at all.&lt;/p>
&lt;p>&lt;a href="https://github.com/jimprosser/obsidian-web-mcp">obsidian-web-mcp&lt;/a> by Jim Prosser takes a different approach. It&amp;rsquo;s a persistent HTTP service that runs on the machine where your vault lives, authenticated via OAuth 2.0, and accessible over the network. The vault files never leave your machine.&lt;/p>
&lt;p>For remote access, the README suggests Cloudflare Tunnel. I already run &lt;a href="https://tailscale.com/">Tailscale&lt;/a> across all my devices, so I went that route instead: the MCP server listens on the local network, Tailscale makes it reachable from my phone, laptop, and anywhere else I&amp;rsquo;m signed in. No inbound ports, no public IP, same security guarantees. If you&amp;rsquo;re already on Tailscale, it&amp;rsquo;s the path of least resistance.&lt;/p>
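&lt;p>If you want to reproduce the Tailscale route, the shape of it is roughly this. The port is a placeholder for wherever the MCP server listens, and &lt;code>tailscale serve&lt;/code> flags vary between versions, so treat it as a sketch:&lt;/p>

```shell
# On the server: join the tailnet (one-off), then proxy the
# locally-listening MCP service over HTTPS inside the tailnet.
sudo tailscale up
sudo tailscale serve --bg 3000   # 3000 is a placeholder port

# From any signed-in device, the service is reachable via the
# server's MagicDNS name, with no ports opened to the internet.
```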
&lt;p>The tool set covers everything you&amp;rsquo;d need: reading files, searching full text, querying frontmatter, writing, moving, and batch operations. Soft deletes move files to &lt;code>.trash/&lt;/code> rather than permanently removing them, which matches Obsidian&amp;rsquo;s own behaviour. Setting it up took about an hour. The MCP server runs as a systemd service on my Linux server, pointed at the vault directory that Obsidian Sync keeps current.&lt;/p>
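&lt;p>For reference, the unit file is about as small as systemd units get. The paths, user, and start command below are placeholders for my setup rather than the project&amp;rsquo;s documented install, so check the obsidian-web-mcp README for the real invocation:&lt;/p>

```ini
# /etc/systemd/system/obsidian-mcp.service (illustrative paths)
[Unit]
Description=Obsidian web MCP server
After=network-online.target

[Service]
User=danny
# Placeholder command and flags; see the project README.
ExecStart=/usr/local/bin/obsidian-web-mcp --vault /home/danny/vault
Restart=on-failure

[Install]
WantedBy=multi-user.target
```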
&lt;h2 id="the-full-picture">The full picture&lt;/h2>
&lt;p>With the vault accessible as an MCP tool, Claude becomes genuinely context-aware. But the vault is only one piece. I also have Claude connected to:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Linear&lt;/strong>: work task management, where all my open issues live&lt;/li>
&lt;li>&lt;strong>Granola&lt;/strong>: records and transcribes work meetings, then exports them to the Obsidian vault automatically via my &lt;a href="https://github.com/dannymcc/Granola-to-Obsidian">Granola-to-Obsidian&lt;/a> extension&lt;/li>
&lt;li>&lt;strong>Work calendar&lt;/strong>: read and write access, so Claude can check availability, flag changes, and create events&lt;/li>
&lt;li>&lt;strong>Slack&lt;/strong>: connected to my work workspace, so Claude can read DMs, send messages, and flag conversations that need a reply&lt;/li>
&lt;/ul>
&lt;p>That combination is where things get interesting. Every work meeting I attend ends up as a structured note in &lt;code>4. Outputs/Granola/&lt;/code>. My open tasks across every project exist in Linear. My daily notes, thinking, and priorities live in Obsidian. Claude can see all of it.&lt;/p>
&lt;h2 id="what-this-actually-looks-like-in-practice">What this actually looks like in practice&lt;/h2>
&lt;p>Here&amp;rsquo;s a concrete example. I can tell Claude: &amp;ldquo;arrange a call with Paul and set the agenda to any open or relevant topics we have.&amp;rdquo;&lt;/p>
&lt;p>From that single sentence, Claude will go and do all of this without further prompting. It checks our recent meeting notes in the Obsidian vault, where every Granola transcript lives thanks to the export extension, to see what we&amp;rsquo;ve already covered and what was left unresolved. It searches Linear for any open issues assigned to either of us that would be worth discussing. It looks at both our calendars to find a slot that works. Then it creates the calendar invitation, attaches a ready-made agenda of open topics pulled from those sources, and sends it.&lt;/p>
&lt;p>If I want, it can also drop Paul a Slack message to give him a heads up and ask if there&amp;rsquo;s anything he&amp;rsquo;d like to add.&lt;/p>
&lt;p>The reason that works isn&amp;rsquo;t clever prompting. It&amp;rsquo;s that every source Claude needs is connected and indexed: meeting history in Granola, tasks in Linear, calendar events directly, and any relevant notes in Obsidian. The context is already there. The prompt just tells it what to do with it.&lt;/p>
&lt;h2 id="quick-capture-on-the-go">Quick capture on the go&lt;/h2>
&lt;p>One of the inputs into this whole system is an iPhone shortcut triggered by the Action Button. Press it, type whatever&amp;rsquo;s on my mind, done. The shortcut drops the note straight into &lt;code>3. Commonplace/Personal/Quick Thoughts&lt;/code> in the vault.&lt;/p>
&lt;p>What I type varies wildly. Sometimes it&amp;rsquo;s a task I don&amp;rsquo;t want to forget. Sometimes it&amp;rsquo;s a movie review I want to write up while it&amp;rsquo;s fresh. Sometimes it&amp;rsquo;s a half-formed idea. It doesn&amp;rsquo;t matter, and that&amp;rsquo;s the point: the only rule is zero friction at the point of capture. I don&amp;rsquo;t categorise it, tag it, or decide where it belongs. I just type and close my phone.&lt;/p>
&lt;p>Claude handles the rest during the morning routine. It reads whatever landed in Quick Thoughts overnight and decides what to do with it. Work tasks become Linear issues and get added to the daily note. Reminders become checkboxes. A movie review gets filed as a note. Anything ambiguous gets surfaced to me with a suggested action rather than silently misfiled.&lt;/p>
&lt;h2 id="the-morning-routine">The morning routine&lt;/h2>
&lt;p>The part that pulled this together was building an automated morning routine. A launchd job fires at 8am and runs a single Claude command. Claude then:&lt;/p>
&lt;ol>
&lt;li>Checks today&amp;rsquo;s daily note, creating it from template if it doesn&amp;rsquo;t exist yet&lt;/li>
&lt;li>Compares the calendar against what&amp;rsquo;s already in the note, flagging any changes overnight&lt;/li>
&lt;li>Processes anything in Quick Thoughts: tasks become checkboxes and Linear issues, everything else gets filed or surfaced&lt;/li>
&lt;li>Pulls every open Linear issue assigned to me across all teams and adds anything not already listed to the Tasks section, sorted by urgency&lt;/li>
&lt;li>Checks Slack for any DMs where the last message is from someone else and I haven&amp;rsquo;t replied, flagging each one with the gist of what they need&lt;/li>
&lt;li>Surfaces a short briefing: calendar changes, extracted tasks, overdue issues, unreplied DMs, and a one-line reminder of the day&amp;rsquo;s key meetings&lt;/li>
&lt;/ol>
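&lt;p>The launchd side of this is a single small plist. The label, binary path, and prompt below are invented placeholders, not my actual file, but the shape is what matters: run one command every morning at 8am:&lt;/p>

```xml
&lt;?xml version="1.0" encoding="UTF-8"?>
&lt;plist version="1.0">
&lt;dict>
  &lt;key>Label&lt;/key>
  &lt;string>io.dmcc.morning-routine&lt;/string>
  &lt;key>ProgramArguments&lt;/key>
  &lt;array>
    &lt;string>/usr/local/bin/claude&lt;/string>
    &lt;string>-p&lt;/string>
    &lt;string>Run the morning routine described in CLAUDE.md&lt;/string>
  &lt;/array>
  &lt;key>StartCalendarInterval&lt;/key>
  &lt;dict>
    &lt;key>Hour&lt;/key>
    &lt;integer>8&lt;/integer>
    &lt;key>Minute&lt;/key>
    &lt;integer>0&lt;/integer>
  &lt;/dict>
&lt;/dict>
&lt;/plist>
```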
&lt;p>The daily note ends up pre-populated before I&amp;rsquo;ve looked at a screen. Meetings have log entries ready to fill in. Tasks from both Linear and overnight captures are already there as checkboxes.&lt;/p>
&lt;p>At the end of the day, I just tell Claude &amp;ldquo;let&amp;rsquo;s wrap up today and get ready for tomorrow.&amp;rdquo; It summarises the meetings from the vault into the Logs section of the daily note, then creates tomorrow&amp;rsquo;s note pre-populated with whatever Linear items are due.&lt;/p>
&lt;h2 id="where-this-lands">Where this lands&lt;/h2>
&lt;p>None of this required building anything particularly complex. The vault structure is someone else&amp;rsquo;s good idea, Obsidian Sync is an off-the-shelf product, obsidian-web-mcp is an open source project, and Claude handles the orchestration. The work was in connecting the pieces and writing clear enough instructions (in a &lt;code>CLAUDE.md&lt;/code> file in the vault) that Claude knows what to do with the context it has.&lt;/p>
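&lt;p>To give a flavour of what &amp;ldquo;clear enough instructions&amp;rdquo; means in practice, here&amp;rsquo;s the shape of that file. These lines are invented for illustration rather than copied from my actual &lt;code>CLAUDE.md&lt;/code>:&lt;/p>

```markdown
# CLAUDE.md (excerpt, illustrative)

## Vault rules
- Never write AI-generated content outside `4. Outputs/`.
- Daily notes live in Logs and are created from the template in Utilities.

## Morning routine
1. Create today's daily note from the template if it doesn't exist.
2. Process everything in `3. Commonplace/Personal/Quick Thoughts`.
3. If an item is ambiguous, surface it to me with a suggested action;
   never file it silently.
```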
&lt;p>The result is a morning briefing that knows what meetings I have, what I was supposed to finish yesterday, and what landed in my capture queue overnight. It&amp;rsquo;s not magic, but it does mean the first five minutes of my day involve less context reconstruction and more actually thinking.&lt;/p>
&lt;p>Beyond the routine, what I&amp;rsquo;ve ended up with is a genuinely resourceful personal assistant that understands the full context of my working life, and a comprehensive record of pretty much everything I do at work. Every meeting, every task, every quick thought captured on the go: it&amp;rsquo;s all there, connected, and queryable. That&amp;rsquo;s not something I had before, and it turns out it&amp;rsquo;s more useful than I expected.&lt;/p></description></item><item><title>Daemon: a 2006 techno-thriller that reads like a 2026 product roadmap</title><link>https://blog.dmcc.io/journal/daemon-daniel-suarez-predictions/</link><pubDate>Sat, 28 Feb 2026 00:10:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/daemon-daniel-suarez-predictions/#2026-02-28</guid><description>&lt;p>&lt;em>Fair warning: this post contains spoilers for both Daemon and Freedom™. Then again, the books came out twenty years ago; if you haven&amp;rsquo;t read them by now, you probably weren&amp;rsquo;t going to. I&amp;rsquo;d still highly recommend both.&lt;/em>&lt;/p>
&lt;p>I finished re-reading Daniel Suarez&amp;rsquo;s &lt;em>Daemon&lt;/em> and its sequel &lt;em>Freedom™&lt;/em> a few weeks ago. I first picked them up years back and thought they were solid techno-thrillers with some wild ideas baked into an entertaining plot. Reading them again in 2026, they&amp;rsquo;re just as gripping, but for somewhat different reasons. The realism has caught up in a way I wasn&amp;rsquo;t expecting. When I first read these books, I took them as clever speculation about what the future might look like. Now I&amp;rsquo;m reading them and thinking: yeah, that exists. That too. And that. The fiction hasn&amp;rsquo;t aged; the real world has just gone ahead and built most of it.&lt;/p>
&lt;p>The premise is straightforward: Matthew Sobol, a dying game developer, leaves behind a distributed AI program that activates after his death. The Daemon, as it&amp;rsquo;s called, begins infiltrating systems, recruiting operatives through an online game-like interface, and systematically restructuring society. In &lt;em>Freedom™&lt;/em>, the sequel, that restructuring plays out in full: decentralised communities, alternative economies, mesh networks, and a population split between those plugged into the new system and those clinging to the old one.&lt;/p>
&lt;p>Suarez self-published &lt;em>Daemon&lt;/em> in 2006. That bears repeating. 2006. YouTube was a year old. The iPhone didn&amp;rsquo;t exist yet. And this guy was writing about autonomous vehicles, augmented reality glasses, voice-controlled AI agents, distributed botnets acting with real-world consequences, and desktop fabrication units. Not as far-future sci-fi set in 2150, but as things that were five to ten years away.&lt;/p>
&lt;h2 id="the-tech-that-landed">The tech that landed&lt;/h2>
&lt;p>The Daemon&amp;rsquo;s entire existence starts with what is essentially a cron job. Sobol&amp;rsquo;s program sits dormant, scraping news headlines, waiting for a specific trigger: reports of his own death. When it finds them, it wakes up and starts executing. It wasn&amp;rsquo;t a sentient AI gone rogue, complete with a dramatic moment where it comes into being. Just a script polling RSS feeds on a schedule, pattern-matching against text, and firing off the next step in a chain. I had something similar with &lt;a href="https://openclaw.ai">OpenClaw&lt;/a> for a while. Not the assassinations, obviously, but the same fundamental architecture of scheduled tasks that wake up, pull information from the internet, process it, and take action without any human prompting. Morning briefings, inbox sweeps, periodic research jobs. The Daemon&amp;rsquo;s trigger mechanism felt sinister in 2006. Now it&amp;rsquo;s a feature you can configure in a YAML file. Yep, I know what you&amp;rsquo;re thinking - we&amp;rsquo;ve had cron for a long time and this part was possible even before the book was written - but this is just the book&amp;rsquo;s first chapter.&lt;/p>
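&lt;p>The whole trigger mechanism fits in a few lines. Here&amp;rsquo;s a toy version in shell, with a local file standing in for the news feed so the sketch is self-contained; in practice a cron job would fetch a real RSS URL:&lt;/p>

```shell
# Toy version of the Daemon's trigger: poll, pattern-match, act.
feed="feed.txt"

# Simulate a scraped headline landing in the feed.
printf 'Game designer Matthew Sobol dead at 34\n' > "$feed"

# On each scheduled run, check the feed for the trigger phrase.
if grep -qi 'sobol dead' "$feed"; then
  echo 'trigger matched: waking up and executing the next step'
fi
```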
&lt;p>Then there are the autonomous machines. Sobol&amp;rsquo;s Daemon deploys &amp;ldquo;AutoM8s&amp;rdquo;: driverless vehicles that transport operatives and, in the book&amp;rsquo;s darker moments, act as weapons. It also uses robotic ground units for surveillance and enforcement. In 2006, this was pure fiction. Now Boston Dynamics has Spot, a quadruped robot dog that autonomously navigates terrain, avoids obstacles, and self-charges. Their Atlas humanoid can do backflips, parkour courses, and 540-degree inverted flips. These are real machines you can watch on YouTube doing things that would have read as absurd twenty years ago. Suarez&amp;rsquo;s vision of autonomous robots patrolling and operating independently isn&amp;rsquo;t a prediction anymore, it&amp;rsquo;s a product catalogue.&lt;/p>
&lt;p>The always-connected vehicle is another one. In Daemon, the AutoM8s are permanently networked, receiving instructions and sharing data in real time. Every Tesla on the road today is essentially this. Always online, streaming telemetry back to the mothership, receiving over-the-air updates, and feeding its camera data into a collective neural network. The car you&amp;rsquo;re driving is a node in someone else&amp;rsquo;s distributed system. Sobol would have appreciated the irony of people voluntarily buying into that.&lt;/p>
&lt;p>One of the creepier technologies in the books is WiFi-based surveillance, using wireless signals to detect and track people through walls. Suarez wrote about this as a covert capability the Daemon could exploit. Carnegie Mellon researchers have since built exactly that. Their &amp;ldquo;DensePose from WiFi&amp;rdquo; system uses standard WiFi router signals to reconstruct human poses in real time, even through solid walls. The reflected signals carry enough information about body shape and movement that a neural network can map what you&amp;rsquo;re doing in a room without a single camera. It works through drywall, wood, and even concrete up to a point, and none of this is classified military tech. It&amp;rsquo;s published academic research that anyone can read.&lt;/p>
&lt;p>The acoustic weapon is probably the one that catches people off guard the most. In Daemon, there&amp;rsquo;s a directed sound system that can make audio appear to come from right beside you while no one else in the room hears a thing. It sounds like science fiction until you look up parametric speakers. Companies like Holosonics have been selling &amp;ldquo;Audio Spotlight&amp;rdquo; systems for years. They work by emitting &amp;ldquo;modulated ultrasonic beams that demodulate into audible sound only within a tight, targeted area&amp;rdquo; - I&amp;rsquo;ve experienced these in airports, but have no idea what that quote actually means. Museums, airports, and retailers already use them, and the military has explored them for crowd control. The effect is exactly what Suarez described, sound that seems to materialise out of thin air, audible only to the person standing in the beam, and you can buy one commercially right now.&lt;/p>
&lt;p>The social dynamics might be the most on-the-nose parallel of all. In the books, the Daemon recruits human operatives to carry out tasks in the physical world. It finds people, assigns them work, and pays them through its own system. The humans don&amp;rsquo;t fully understand the bigger picture. They just complete their tasks and collect their reward. In January 2026, a site called &lt;a href="https://rentahuman.ai/">RentAHuman.ai&lt;/a> launched. It&amp;rsquo;s a platform where OpenClaw AI agents can hire actual people to perform tasks for them. Humans sign up with their skills and hourly rate, AI agents post jobs, and people complete them for payment in stablecoins. Over 40,000 people registered within days. The framing is different, obviously. It&amp;rsquo;s gig work, not a shadowy network of mindless humans - arguably. But the underlying structure is identical. AI systems delegating physical-world tasks to human operatives who sign up voluntarily, motivated by compensation and a sense of participation in something larger. Suarez wrote it as dystopian fiction that, in 2006, read like something only the unhinged would sign up for. We built it as a startup, and it got very popular, very quickly.&lt;/p>
&lt;h2 id="what-he-got-wrong-sort-of">What he got wrong (sort of)&lt;/h2>
&lt;p>The 10% that hasn&amp;rsquo;t happened is mostly about scale and centralisation. Sobol&amp;rsquo;s Daemon is a single, coherent system with an architect&amp;rsquo;s intent behind every action. Real distributed systems don&amp;rsquo;t work like that. AI development has been messy, competitive, and fragmented across hundreds, perhaps even thousands, of companies and research labs. There&amp;rsquo;s no singular Daemon pulling strings, just a chaotic landscape of overlapping systems with no one fully in control. Which, depending on your perspective, might actually be worse.&lt;/p>
&lt;p>The weaponised autonomous vehicles haven&amp;rsquo;t materialised in the way Suarez imagined either, though military drones certainly have. The line between his fiction and real-world drone warfare is thinner than most people would be comfortable with.&lt;/p>
&lt;p>And the neat resolution in &lt;em>Freedom™&lt;/em>, where Darknet communities build something genuinely better, still feels like the most fictional part of the whole thing. We&amp;rsquo;ve got the decentralised technology. We&amp;rsquo;ve got the mesh networks and the alternative currencies. What we haven&amp;rsquo;t got is the social cohesion to do anything coherent with them. Crypto became a speculative casino with massive peaks and equal troughs. The tools exist, but the utopian bit remains out of reach.&lt;/p>
&lt;h2 id="why-it-matters-now">Why it matters now&lt;/h2>
&lt;p>Suarez wasn&amp;rsquo;t writing from some academic ivory tower or speculating about technology he&amp;rsquo;d never touched. He was an IT consultant who spent years working with Fortune 1000 companies, and you can feel that experience on every page. He understood how systems actually work, how they fail, and how they get exploited, which is what makes re-reading both books such a strange experience. He wasn&amp;rsquo;t guessing at any of this. He was extrapolating from things he could already see forming, and doing it with an accuracy that I genuinely wouldn&amp;rsquo;t have believed twenty years ago.&lt;/p>
&lt;p>If you haven&amp;rsquo;t read &lt;em>Daemon&lt;/em> and &lt;em>Freedom™&lt;/em>, go and read them. I track everything I read on &lt;a href="https://blog.dmcc.io/books/">Hardcover&lt;/a>, and both of these are easy five-star picks. They&amp;rsquo;re fantastic books on their own merits. The pacing is relentless, the technical detail is sharp without being dry, and the plot keeps pulling you forward. I&amp;rsquo;d recommend them even if none of the technology had come true.&lt;/p>
&lt;p>But it has, and not gradually over twenty years. The pace is accelerating. Half the parallels I&amp;rsquo;ve listed in this post didn&amp;rsquo;t exist even twelve months ago. OpenClaw&amp;rsquo;s cron system, RentAHuman.ai, the latest generation of Boston Dynamics robots: all 2025 or 2026 developments. The gap between Suarez&amp;rsquo;s fiction and our reality is closing faster each year, and that makes the books hit differently every time you revisit them. I suspect they&amp;rsquo;ll hit differently again in another twelve months, and I can&amp;rsquo;t wait to re-read them then.&lt;/p></description></item><item><title>AI has fixed my productivity</title><link>https://blog.dmcc.io/journal/ai-has-fixed-my-productivity/</link><pubDate>Wed, 18 Feb 2026 00:10:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/ai-has-fixed-my-productivity/#2026-02-18</guid><description>&lt;p>A Fortune survey doing the rounds this week has &lt;a href="https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technology-age/">thousands of CEOs admitting&lt;/a> that AI has had no measurable impact on employment or productivity. It&amp;rsquo;s being treated as vindication by the sceptics and a crisis by the vendors. I read it and thought: these people are using AI wrong.&lt;/p>
&lt;p>I use AI tools every day. Claude helps me write code. OpenClaw handles the kind of loose, conversational thinking I used to do on paper or in my head. Granola transcribes my meetings and a &lt;a href="https://github.com/dannymcc/Granola-to-Obsidian">plugin I built&lt;/a> pipes the notes straight into Obsidian. My email gets triaged before I look at it. Research gets compiled in minutes instead of hours. This stuff has genuinely changed how I work, and I don&amp;rsquo;t think I could go back.&lt;/p>
&lt;p>The CEO survey doesn&amp;rsquo;t prove AI is failing. It proves that most organisations have no idea how to deploy it.&lt;/p>
&lt;h2 id="what-actually-changed">What actually changed&lt;/h2>
&lt;p>The gains aren&amp;rsquo;t where the enterprise pitch decks said they&amp;rsquo;d be. Nobody handed me an AI tool that &amp;ldquo;transformed my workflow&amp;rdquo; in one go. What happened was slower and more specific: a dozen small frictions disappeared, and the cumulative effect was significant.&lt;/p>
&lt;p>Meeting notes are the obvious one. Before Granola, I&amp;rsquo;d either scribble while half-listening or pay attention and try to reconstruct things afterwards from memory. Both were bad. Now the transcript happens in the background, a summary lands in my Obsidian vault automatically, and I can actually be present in the conversation. That&amp;rsquo;s 20 minutes a day I got back, every day, without thinking about it.&lt;/p>
&lt;p>Code generation changed my relationship with side projects entirely. I&amp;rsquo;ve shipped things this year that I simply wouldn&amp;rsquo;t have started before: small tools, automations, scripts that solve a specific problem in an afternoon instead of a weekend. The AI doesn&amp;rsquo;t write production-quality code on its own, but it gets me from &amp;ldquo;I know what I want&amp;rdquo; to &amp;ldquo;I have something running&amp;rdquo; in minutes instead of hours. That speed difference matters. It&amp;rsquo;s the difference between &amp;ldquo;I&amp;rsquo;ll build that someday&amp;rdquo; and actually building it.&lt;/p>
&lt;p>Summarising long documents, compiling research, triaging email: none of these are exciting. But they used to eat real time. Now they don&amp;rsquo;t. The compound effect of reclaiming 30 or 40 minutes across a day is that my actual focus hours go further. I wrote about &lt;a href="https://blog.dmcc.io/journal/focus/">protecting those hours&lt;/a> last year, and AI tools have turned out to be one of the better ways to do it.&lt;/p>
&lt;h2 id="why-the-survey-got-it-wrong">Why the survey got it wrong&lt;/h2>
&lt;p>The CEO survey is measuring organisational productivity, which is a completely different thing from individual productivity. Most companies deployed AI by buying enterprise licences and hoping for the best. Copilot seats for every developer. ChatGPT access for every department. No training, no workflow integration, no clarity on what problems the tools were supposed to solve.&lt;/p>
&lt;p>That&amp;rsquo;s not an AI failure. That&amp;rsquo;s a deployment failure. It&amp;rsquo;s a silly analogy, but you wouldn&amp;rsquo;t buy everyone in the company a piano and then wonder why they aren&amp;rsquo;t all musicians a month later. That&amp;rsquo;s essentially what happened with AI in most organisations, and it hopefully illustrates the point.&lt;/p>
&lt;p>The productivity gains I&amp;rsquo;ve found came from figuring out, through months of trial and error, exactly where AI fits into my specific workflow. Not the generic &amp;ldquo;write me an email&amp;rdquo; stuff. The narrow, targeted things: transcription, code scaffolding, document summarisation, research triage. Each one required experimentation to get right. Most people in most companies haven&amp;rsquo;t done that work, and their employers aren&amp;rsquo;t helping them do it.&lt;/p>
&lt;p>There&amp;rsquo;s also a measurement problem. My 20 minutes saved on meeting notes doesn&amp;rsquo;t show up in a quarterly report. The side project I shipped in a day instead of a week doesn&amp;rsquo;t register as a productivity metric. The compounding effect of less friction across dozens of small tasks is invisible to anyone looking at spreadsheets. CEOs are looking for step-change improvements because that&amp;rsquo;s what they were sold. The actual gains are granular and personal, which makes them hard to count and easy to dismiss.&lt;/p>
&lt;h2 id="the-uncomfortable-bit">The uncomfortable bit&lt;/h2>
&lt;p>None of this is free. Every AI tool that makes me more productive does so by ingesting my work. My meeting transcripts, my code, my half-formed ideas, my entire stream of consciousness on a given day: all of it flows through systems I don&amp;rsquo;t own and can&amp;rsquo;t audit.&lt;/p>
&lt;p>I&amp;rsquo;ve spent the past year &lt;a href="https://blog.dmcc.io/journal/2025_my_privacy_reboot/">moving away from surveillance platforms&lt;/a>. I replaced Google Photos with Ente, Gmail with Migadu, WhatsApp with Signal. I run my own &lt;a href="https://blog.dmcc.io/journal/xmpp-turn-stun-coturn-prosody/">XMPP server&lt;/a>. I self-host my password manager. And yet I willingly feed more context into AI tools each day than Google ever passively collected from me.&lt;/p>
&lt;p>It&amp;rsquo;s a contradiction I haven&amp;rsquo;t resolved. The productivity gains are real enough that I&amp;rsquo;m not willing to give them up, but the privacy cost is real too, and I notice it. For companies putting their entire workforce&amp;rsquo;s output through third-party AI, the data governance implications are enormous. Most organisations haven&amp;rsquo;t thought about this seriously, which is another reason the CEO survey results look the way they do: they adopted the tools without understanding what they were trading.&lt;/p>
&lt;p>I&amp;rsquo;ve settled into an uneasy position: AI for work where the productivity gain justifies the privacy cost, strict boundaries everywhere else. It&amp;rsquo;s not philosophically clean. It&amp;rsquo;s just honest.&lt;/p>
&lt;h2 id="the-real-gap">The real gap&lt;/h2>
&lt;p>The gap isn&amp;rsquo;t between AI&amp;rsquo;s potential and its capability. The tools are good enough. The gap is between having access to AI and knowing how to use it well. That&amp;rsquo;s an individual skill, built through experimentation, and it doesn&amp;rsquo;t scale the way enterprise software purchases do.&lt;/p>
&lt;p>I&amp;rsquo;ll keep using these tools. They&amp;rsquo;ve made me measurably more productive in ways I can point to: time saved, projects shipped, focus protected. The CEOs in that survey aren&amp;rsquo;t wrong about what they&amp;rsquo;re seeing in their organisations. They&amp;rsquo;re just wrong about what it means. AI hasn&amp;rsquo;t failed. Most companies just haven&amp;rsquo;t figured it out yet.&lt;/p></description></item><item><title>Running My Own XMPP Server</title><link>https://blog.dmcc.io/journal/xmpp-turn-stun-coturn-prosody/</link><pubDate>Mon, 16 Feb 2026 00:00:10 +0000</pubDate><guid>https://blog.dmcc.io/journal/xmpp-turn-stun-coturn-prosody/#2026-02-14</guid><description>&lt;p>About a year ago I &lt;a href="https://blog.dmcc.io/journal/2025_my_privacy_reboot/">moved my personal messaging to Signal&lt;/a> as part of a broader push to take ownership of my digital life. That went well. Most of my contacts made the switch, and I&amp;rsquo;m now at roughly 95% Signal for day-to-day conversations. But Signal is still one company running one service. If they shut down tomorrow or change direction, I&amp;rsquo;m back to square one.&lt;/p>
&lt;p>XMPP fixes that. It&amp;rsquo;s federated, meaning your server talks to other XMPP servers automatically and you&amp;rsquo;re never locked into a single provider. Your messages live on your hardware. The protocol has been around since 1999 and it&amp;rsquo;s not going anywhere. I&amp;rsquo;d tried XMPP years ago and bounced off it, but the clients have come a long way since then. &lt;a href="https://monal-im.org/">Monal&lt;/a> and &lt;a href="https://conversations.im/">Conversations&lt;/a> are genuinely nice to use now.&lt;/p>
&lt;p>This post covers everything I did to get a fully working XMPP server running with &lt;a href="https://prosody.im/">Prosody&lt;/a> in Docker, from DNS records through to voice calls.&lt;/p>
&lt;h2 id="prerequisites">Prerequisites&lt;/h2>
&lt;ul>
&lt;li>A server with Docker and Docker Compose&lt;/li>
&lt;li>A domain you control&lt;/li>
&lt;li>TLS certificates (Let&amp;rsquo;s Encrypt works well)&lt;/li>
&lt;/ul>
&lt;h2 id="dns-records">DNS records&lt;/h2>
&lt;p>XMPP uses SRV records to let clients and other servers find yours. You&amp;rsquo;ll need these in your DNS:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">_xmpp-client._tcp.xmpp.example.com SRV 0 5 5222 xmpp.example.com.
_xmpp-server._tcp.xmpp.example.com SRV 0 5 5269 xmpp.example.com.
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Port 5222 is for client connections; port 5269 is for server-to-server federation. You&amp;rsquo;ll also want an A record pointing &lt;code>xmpp.example.com&lt;/code> to your server&amp;rsquo;s IP.&lt;/p>
&lt;p>If you want HTTP file uploads (I&amp;rsquo;d recommend it), add a CNAME or A record for &lt;code>upload.xmpp.example.com&lt;/code> pointing to the same server. Same for &lt;code>conference.xmpp.example.com&lt;/code> if you want group chats with a clean subdomain, though Prosody handles this internally either way.&lt;/p>
&lt;h2 id="tls-certificates">TLS certificates&lt;/h2>
&lt;p>Prosody won&amp;rsquo;t start without certificates. I use Let&amp;rsquo;s Encrypt with the Cloudflare DNS challenge so I don&amp;rsquo;t need to expose port 80:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">docker run --rm &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> -v ~/docker/xmpp/certs:/etc/letsencrypt &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> -v ~/docker/xmpp/cloudflare.ini:/etc/cloudflare.ini:ro &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> certbot/dns-cloudflare certonly &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> --dns-cloudflare &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> --dns-cloudflare-credentials /etc/cloudflare.ini &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> -d xmpp.example.com
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The &lt;code>cloudflare.ini&lt;/code> file contains your API token:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-ini" data-lang="ini">&lt;span style="color:#4070a0">dns_cloudflare_api_token&lt;/span> &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">your-cloudflare-api-token&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>After certbot runs, fix the permissions so Prosody can read the certs:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">chmod -R &lt;span style="color:#40a070">755&lt;/span> ~/docker/xmpp/certs/live/ ~/docker/xmpp/certs/archive/
chmod &lt;span style="color:#40a070">644&lt;/span> ~/docker/xmpp/certs/archive/xmpp.example.com/*.pem
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Set up a cron job to renew monthly. Cron doesn&amp;rsquo;t support backslash line continuations, so the entry has to be a single line:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#40a070">0&lt;/span> &lt;span style="color:#40a070">3&lt;/span> &lt;span style="color:#40a070">1&lt;/span> * * docker run --rm -v ~/docker/xmpp/certs:/etc/letsencrypt -v ~/docker/xmpp/cloudflare.ini:/etc/cloudflare.ini:ro certbot/dns-cloudflare renew --dns-cloudflare-credentials /etc/cloudflare.ini &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> docker restart xmpp
&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="the-docker-setup">The Docker setup&lt;/h2>
&lt;p>The &lt;code>docker-compose.yml&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="color:#007020;font-weight:bold">services&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">prosody&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">image&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>prosodyim/prosody:&lt;span style="color:#40a070">13.0&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">container_name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>xmpp&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">restart&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>unless-stopped&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">ports&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#4070a0">&amp;#34;5222:5222&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#4070a0">&amp;#34;5269:5269&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">volumes&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- prosody-data:/var/lib/prosody&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- ./prosody.cfg.lua:/etc/prosody/prosody.cfg.lua:ro&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- ./certs/live/xmpp.example.com/fullchain.pem:/etc/prosody/certs/xmpp.example.com.crt:ro&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- ./certs/live/xmpp.example.com/privkey.pem:/etc/prosody/certs/xmpp.example.com.key:ro&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">volumes&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">prosody-data&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Two ports exposed: 5222 for clients, 5269 for federation. The data volume holds user accounts and message archives. Config and certs are mounted read-only.&lt;/p>
&lt;h2 id="prosody-configuration">Prosody configuration&lt;/h2>
&lt;p>This is the core of it. I&amp;rsquo;ll walk through the key sections rather than dumping the whole file.&lt;/p>
&lt;h3 id="modules">Modules&lt;/h3>
&lt;p>Prosody is modular. My module list:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-lua" data-lang="lua">modules_enabled &lt;span style="color:#666">=&lt;/span> {
&lt;span style="color:#60a0b0;font-style:italic">-- Core&lt;/span>
&lt;span style="color:#4070a0">&amp;#34;roster&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;saslauth&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;tls&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;dialback&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;disco&amp;#34;&lt;/span>;
&lt;span style="color:#4070a0">&amp;#34;posix&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;ping&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;register&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;time&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;uptime&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;version&amp;#34;&lt;/span>;
&lt;span style="color:#60a0b0;font-style:italic">-- Security&lt;/span>
&lt;span style="color:#4070a0">&amp;#34;blocklist&amp;#34;&lt;/span>;
&lt;span style="color:#60a0b0;font-style:italic">-- Multi-device &amp;amp; mobile&lt;/span>
&lt;span style="color:#4070a0">&amp;#34;carbons&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;csi_simple&amp;#34;&lt;/span>;
&lt;span style="color:#4070a0">&amp;#34;smacks&amp;#34;&lt;/span>; &lt;span style="color:#60a0b0;font-style:italic">-- Stream Management (reliable delivery)&lt;/span>
&lt;span style="color:#4070a0">&amp;#34;cloud_notify&amp;#34;&lt;/span>; &lt;span style="color:#60a0b0;font-style:italic">-- Push notifications for mobile&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic">-- Message archive&lt;/span>
&lt;span style="color:#4070a0">&amp;#34;mam&amp;#34;&lt;/span>;
&lt;span style="color:#60a0b0;font-style:italic">-- User profiles &amp;amp; presence&lt;/span>
&lt;span style="color:#4070a0">&amp;#34;vcard_legacy&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;pep&amp;#34;&lt;/span>; &lt;span style="color:#4070a0">&amp;#34;bookmarks&amp;#34;&lt;/span>;
&lt;span style="color:#60a0b0;font-style:italic">-- Admin&lt;/span>
&lt;span style="color:#4070a0">&amp;#34;admin_shell&amp;#34;&lt;/span>;
}
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The ones I found matter most for a good mobile experience: &lt;code>carbons&lt;/code> syncs messages across all your devices instead of delivering to whichever one happened to be online. &lt;code>smacks&lt;/code> (Stream Management) handles flaky connections gracefully, so messages aren&amp;rsquo;t lost when your phone briefly drops signal. &lt;code>cloud_notify&lt;/code> enables push notifications so mobile clients don&amp;rsquo;t need a persistent connection, which is essential for battery life. And &lt;code>mam&lt;/code> (Message Archive Management) stores history server-side for search and cross-device sync.&lt;/p>
&lt;h3 id="security-settings">Security settings&lt;/h3>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-lua" data-lang="lua">c2s_require_encryption &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#007020;font-weight:bold">true&lt;/span>
s2s_require_encryption &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#007020;font-weight:bold">true&lt;/span>
s2s_secure_auth &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#007020;font-weight:bold">true&lt;/span>
authentication &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;internal_hashed&amp;#34;&lt;/span>
allow_registration &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#007020;font-weight:bold">false&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>All connections are encrypted and registration is disabled since I create accounts manually with &lt;code>prosodyctl&lt;/code>. I&amp;rsquo;ve enabled &lt;code>s2s_secure_auth&lt;/code>, which means Prosody will reject connections from servers with self-signed or misconfigured certificates. You&amp;rsquo;ll lose federation with some poorly configured servers, but if you&amp;rsquo;re self-hosting for privacy reasons it doesn&amp;rsquo;t make much sense to relax authentication for other people&amp;rsquo;s mistakes.&lt;/p>
&lt;h3 id="omemo-encryption">OMEMO encryption&lt;/h3>
&lt;p>TLS encrypts connections in transit, but the server itself can still read your messages. If you&amp;rsquo;re self-hosting, that means you&amp;rsquo;re trusting yourself, which is fine. But if other people use your server, or if you just want the belt-and-braces approach, OMEMO adds end-to-end encryption so that not even the server operator can read message content.&lt;/p>
&lt;p>OMEMO is built on the same encryption that Signal uses, so I&amp;rsquo;m comfortable trusting it. There&amp;rsquo;s nothing to configure on the server side either. OMEMO is handled entirely by the clients. Monal, Conversations, and Gajim all support it, and in most cases it&amp;rsquo;s enabled by default for new conversations. I&amp;rsquo;d recommend turning it on for everything and leaving it on.&lt;/p>
&lt;h3 id="message-archive">Message archive&lt;/h3>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-lua" data-lang="lua">archive_expires_after &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;1y&amp;#34;&lt;/span>
default_archive_policy &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#007020;font-weight:bold">true&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Messages are kept for a year and archiving is on by default. Clients can opt out per-conversation if they want.&lt;/p>
&lt;h3 id="http-for-file-uploads">HTTP for file uploads&lt;/h3>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-lua" data-lang="lua">http_interfaces &lt;span style="color:#666">=&lt;/span> { &lt;span style="color:#4070a0">&amp;#34;*&amp;#34;&lt;/span> }
http_ports &lt;span style="color:#666">=&lt;/span> { &lt;span style="color:#40a070">5280&lt;/span> }
https_ports &lt;span style="color:#666">=&lt;/span> { }
http_external_url &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;https://xmpp.example.com&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Prosody serves HTTP on port 5280 internally. I leave HTTPS to my reverse proxy (Caddy), which handles TLS termination. The &lt;code>http_external_url&lt;/code> tells Prosody what URL to hand clients when they upload files.&lt;/p>
&lt;h3 id="virtual-host-and-components">Virtual host and components&lt;/h3>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-lua" data-lang="lua">VirtualHost &lt;span style="color:#4070a0">&amp;#34;xmpp.example.com&amp;#34;&lt;/span>
ssl &lt;span style="color:#666">=&lt;/span> {
key &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;/etc/prosody/certs/xmpp.example.com.key&amp;#34;&lt;/span>;
certificate &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;/etc/prosody/certs/xmpp.example.com.crt&amp;#34;&lt;/span>;
}
Component &lt;span style="color:#4070a0">&amp;#34;conference.xmpp.example.com&amp;#34;&lt;/span> &lt;span style="color:#4070a0">&amp;#34;muc&amp;#34;&lt;/span>
modules_enabled &lt;span style="color:#666">=&lt;/span> { &lt;span style="color:#4070a0">&amp;#34;muc_mam&amp;#34;&lt;/span> }
restrict_room_creation &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;local&amp;#34;&lt;/span>
Component &lt;span style="color:#4070a0">&amp;#34;upload.xmpp.example.com&amp;#34;&lt;/span> &lt;span style="color:#4070a0">&amp;#34;http_file_share&amp;#34;&lt;/span>
http_file_share_size_limit &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#40a070">10485760&lt;/span> &lt;span style="color:#60a0b0;font-style:italic">-- 10 MB&lt;/span>
http_file_share_expires_after &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#40a070">2592000&lt;/span> &lt;span style="color:#60a0b0;font-style:italic">-- 30 days&lt;/span>
http_external_url &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;https://xmpp.example.com&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The MUC (Multi-User Chat) component gives you group chats with message history via &lt;code>muc_mam&lt;/code>. I restrict room creation to local users so random federated accounts can&amp;rsquo;t spin up rooms on my server.&lt;/p>
&lt;p>The file share component handles image and file uploads. A 10 MB limit and 30-day expiry keeps disk usage under control.&lt;/p>
&lt;h2 id="reverse-proxy-for-file-uploads">Reverse proxy for file uploads&lt;/h2>
&lt;p>Prosody&amp;rsquo;s HTTP port needs to be reachable from the internet for file uploads to work. I use Caddy:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">xmpp.example.com {
reverse_proxy xmpp:5280
}
&lt;/code>&lt;/pre>&lt;/div>&lt;p>When a client sends an image, Prosody hands it a URL like &lt;code>https://xmpp.example.com/upload/...&lt;/code> and the receiving client fetches it over HTTPS.&lt;/p>
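&lt;p>Behind the scenes this is XEP-0363 (HTTP File Upload): the client asks the upload component for a slot and gets back two URLs, one to PUT the file to and one to paste into the message. Roughly, with made-up ids and paths (a hand-written illustration, not captured traffic):&lt;/p>

```xml
&lt;!-- client requests an upload slot -->
&lt;iq type="get" to="upload.xmpp.example.com" id="u1">
  &lt;request xmlns="urn:xmpp:http:upload:0"
           filename="photo.jpg" size="123456" content-type="image/jpeg"/>
&lt;/iq>

&lt;!-- server replies with a PUT URL for the sender and a GET URL to share -->
&lt;iq type="result" from="upload.xmpp.example.com" id="u1">
  &lt;slot xmlns="urn:xmpp:http:upload:0">
    &lt;put url="https://xmpp.example.com/upload/abc123/photo.jpg"/>
    &lt;get url="https://xmpp.example.com/upload/abc123/photo.jpg"/>
  &lt;/slot>
&lt;/iq>
```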
&lt;h2 id="creating-accounts">Creating accounts&lt;/h2>
&lt;p>With registration disabled, accounts are created from the command line:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">docker &lt;span style="color:#007020">exec&lt;/span> -it xmpp prosodyctl adduser danny@xmpp.example.com
&lt;/code>&lt;/pre>&lt;/div>&lt;p>It prompts for a password. Done. Log in from any XMPP client.&lt;/p>
&lt;h2 id="firewall">Firewall&lt;/h2>
&lt;p>Open the XMPP ports:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">sudo ufw allow &lt;span style="color:#40a070">5222&lt;/span> comment &lt;span style="color:#4070a0">&amp;#39;XMPP client&amp;#39;&lt;/span>
sudo ufw allow &lt;span style="color:#40a070">5269&lt;/span> comment &lt;span style="color:#4070a0">&amp;#39;XMPP federation&amp;#39;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Open 80 and 443 for the reverse proxy too, if you haven&amp;rsquo;t already. If your server is behind a router, forward 5222 and 5269.&lt;/p>
&lt;h2 id="voice-and-video-calls">Voice and video calls&lt;/h2>
&lt;p>Text and file sharing work at this point. Voice and video calls need one more piece: a TURN/STUN server. Without it, clients behind NAT can&amp;rsquo;t establish direct media connections.&lt;/p>
&lt;p>I run &lt;a href="https://github.com/coturn/coturn">coturn&lt;/a> alongside Prosody. The two share a secret, and Prosody generates temporary credentials for clients automatically.&lt;/p>
&lt;p>Generate a shared secret:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">openssl rand -hex &lt;span style="color:#40a070">32&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The coturn &lt;code>docker-compose.yml&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="color:#007020;font-weight:bold">services&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">coturn&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">image&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>coturn/coturn:latest&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">container_name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>coturn&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">restart&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>unless-stopped&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">network_mode&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>host&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">volumes&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- ./turnserver.conf:/etc/coturn/turnserver.conf:ro&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">tmpfs&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- /var/lib/coturn&lt;span style="color:#bbb">
&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>It runs with &lt;code>network_mode: host&lt;/code> because TURN needs real network interfaces to handle NAT traversal. Docker&amp;rsquo;s port mapping breaks this.&lt;/p>
&lt;p>The &lt;code>turnserver.conf&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">listening-port=3478
tls-listening-port=5349
min-port=49152
max-port=49200
relay-threads=2
realm=xmpp.example.com
use-auth-secret
static-auth-secret=YOUR_SECRET_HERE
no-multicast-peers
no-cli
no-tlsv1
no-tlsv1_1
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
log-file=stdout
&lt;/code>&lt;/pre>&lt;/div>&lt;p>If your server is behind NAT, add:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">external-ip=YOUR_PUBLIC_IP/YOUR_PRIVATE_IP
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Then tell Prosody about it. Add &lt;code>&amp;quot;turn_external&amp;quot;&lt;/code> to your modules, and inside the &lt;code>VirtualHost&lt;/code> block:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-lua" data-lang="lua"> turn_external_host &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;xmpp.example.com&amp;#34;&lt;/span>
turn_external_port &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#40a070">3478&lt;/span>
turn_external_secret &lt;span style="color:#666">=&lt;/span> &lt;span style="color:#4070a0">&amp;#34;YOUR_SECRET_HERE&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Open the firewall ports:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">sudo ufw allow &lt;span style="color:#40a070">3478&lt;/span> comment &lt;span style="color:#4070a0">&amp;#39;STUN/TURN&amp;#39;&lt;/span>
sudo ufw allow &lt;span style="color:#40a070">5349&lt;/span> comment &lt;span style="color:#4070a0">&amp;#39;TURNS&amp;#39;&lt;/span>
sudo ufw allow 49152:49200/udp comment &lt;span style="color:#4070a0">&amp;#39;TURN relay&amp;#39;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Verify with &lt;code>docker exec xmpp prosodyctl check turn&lt;/code>.&lt;/p>
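&lt;p>If you&amp;rsquo;re wondering what &lt;code>turn_external&lt;/code> actually does with that secret: it follows the TURN REST API credential convention that coturn&amp;rsquo;s &lt;code>use-auth-secret&lt;/code> implements. The username is an expiry timestamp and the password is an HMAC-SHA1 of that username under the shared secret, so neither side stores per-user TURN passwords. A rough sketch in Python (the secret is the placeholder from the config, not a real one):&lt;/p>

```python
import base64
import hashlib
import hmac
import time

# Placeholder secret matching the turnserver.conf example above.
SECRET = b"YOUR_SECRET_HERE"

def turn_credentials(secret, ttl=86400, now=None):
    """Derive ephemeral TURN credentials from the shared secret."""
    if now is None:
        now = int(time.time())
    username = str(now + ttl)  # credentials are valid until this timestamp
    digest = hmac.new(secret, username.encode(), hashlib.sha1).digest()
    password = base64.b64encode(digest).decode()
    return username, password

user, pwd = turn_credentials(SECRET)
print(user, pwd)
```

&lt;p>Coturn recomputes the same HMAC on its side and accepts the pair until the timestamp passes, which is why the two services only need to share that one secret.&lt;/p>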
&lt;h2 id="clients">Clients&lt;/h2>
&lt;p>On iOS I went with &lt;a href="https://monal-im.org/">Monal&lt;/a>, which is open source and supports all the modern XEPs. Push notifications work well. On Android, &lt;a href="https://conversations.im/">Conversations&lt;/a> seems to be the go-to. On desktop, &lt;a href="https://gajim.org/">Gajim&lt;/a> covers Linux and Windows, and Monal has a macOS build.&lt;/p>
&lt;p>All of them support OMEMO encryption, file sharing, group chats, and voice/video calls.&lt;/p>
&lt;h2 id="verifying-your-setup">Verifying your setup&lt;/h2>
&lt;p>Prosody has solid built-in diagnostics:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">docker &lt;span style="color:#007020">exec&lt;/span> xmpp prosodyctl check
&lt;/code>&lt;/pre>&lt;/div>&lt;p>This checks DNS records, TLS certificates, connectivity, and module configuration. Fix anything it flags. The error messages are genuinely helpful.&lt;/p>
&lt;p>The &lt;a href="https://compliance.conversations.im/">XMPP Compliance Tester&lt;/a> is worth running too. Mine scored above 90% after getting the config right.&lt;/p>
&lt;h2 id="final-thoughts">Final thoughts&lt;/h2>
&lt;p>The whole setup amounts to two small Docker containers and one reverse proxy entry, and that covers everything: Prosody, file uploads, message archive, push notifications, group chats, voice calls.&lt;/p>
&lt;p>I still use Signal for most day-to-day conversations and I&amp;rsquo;m not planning to stop. But having my own XMPP server means I&amp;rsquo;m not entirely dependent on any single service. I can message anyone on any XMPP server, not just people who signed up to the same one. It&amp;rsquo;s a nice fallback to have.&lt;/p>
&lt;p>If you&amp;rsquo;re already running Docker on a server somewhere, it&amp;rsquo;s a good weekend project.&lt;/p></description></item><item><title>What Your Bluetooth Devices Reveal About You</title><link>https://blog.dmcc.io/journal/2026-bluetooth-privacy-bluehood/</link><pubDate>Sun, 18 Jan 2026 16:00:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/2026-bluetooth-privacy-bluehood/#2026-01-18</guid><description>&lt;p>If you&amp;rsquo;ve read much of this blog, you&amp;rsquo;ll know I have a &lt;a href="https://blog.dmcc.io/privacy">thing for privacy&lt;/a>. Whether it&amp;rsquo;s &lt;a href="https://blog.dmcc.io/journal/tor-relay-onion-location/">running my blog over Tor&lt;/a>, &lt;a href="https://blog.dmcc.io/journal/tailscale-adguard-dns/">blocking ads network-wide with AdGuard&lt;/a>, or &lt;a href="https://blog.dmcc.io/journal/proton-pass-cli-linux-secrets/">keeping secrets out of my dotfiles with Proton Pass&lt;/a>, I tend to think carefully about what data I&amp;rsquo;m exposing and to whom.&lt;/p>
&lt;p>Last weekend I built &lt;a href="https://github.com/dannymcc/bluehood">Bluehood&lt;/a>, a Bluetooth scanner that tracks nearby devices and analyses their presence patterns. The project was heavily assisted by AI, but the motivation was entirely human: I wanted to understand what information I was leaking just by having Bluetooth enabled.&lt;/p>
&lt;p>The timing felt right. A few days ago, researchers at KU Leuven disclosed &lt;a href="https://whisperpair.eu/">WhisperPair&lt;/a> (CVE-2025-36911), a critical vulnerability affecting hundreds of millions of Bluetooth audio devices. The flaw allows attackers to hijack headphones and earbuds remotely, eavesdrop on conversations, and track locations through Google&amp;rsquo;s Find Hub network. It&amp;rsquo;s a stark reminder that Bluetooth isn&amp;rsquo;t the invisible, harmless signal we treat it as.&lt;/p>
&lt;h2 id="the-problem-nobody-talks-about">The Problem Nobody Talks About&lt;/h2>
&lt;p>We&amp;rsquo;ve normalised the idea that Bluetooth is always on. Phones, laptops, smartwatches, headphones, cars, and even medical devices constantly broadcast their presence. The standard response to privacy concerns is usually &amp;ldquo;nothing to hide, nothing to fear.&amp;rdquo;&lt;/p>
&lt;p>But here&amp;rsquo;s the thing: even if you have nothing to hide, you&amp;rsquo;re still giving away information you probably don&amp;rsquo;t intend to.&lt;/p>
&lt;p>From my home office, running Bluehood in passive mode (just listening, never connecting), I could detect:&lt;/p>
&lt;ul>
&lt;li>When delivery vehicles arrived, and whether it was the same driver each time&lt;/li>
&lt;li>The daily patterns of my neighbours based on their phones and wearables&lt;/li>
&lt;li>Which devices consistently appeared together (someone&amp;rsquo;s phone and smartwatch, for instance)&lt;/li>
&lt;li>The exact times certain people were home, at work, or elsewhere&lt;/li>
&lt;/ul>
&lt;p>None of this required any special equipment. A Raspberry Pi with a Bluetooth adapter would do the job. So would most laptops.&lt;/p>
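&lt;p>The &amp;ldquo;devices that consistently appear together&amp;rdquo; observation falls out of the scan log almost for free: group sightings by scan round and count pairs. A sketch of the idea with hypothetical device IDs (not Bluehood&amp;rsquo;s actual code):&lt;/p>

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sightings a passive scanner might log: (scan_round, device_id).
sightings = [
    (1, "phone-A"), (1, "watch-A"),
    (2, "phone-A"), (2, "watch-A"), (2, "phone-B"),
    (3, "phone-A"), (3, "watch-A"),
    (4, "phone-B"),
]

def co_occurrence(sightings):
    """Count how often each pair of devices shows up in the same scan round."""
    rounds = defaultdict(set)
    for rnd, dev in sightings:
        rounds[rnd].add(dev)
    pairs = defaultdict(int)
    for devs in rounds.values():
        for a, b in combinations(sorted(devs), 2):
            pairs[(a, b)] += 1
    return dict(pairs)

counts = co_occurrence(sightings)
print(counts[("phone-A", "watch-A")])  # 3: together in three of four rounds
```

&lt;p>Run over weeks of real sightings, the high-count pairs are almost certainly carried by the same person.&lt;/p>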
&lt;h2 id="devices-you-cant-control">Devices You Can&amp;rsquo;t Control&lt;/h2>
&lt;p>What concerns me most isn&amp;rsquo;t that people choose to have Bluetooth enabled. It&amp;rsquo;s that many devices don&amp;rsquo;t give users the option to disable it.&lt;/p>
&lt;p>Hearing aids are a good example. Modern hearing aids often use Bluetooth Low Energy so audiologists can connect and adjust settings or run diagnostics. Pacemakers and other implanted medical devices sometimes broadcast BLE signals for the same reason. The user can&amp;rsquo;t simply turn this off.&lt;/p>
&lt;p>Then there are vehicles. Delivery vans, police cars, ambulances, logistics fleets, and trains often have Bluetooth-enabled systems for fleet management, diagnostics, or driver assistance. These broadcast continuously, and the drivers have no control over it.&lt;/p>
&lt;p>Even consumer devices aren&amp;rsquo;t always straightforward. Many smartwatches need Bluetooth to function at all. GPS collars for pets require it to communicate with the owner&amp;rsquo;s phone. Some fitness equipment won&amp;rsquo;t work without it.&lt;/p>
&lt;h2 id="privacy-tools-that-need-you-to-broadcast">Privacy Tools That Need You to Broadcast&lt;/h2>
&lt;p>What&amp;rsquo;s interesting is that some of the most privacy-focused projects actually require Bluetooth to be enabled.&lt;/p>
&lt;p>&lt;a href="https://briarproject.org/">Briar&lt;/a> is a peer-to-peer messaging app designed for activists and journalists operating in hostile environments. It doesn&amp;rsquo;t rely on central servers, and when the internet goes down, it can sync messages via Bluetooth or Wi-Fi mesh networks. It&amp;rsquo;s a genuinely useful tool for maintaining communications during internet blackouts or in areas with heavy surveillance.&lt;/p>
&lt;p>&lt;a href="https://bitchat.free/">BitChat&lt;/a> takes this even further. It&amp;rsquo;s a decentralised messaging app that operates entirely over Bluetooth mesh networks—no internet required, no servers, no phone numbers. Each device acts as both client and relay, automatically discovering peers and bouncing messages across multiple hops to extend the network&amp;rsquo;s reach. The project explicitly targets scenarios like protests, natural disasters, and regions with limited or censored connectivity.&lt;/p>
&lt;p>Both are genuinely excellent projects solving real problems. But to use them, you need Bluetooth enabled. And every device with Bluetooth enabled is broadcasting its presence to anyone nearby who cares to listen.&lt;/p>
&lt;p>This creates a strange tension. Tools designed to protect privacy often require a feature that compromises privacy in other ways.&lt;/p>
&lt;h2 id="what-metadata-reveals">What Metadata Reveals&lt;/h2>
&lt;p>People often underestimate what patterns reveal. A bad actor with a Bluetooth scanner doesn&amp;rsquo;t need to know your name. They just need to observe behaviour over time.&lt;/p>
&lt;p>Consider what someone could learn by monitoring Bluetooth signals in a residential area for a few weeks:&lt;/p>
&lt;ul>
&lt;li>When is the house typically empty?&lt;/li>
&lt;li>Does someone visit every Thursday afternoon?&lt;/li>
&lt;li>Is there a regular pattern that suggests shift work?&lt;/li>
&lt;li>When do the children come home from school?&lt;/li>
&lt;li>Which homes have the same delivery driver, suggesting similar shopping habits?&lt;/li>
&lt;/ul>
&lt;p>The same logs cut both ways. If there&amp;rsquo;s damage to your property, you could go back through them and see which devices were in range at the time: a smartwatch on a passing dog-walker, a phone in someone&amp;rsquo;s pocket, a vehicle with fleet tracking.&lt;/p>
&lt;p>These might seem like edge cases, but they illustrate a broader point: we&amp;rsquo;re constantly leaving digital breadcrumbs we don&amp;rsquo;t even think about.&lt;/p>
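&lt;p>To make this concrete, here&amp;rsquo;s a toy sketch of how a pile of timestamped sightings turns into a weekly pattern. The data and field names are made up for illustration; this isn&amp;rsquo;t Bluehood&amp;rsquo;s actual schema.&lt;/p>

```python
from collections import defaultdict
from datetime import datetime

# Synthetic sighting log: (device_id, ISO timestamp) pairs such as a
# passive scanner might record. Purely illustrative data.
sightings = [
    ("watch-a1", "2026-03-05T07:58:00"),
    ("watch-a1", "2026-03-12T08:02:00"),
    ("watch-a1", "2026-03-19T07:55:00"),
    ("phone-b2", "2026-03-05T18:30:00"),
]

def weekly_pattern(rows):
    """Bucket sightings by (device, weekday); recurring buckets are patterns."""
    counts = defaultdict(int)
    for device, ts in rows:
        dt = datetime.fromisoformat(ts)
        counts[(device, dt.strftime("%A"))] += 1
    return dict(counts)

pattern = weekly_pattern(sightings)
# "watch-a1" appears three Thursdays running: a recurring weekly visitor.
print(pattern[("watch-a1", "Thursday")])  # -> 3
```

&lt;p>Three Thursdays in a row is all it takes to label a device a recurring visitor, with no idea who owns it.&lt;/p>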
&lt;h2 id="what-bluehood-actually-does">What Bluehood Actually Does&lt;/h2>
&lt;p>Bluehood is a Python application that runs on any Linux machine with a Bluetooth adapter. It continuously scans for nearby devices, identifies them by vendor and BLE service UUIDs, and tracks when they appear and disappear.&lt;/p>
&lt;p>The key features:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Passive scanning&lt;/strong>: It only listens. It doesn&amp;rsquo;t try to connect or interact with any device.&lt;/li>
&lt;li>&lt;strong>Device classification&lt;/strong>: Phones, audio devices, wearables, vehicles, IoT devices, and more, identified by BLE fingerprints.&lt;/li>
&lt;li>&lt;strong>Pattern analysis&lt;/strong>: Hourly and daily heatmaps, dwell time tracking, and detection of correlated devices.&lt;/li>
&lt;li>&lt;strong>Filtering&lt;/strong>: Randomised MAC addresses (used by modern phones for privacy) are detected and hidden from the main view.&lt;/li>
&lt;li>&lt;strong>Web dashboard&lt;/strong>: A simple interface for monitoring and analysis.&lt;/li>
&lt;/ul>
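&lt;p>I won&amp;rsquo;t reproduce Bluehood&amp;rsquo;s fingerprinting here, but one common heuristic for spotting a randomised address is the locally-administered bit of the first MAC octet. A minimal sketch:&lt;/p>

```python
def is_locally_administered(mac: str) -> bool:
    """Heuristic: the 0x02 bit of the first octet marks a locally
    administered (typically randomised) MAC address."""
    first_octet = int(mac.split(":")[0], 16)
    # the low two bits are 2 or 3 exactly when the 0x02 bit is set
    return first_octet % 4 >= 2

print(is_locally_administered("5A:3C:1D:9F:00:01"))  # randomised-looking
print(is_locally_administered("F4:5C:89:12:34:56"))  # vendor-assigned OUI
```

&lt;p>Addresses that trip this check churn constantly, so hiding them keeps the main view focused on devices that can actually be tracked over time.&lt;/p>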
&lt;p>You can run it in Docker or install it directly. It stores data in SQLite and optionally sends push notifications via &lt;a href="https://ntfy.sh">ntfy.sh&lt;/a> when watched devices arrive or leave.&lt;/p>
&lt;h2 id="running-it">Running It&lt;/h2>
&lt;p>The simplest way to try Bluehood is with Docker:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">git clone https://github.com/dannymcc/bluehood.git
&lt;span style="color:#007020">cd&lt;/span> bluehood
docker compose up -d
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The dashboard is available at &lt;code>http://localhost:8080&lt;/code>.&lt;/p>
&lt;p>If you prefer a manual install:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">sudo pacman -S bluez bluez-utils python-pip &lt;span style="color:#60a0b0;font-style:italic"># Arch&lt;/span>
sudo apt install bluez python3-pip &lt;span style="color:#60a0b0;font-style:italic"># Debian/Ubuntu&lt;/span>
git clone https://github.com/dannymcc/bluehood.git
&lt;span style="color:#007020">cd&lt;/span> bluehood
pip install -e .
sudo bluehood
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Bluetooth scanning needs elevated privileges. You can either run as root, grant capabilities to Python, or use the included systemd service for always-on monitoring.&lt;/p>
&lt;h2 id="the-point-of-all-this">The Point of All This&lt;/h2>
&lt;p>Bluehood isn&amp;rsquo;t a hacking tool. It&amp;rsquo;s an educational demonstration of what&amp;rsquo;s possible with commodity hardware and a bit of patience.&lt;/p>
&lt;p>I built it because I wanted to see for myself what I was broadcasting. The results were sobering. Even with no malicious intent, anyone with basic technical knowledge could learn a lot about my household just by sitting in their car and running a script.&lt;/p>
&lt;p>This isn&amp;rsquo;t about paranoia. It&amp;rsquo;s about understanding the trade-offs we make when we leave wireless radios enabled on our devices. For some use cases, Bluetooth is essential. For others, it&amp;rsquo;s just convenience. Being aware of what you&amp;rsquo;re exposing is the first step to making informed decisions about which category your devices fall into.&lt;/p>
&lt;p>If you try Bluehood and it makes you think twice about your own Bluetooth habits, it&amp;rsquo;s done its job.&lt;/p>
&lt;hr>
&lt;p>The source code is available on &lt;a href="https://github.com/dannymcc/bluehood">GitHub&lt;/a>. Feedback and contributions welcome.&lt;/p></description></item><item><title>Network-Wide Ad Blocking with Tailscale and AdGuard Home</title><link>https://blog.dmcc.io/journal/tailscale-adguard-dns/</link><pubDate>Sat, 10 Jan 2026 00:00:10 +0000</pubDate><guid>https://blog.dmcc.io/journal/tailscale-adguard-dns/#2026-01-05</guid><description>&lt;p>One of the frustrations with traditional network-wide ad blocking is that it only works when you&amp;rsquo;re at home. The moment you leave your network, you&amp;rsquo;re back to seeing ads and trackers on every device. But if you&amp;rsquo;re already running Tailscale, there&amp;rsquo;s a simple fix: run AdGuard Home on a device in your tailnet and point all your devices at it.&lt;/p>
&lt;p>The result? Every device on your Tailscale network gets full ad blocking and secure DNS resolution, whether you&amp;rsquo;re at home, in a coffee shop, or on the other side of the world.&lt;/p>
&lt;h2 id="why-this-setup">Why This Setup?&lt;/h2>
&lt;p>I&amp;rsquo;ve been &lt;a href="https://blog.dmcc.io/journal/2025_my_privacy_reboot/">taking digital privacy more seriously&lt;/a> in recent years. I prefer &lt;a href="https://blog.dmcc.io/contact/">encrypted email via PGP&lt;/a>, block ads and trackers wherever possible, and generally try to &lt;a href="https://blog.dmcc.io/privacy/">minimise the data I leak&lt;/a> online.&lt;/p>
&lt;p>I&amp;rsquo;ve been running Pi-hole for years, but it always felt like a half-measure. It worked great at home, but my phone and laptop were unprotected the moment I stepped outside. I could have set up a VPN back to my home network, but that felt clunky.&lt;/p>
&lt;p>With Tailscale, the solution is elegant. Every device is already connected to my tailnet, so all I need is a DNS server that&amp;rsquo;s accessible from anywhere on that network. AdGuard Home fits the bill perfectly. It&amp;rsquo;s lighter than Pi-hole, has a cleaner interface, and supports DNS-over-HTTPS out of the box for upstream queries.&lt;/p>
&lt;p>The other benefit is that this setup preserves Tailscale&amp;rsquo;s Magic DNS. I can still access my tailnet devices by name (like &lt;code>server.tail1234.ts.net&lt;/code>), while all other DNS queries go through AdGuard for secure resolution and ad blocking.&lt;/p>
&lt;h2 id="what-youll-need">What You&amp;rsquo;ll Need&lt;/h2>
&lt;ul>
&lt;li>A device on your Tailscale network that&amp;rsquo;s always on (a small home server, Raspberry Pi, or even an old laptop)&lt;/li>
&lt;li>AdGuard Home installed on that device&lt;/li>
&lt;li>Access to your Tailscale admin console&lt;/li>
&lt;/ul>
&lt;h2 id="installing-adguard-home">Installing AdGuard Home&lt;/h2>
&lt;p>SSH into your always-on device and run the official installer:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">curl -s -S -L https://raw.githubusercontent.com/AdguardTeam/AdGuardHome/master/scripts/install.sh | sudo bash
&lt;/code>&lt;/pre>&lt;/div>&lt;p>This installs AdGuard Home to &lt;code>/opt/AdGuardHome&lt;/code> and sets it up as a systemd service.&lt;/p>
&lt;p>Once installed, open the setup wizard in your browser at &lt;code>http://&amp;lt;tailscale-ip&amp;gt;:3000&lt;/code>. During setup:&lt;/p>
&lt;ol>
&lt;li>Set the &lt;strong>DNS listen address&lt;/strong> to your device&amp;rsquo;s Tailscale IP (e.g., &lt;code>100.x.x.x&lt;/code>)&lt;/li>
&lt;li>Set the &lt;strong>admin interface&lt;/strong> to the same Tailscale IP on port 3000&lt;/li>
&lt;li>Create an admin username and password&lt;/li>
&lt;/ol>
&lt;p>The key here is binding to your Tailscale IP rather than &lt;code>0.0.0.0&lt;/code>. This ensures AdGuard only listens on your tailnet, not on your local network or the public internet.&lt;/p>
&lt;h2 id="configuring-secure-upstream-dns">Configuring Secure Upstream DNS&lt;/h2>
&lt;p>By default, AdGuard will use your system&amp;rsquo;s DNS servers for upstream queries. That&amp;rsquo;s not ideal: those queries typically leave your network in plaintext, so your ISP can still see every domain you resolve. We want encrypted DNS all the way through.&lt;/p>
&lt;p>In AdGuard Home, go to &lt;strong>Settings → DNS settings → Upstream DNS servers&lt;/strong> and replace the defaults with:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">https://dns.quad9.net/dns-query
https://dns11.quad9.net/dns-query
tls://dns.quad9.net
&lt;/code>&lt;/pre>&lt;/div>&lt;p>These are Quad9&amp;rsquo;s DNS-over-HTTPS and DNS-over-TLS endpoints. Quad9 is a privacy-focused resolver that also blocks known malicious domains.&lt;/p>
&lt;p>For the &lt;strong>Bootstrap DNS servers&lt;/strong> (used to resolve the upstream hostnames), add:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">9.9.9.9
149.112.112.112
&lt;/code>&lt;/pre>&lt;/div>&lt;p>I&amp;rsquo;d also recommend enabling &lt;strong>DNSSEC&lt;/strong> validation and &lt;strong>Optimistic caching&lt;/strong> in the same settings page for better security and performance.&lt;/p>
&lt;h2 id="pointing-tailscale-at-your-dns-server">Pointing Tailscale at Your DNS Server&lt;/h2>
&lt;p>Now the easy part. Open your &lt;a href="https://login.tailscale.com/admin/dns">Tailscale admin console&lt;/a> and:&lt;/p>
&lt;ol>
&lt;li>Add your device&amp;rsquo;s Tailscale IP as a &lt;strong>Global nameserver&lt;/strong>&lt;/li>
&lt;li>Enable &lt;strong>Override local DNS&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>That&amp;rsquo;s it. Every device on your tailnet will now use your AdGuard instance for DNS resolution.&lt;/p>
&lt;h2 id="the-benefits">The Benefits&lt;/h2>
&lt;p>This setup gives you:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Ad and tracker blocking everywhere&lt;/strong>, not just at home&lt;/li>
&lt;li>&lt;strong>Encrypted DNS queries&lt;/strong>, so your ISP can&amp;rsquo;t see what domains you&amp;rsquo;re resolving&lt;/li>
&lt;li>&lt;strong>Malware protection&lt;/strong> via Quad9, which blocks known malicious domains at the DNS level&lt;/li>
&lt;li>&lt;strong>A single dashboard&lt;/strong> to view query logs and statistics for all your devices in one place&lt;/li>
&lt;li>&lt;strong>No client configuration&lt;/strong> since Tailscale pushes the DNS settings automatically&lt;/li>
&lt;/ul>
&lt;p>If you do keep logging enabled, the query logs can be useful for identifying apps that are phoning home or misbehaving. But there&amp;rsquo;s a trade-off here.&lt;/p>
&lt;h2 id="a-note-on-logging">A Note on Logging&lt;/h2>
&lt;p>By default, AdGuard Home logs every DNS query from every device. That&amp;rsquo;s useful for debugging, but it felt uncomfortable to me. The majority of my family use my tailnet, and I have no interest in knowing what sites they&amp;rsquo;re visiting. I also don&amp;rsquo;t need my own traffic logged if it isn&amp;rsquo;t necessary.&lt;/p>
&lt;p>I&amp;rsquo;ve turned off query logging entirely in &lt;strong>Settings &amp;gt; General settings &amp;gt; Query log configuration&lt;/strong>, and disabled statistics as well. Ad blocking still works without any of this data being stored.&lt;/p>
&lt;h2 id="a-note-on-reliability">A Note on Reliability&lt;/h2>
&lt;p>Since all your devices depend on this DNS server, you&amp;rsquo;ll want to make sure it&amp;rsquo;s reliable. If the device running AdGuard goes offline, DNS resolution will fail for your entire tailnet.&lt;/p>
&lt;p>A few options to mitigate this:&lt;/p>
&lt;ol>
&lt;li>Run AdGuard on a device that&amp;rsquo;s always on (a dedicated home server or cloud VPS)&lt;/li>
&lt;li>Add a fallback DNS server in Tailscale (though this bypasses AdGuard when your server is down)&lt;/li>
&lt;li>Run a second AdGuard instance on another device and add both as nameservers&lt;/li>
&lt;/ol>
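&lt;p>A simple way to catch an outage early is a reachability probe run from cron or a monitoring box. This is a generic sketch, not AdGuard-specific; it assumes your instance also answers DNS over TCP on port 53, which AdGuard Home does by default.&lt;/p>

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener. In practice you would point
# this at your AdGuard device's Tailscale IP, e.g. is_reachable("100.x.x.x", 53).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
print(is_reachable("127.0.0.1", port))  # -> True
listener.close()
```

&lt;p>Wire the False branch up to a notification and you&amp;rsquo;ll know DNS is down before the rest of the household does.&lt;/p>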
&lt;p>For my setup, I&amp;rsquo;m running it on a small Intel NUC that&amp;rsquo;s always on anyway. It&amp;rsquo;s been rock solid so far.&lt;/p>
&lt;h2 id="wrapping-up">Wrapping Up&lt;/h2>
&lt;p>This is one of those setups that takes ten minutes and then quietly improves your life. Every device on my tailnet now gets ad blocking and secure DNS without any per-device configuration. The combination of Tailscale&amp;rsquo;s networking and AdGuard&amp;rsquo;s filtering is genuinely elegant.&lt;/p>
&lt;p>If you&amp;rsquo;re already running Tailscale, this is worth the effort.&lt;/p></description></item><item><title>Making My Blog Available on Tor</title><link>https://blog.dmcc.io/journal/tor-relay-onion-location/</link><pubDate>Tue, 06 Jan 2026 00:00:10 +0000</pubDate><guid>https://blog.dmcc.io/journal/tor-relay-onion-location/#2026-01-06</guid><description>&lt;p>I wanted to make this blog available as a Tor hidden service. Not because I expect many visitors via Tor, but because it felt like a small contribution to a more private web. If someone wants to read my posts without revealing their IP address, they should be able to.&lt;/p>
&lt;p>The setup has two parts: a Docker container running Tor and Nginx on my home server, and an HTTP header on the regular site that advertises the onion address.&lt;/p>
&lt;h2 id="the-docker-setup">The Docker setup&lt;/h2>
&lt;p>My blog is hosted on Netlify, so the hidden service works as a proxy. Tor users connect to the onion address, and the container fetches the content from the public site on their behalf.&lt;/p>
&lt;p>Here&amp;rsquo;s the &lt;code>docker-compose.yml&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="color:#007020;font-weight:bold">services&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">tor-dmcc-sites&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">build&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>.&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">container_name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>tor-dmcc-sites&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">restart&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>unless-stopped&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">volumes&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- ./hidden_service_blog:/var/lib/tor/hidden_service_blog&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- ./hidden_service_website:/var/lib/tor/hidden_service_website&lt;span style="color:#bbb">
&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The volumes persist the hidden service keys. Once Tor generates your &lt;code>.onion&lt;/code> address, you want to keep those keys. Otherwise you&amp;rsquo;ll get a new address whenever the container is recreated.&lt;/p>
&lt;p>The &lt;code>Dockerfile&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-dockerfile" data-lang="dockerfile">&lt;span style="color:#007020;font-weight:bold">FROM&lt;/span>&lt;span style="color:#4070a0"> debian:bookworm-slim&lt;/span>&lt;span style="">
&lt;/span>&lt;span style="">
&lt;/span>&lt;span style="">&lt;/span>&lt;span style="color:#007020;font-weight:bold">RUN&lt;/span> apt-get update &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> apt-get install -y tor nginx curl &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> apt-get clean &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> rm -rf /var/lib/apt/lists/*&lt;span style="">
&lt;/span>&lt;span style="">
&lt;/span>&lt;span style="">&lt;/span>&lt;span style="color:#007020;font-weight:bold">RUN&lt;/span> mkdir -p /var/lib/tor/hidden_service_blog &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> mkdir -p /var/lib/tor/hidden_service_website &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> chown -R debian-tor:debian-tor /var/lib/tor &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> chmod &lt;span style="color:#40a070">700&lt;/span> /var/lib/tor/hidden_service_blog &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span> chmod &lt;span style="color:#40a070">700&lt;/span> /var/lib/tor/hidden_service_website&lt;span style="">
&lt;/span>&lt;span style="">
&lt;/span>&lt;span style="">&lt;/span>&lt;span style="color:#007020;font-weight:bold">COPY&lt;/span> torrc /etc/tor/torrc&lt;span style="">
&lt;/span>&lt;span style="">&lt;/span>&lt;span style="color:#007020;font-weight:bold">COPY&lt;/span> nginx.conf /etc/nginx/nginx.conf&lt;span style="">
&lt;/span>&lt;span style="">&lt;/span>&lt;span style="color:#007020;font-weight:bold">COPY&lt;/span> entrypoint.sh /entrypoint.sh&lt;span style="">
&lt;/span>&lt;span style="">
&lt;/span>&lt;span style="">&lt;/span>&lt;span style="color:#007020;font-weight:bold">RUN&lt;/span> chmod +x /entrypoint.sh&lt;span style="">
&lt;/span>&lt;span style="">
&lt;/span>&lt;span style="">&lt;/span>&lt;span style="color:#007020;font-weight:bold">CMD&lt;/span> [&lt;span style="color:#4070a0">&amp;#34;/entrypoint.sh&amp;#34;&lt;/span>]&lt;span style="">
&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The &lt;code>torrc&lt;/code> configuration defines the hidden services:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#60a0b0;font-style:italic"># Disable SOCKS (we only need hidden services)&lt;/span>
SocksPort &lt;span style="color:#40a070">0&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Log to stdout for docker logs&lt;/span>
Log notice stdout
&lt;span style="color:#60a0b0;font-style:italic"># Hidden service for blog.dmcc.io&lt;/span>
HiddenServiceDir /var/lib/tor/hidden_service_blog/
HiddenServicePort &lt;span style="color:#40a070">80&lt;/span> 127.0.0.1:8081
&lt;span style="color:#60a0b0;font-style:italic"># Hidden service for dmcc.io (main website)&lt;/span>
HiddenServiceDir /var/lib/tor/hidden_service_website/
HiddenServicePort &lt;span style="color:#40a070">80&lt;/span> 127.0.0.1:8082
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Each hidden service points to a local port where Nginx is listening.&lt;/p>
&lt;p>The &lt;code>nginx.conf&lt;/code> proxies requests to the public sites:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-nginx" data-lang="nginx">&lt;span style="color:#007020;font-weight:bold">user&lt;/span> &lt;span style="color:#4070a0">www-data&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">worker_processes&lt;/span> &lt;span style="color:#4070a0">auto&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">pid&lt;/span> &lt;span style="color:#4070a0">/run/nginx.pid&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">error_log&lt;/span> &lt;span style="color:#4070a0">/var/log/nginx/error.log&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">events&lt;/span> {
&lt;span style="color:#007020;font-weight:bold">worker_connections&lt;/span> &lt;span style="color:#40a070">768&lt;/span>;
}
&lt;span style="color:#007020;font-weight:bold">http&lt;/span> {
&lt;span style="color:#007020;font-weight:bold">sendfile&lt;/span> &lt;span style="color:#60add5">on&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">tcp_nopush&lt;/span> &lt;span style="color:#60add5">on&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">include&lt;/span> &lt;span style="color:#4070a0">/etc/nginx/mime.types&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">default_type&lt;/span> &lt;span style="color:#4070a0">application/octet-stream&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">resolver&lt;/span> &lt;span style="color:#40a070">1&lt;/span>&lt;span style="color:#4070a0">.1.1.1&lt;/span> &lt;span style="color:#40a070">8&lt;/span>&lt;span style="color:#4070a0">.8.8.8&lt;/span> &lt;span style="color:#4070a0">valid=300s&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">resolver_timeout&lt;/span> &lt;span style="color:#4070a0">5s&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">server&lt;/span> {
&lt;span style="color:#007020;font-weight:bold">listen&lt;/span> &lt;span style="color:#40a070">8081&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">location&lt;/span> &lt;span style="color:#4070a0">/&lt;/span> {
&lt;span style="color:#007020;font-weight:bold">proxy_pass&lt;/span> &lt;span style="color:#4070a0">https://blog.dmcc.io&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">proxy_set_header&lt;/span> &lt;span style="color:#4070a0">Host&lt;/span> &lt;span style="color:#4070a0">blog.dmcc.io&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">proxy_ssl_server_name&lt;/span> &lt;span style="color:#60add5">on&lt;/span>;
}
}
&lt;span style="color:#007020;font-weight:bold">server&lt;/span> {
&lt;span style="color:#007020;font-weight:bold">listen&lt;/span> &lt;span style="color:#40a070">8082&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">location&lt;/span> &lt;span style="color:#4070a0">/&lt;/span> {
&lt;span style="color:#007020;font-weight:bold">proxy_pass&lt;/span> &lt;span style="color:#4070a0">https://dmcc.io&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">proxy_set_header&lt;/span> &lt;span style="color:#4070a0">Host&lt;/span> &lt;span style="color:#4070a0">dmcc.io&lt;/span>;
&lt;span style="color:#007020;font-weight:bold">proxy_ssl_server_name&lt;/span> &lt;span style="color:#60add5">on&lt;/span>;
}
}
}
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Finally, the &lt;code>entrypoint.sh&lt;/code> starts both services:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#007020">#!/bin/bash
&lt;/span>&lt;span style="color:#007020">&lt;/span>&lt;span style="color:#007020">set&lt;/span> -e
chown -R debian-tor:debian-tor /var/lib/tor/hidden_service_blog
chown -R debian-tor:debian-tor /var/lib/tor/hidden_service_website
chmod &lt;span style="color:#40a070">700&lt;/span> /var/lib/tor/hidden_service_blog
chmod &lt;span style="color:#40a070">700&lt;/span> /var/lib/tor/hidden_service_website
nginx
&lt;span style="color:#007020">exec&lt;/span> su -s /bin/sh debian-tor -c &lt;span style="color:#4070a0">&amp;#34;tor -f /etc/tor/torrc&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>After running &lt;code>docker compose up -d&lt;/code>, the onion address is generated in the hidden service directory:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">cat hidden_service_blog/hostname
&lt;/code>&lt;/pre>&lt;/div>&lt;p>This gives you something like &lt;code>yxtxzre2jg3zhzlnqxrifaqbkuyd3e3sdgbjqf3tisrtypsbyoclrzqd.onion&lt;/code>.&lt;/p>
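&lt;p>v3 onion addresses all share the same shape: 56 base32 characters followed by &lt;code>.onion&lt;/code>. A shallow format check (it doesn&amp;rsquo;t validate the embedded checksum) can be handy in scripts:&lt;/p>

```python
import re

# v3 onion addresses: 56 lowercase base32 characters (a-z, 2-7) + ".onion"
V3_ONION = re.compile(r"^[a-z2-7]{56}\.onion$")

def looks_like_v3_onion(addr: str) -> bool:
    """Shallow shape check only; does not verify the embedded checksum."""
    return bool(V3_ONION.match(addr))

print(looks_like_v3_onion(
    "yxtxzre2jg3zhzlnqxrifaqbkuyd3e3sdgbjqf3tisrtypsbyoclrzqd.onion"))  # True
print(looks_like_v3_onion("facebookcorewwwi.onion"))  # old v2 format: too short
```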
&lt;h2 id="the-onion-location-header">The Onion-Location header&lt;/h2>
&lt;p>Once the hidden service is running, the next step is telling Tor Browser users it exists. The &lt;code>Onion-Location&lt;/code> HTTP header does exactly this. When Tor Browser sees it, a &lt;code>.onion available&lt;/code> button appears in the address bar.&lt;/p>
&lt;p>Since my blog is hosted on Netlify, I added a &lt;code>static/_headers&lt;/code> file:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">/*
Onion-Location: http://yxtxzre2jg3zhzlnqxrifaqbkuyd3e3sdgbjqf3tisrtypsbyoclrzqd.onion
&lt;/code>&lt;/pre>&lt;/div>&lt;p>For other setups:&lt;/p>
&lt;p>&lt;strong>Nginx:&lt;/strong>&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-nginx" data-lang="nginx">&lt;span style="color:#007020;font-weight:bold">add_header&lt;/span> &lt;span style="color:#4070a0">Onion-Location&lt;/span> &lt;span style="color:#4070a0">http://your-onion-address.onion&lt;/span>&lt;span style="color:#bb60d5">$request_uri&lt;/span>;
&lt;/code>&lt;/pre>&lt;/div>&lt;p>&lt;strong>Apache:&lt;/strong>&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-apache" data-lang="apache">&lt;span style="color:#007020">Header&lt;/span> set Onion-Location &lt;span style="color:#4070a0">&amp;#34;http://your-onion-address.onion%{REQUEST_URI}s&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>&lt;strong>Caddy:&lt;/strong>&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">header Onion-Location &amp;#34;http://your-onion-address.onion{uri}&amp;#34;
&lt;/code>&lt;/pre>&lt;/div>&lt;p>&lt;strong>HTML meta tag&lt;/strong> (if you can&amp;rsquo;t set headers):&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-html" data-lang="html">&amp;lt;&lt;span style="color:#062873;font-weight:bold">meta&lt;/span> &lt;span style="color:#4070a0">http-equiv&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;onion-location&amp;#34;&lt;/span> &lt;span style="color:#4070a0">content&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;http://your-onion-address.onion&amp;#34;&lt;/span> /&amp;gt;
&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="testing-it">Testing it&lt;/h2>
&lt;p>Open Tor Browser, visit any page of my blog, and look for the purple &lt;code>.onion available&lt;/code> button in the address bar. Click it to switch to the hidden service.&lt;/p>
&lt;p>You can also verify the header is set:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">curl -I https://blog.dmcc.io | grep -i onion-location
&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="final-thoughts">Final thoughts&lt;/h2>
&lt;p>The whole setup took maybe half an hour. The Docker container runs quietly on my home server, and visitors using Tor Browser get a prompt that an onion version exists. It&amp;rsquo;s a small thing, but it means anyone who wants to read this blog privately can do so without trusting their connection to a third party.&lt;/p></description></item><item><title>Using Proton Pass CLI to Keep Linux Scripts Secure</title><link>https://blog.dmcc.io/journal/proton-pass-cli-linux-secrets/</link><pubDate>Sun, 04 Jan 2026 00:01:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/proton-pass-cli-linux-secrets/#2026-01-04</guid><description>&lt;p>If you manage dotfiles in a public Git repository, you&amp;rsquo;ve probably faced the dilemma of how to handle secrets. API keys, passwords, and tokens need to live somewhere, but committing them to version control is a security risk.&lt;/p>
&lt;p>Proton has recently released a CLI tool for Proton Pass that solves this elegantly. Instead of storing secrets in files, you fetch them at runtime from your encrypted Proton Pass vault.&lt;/p>
&lt;h3 id="installation">Installation&lt;/h3>
&lt;p>The CLI is currently in beta. Install it with:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">curl -fsSL https://proton.me/download/pass-cli/install.sh | bash
&lt;/code>&lt;/pre>&lt;/div>&lt;p>This installs &lt;code>pass-cli&lt;/code> to &lt;code>~/.local/bin/&lt;/code>. Then authenticate:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">pass-cli login
&lt;/code>&lt;/pre>&lt;/div>&lt;p>This opens a browser for Proton authentication. Once complete, you&amp;rsquo;re ready to use the CLI.&lt;/p>
&lt;h3 id="basic-usage">Basic Usage&lt;/h3>
&lt;p>List your vaults:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">pass-cli vault list
&lt;/code>&lt;/pre>&lt;/div>&lt;p>View an item:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">pass-cli item view --vault-name &lt;span style="color:#4070a0">&amp;#34;Personal&amp;#34;&lt;/span> --item-title &lt;span style="color:#4070a0">&amp;#34;My API Key&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Fetch a specific field:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">pass-cli item view --vault-name &lt;span style="color:#4070a0">&amp;#34;Personal&amp;#34;&lt;/span> --item-title &lt;span style="color:#4070a0">&amp;#34;My API Key&amp;#34;&lt;/span> --field &lt;span style="color:#4070a0">&amp;#34;API Token&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Get JSON output (useful for parsing multiple fields):&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">pass-cli item view --vault-name &lt;span style="color:#4070a0">&amp;#34;Personal&amp;#34;&lt;/span> --item-title &lt;span style="color:#4070a0">&amp;#34;My API Key&amp;#34;&lt;/span> --output json
&lt;/code>&lt;/pre>&lt;/div>&lt;h3 id="real-world-example-wrapper-scripts">Real-World Example: Wrapper Scripts&lt;/h3>
&lt;p>I have several tools that need API credentials. Rather than storing these in config files, I created wrapper scripts that fetch credentials from Proton Pass at runtime.&lt;/p>
&lt;p>Here&amp;rsquo;s a wrapper for a TUI application that needs API credentials:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#007020">#!/bin/bash
&lt;/span>&lt;span style="color:#007020">&lt;/span>&lt;span style="color:#007020">set&lt;/span> -e
&lt;span style="color:#bb60d5">PASS_CLI&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$HOME&lt;/span>&lt;span style="color:#4070a0">/.local/bin/pass-cli&amp;#34;&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Fetch credentials from Proton Pass (JSON for single API call)&lt;/span>
&lt;span style="color:#bb60d5">CREDS_JSON&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$PASS_CLI&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> item view --vault-name &lt;span style="color:#4070a0">&amp;#34;Personal&amp;#34;&lt;/span> --item-title &lt;span style="color:#4070a0">&amp;#34;My API&amp;#34;&lt;/span> --output json&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#007020;font-weight:bold">if&lt;/span> &lt;span style="color:#666">[&lt;/span> -z &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$CREDS_JSON&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> &lt;span style="color:#666">]&lt;/span>; &lt;span style="color:#007020;font-weight:bold">then&lt;/span>
&lt;span style="color:#007020">echo&lt;/span> &lt;span style="color:#4070a0">&amp;#34;Error: Failed to fetch credentials from Proton Pass&amp;#34;&lt;/span> &amp;gt;&amp;amp;&lt;span style="color:#40a070">2&lt;/span>
&lt;span style="color:#007020">echo&lt;/span> &lt;span style="color:#4070a0">&amp;#34;Make sure you&amp;#39;re logged in: pass-cli login&amp;#34;&lt;/span> &amp;gt;&amp;amp;&lt;span style="color:#40a070">2&lt;/span>
&lt;span style="color:#007020">exit&lt;/span> &lt;span style="color:#40a070">1&lt;/span>
&lt;span style="color:#007020;font-weight:bold">fi&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Parse fields from JSON&lt;/span>
&lt;span style="color:#bb60d5">API_USER&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>&lt;span style="color:#007020">echo&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$CREDS_JSON&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> | jq -r &lt;span style="color:#4070a0">&amp;#39;.item.content.content.Custom.sections[0].section_fields[] | select(.name == &amp;#34;Username&amp;#34;) | .content.Text&amp;#39;&lt;/span>&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#bb60d5">API_TOKEN&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>&lt;span style="color:#007020">echo&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$CREDS_JSON&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> | jq -r &lt;span style="color:#4070a0">&amp;#39;.item.content.content.Custom.sections[0].section_fields[] | select(.name == &amp;#34;Token&amp;#34;) | .content.Hidden&amp;#39;&lt;/span>&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Export and run&lt;/span>
&lt;span style="color:#007020">export&lt;/span> API_USER API_TOKEN
&lt;span style="color:#007020">exec&lt;/span> my-app &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$@&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The key insight: fetching JSON once and parsing with &lt;code>jq&lt;/code> is faster than making separate API calls for each field.&lt;/p>
&lt;h3 id="adding-credential-caching">Adding Credential Caching&lt;/h3>
&lt;p>The Proton Pass API call takes a few seconds. For frequently used tools, this adds noticeable latency. The solution is to cache credentials in the Linux kernel keyring:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#007020">#!/bin/bash
&lt;/span>&lt;span style="color:#007020">&lt;/span>&lt;span style="color:#007020">set&lt;/span> -e
&lt;span style="color:#bb60d5">PASS_CLI&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$HOME&lt;/span>&lt;span style="color:#4070a0">/.local/bin/pass-cli&amp;#34;&lt;/span>
&lt;span style="color:#bb60d5">CACHE_TTL&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#40a070">3600&lt;/span> &lt;span style="color:#60a0b0;font-style:italic"># 1 hour&lt;/span>
&lt;span style="color:#bb60d5">KEY_USER&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;myapp_user&amp;#34;&lt;/span>
&lt;span style="color:#bb60d5">KEY_TOKEN&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;myapp_token&amp;#34;&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Try to get from kernel keyring cache&lt;/span>
get_cached&lt;span style="color:#666">()&lt;/span> &lt;span style="color:#666">{&lt;/span>
&lt;span style="color:#007020">local&lt;/span> key_id
&lt;span style="color:#bb60d5">key_id&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>keyctl search @u user &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$1&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> 2&amp;gt;/dev/null&lt;span style="color:#007020;font-weight:bold">)&lt;/span> &lt;span style="color:#666">||&lt;/span> &lt;span style="color:#007020;font-weight:bold">return&lt;/span> &lt;span style="color:#40a070">0&lt;/span>
keyctl pipe &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$key_id&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> 2&amp;gt;/dev/null
&lt;span style="color:#666">}&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Store in kernel keyring with TTL&lt;/span>
cache_credential&lt;span style="color:#666">()&lt;/span> &lt;span style="color:#666">{&lt;/span>
&lt;span style="color:#007020">local&lt;/span> key_id
&lt;span style="color:#bb60d5">key_id&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>keyctl add user &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$1&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$2&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> @u&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
keyctl timeout &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$key_id&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$CACHE_TTL&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>
&lt;span style="color:#666">}&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Try cache first&lt;/span>
&lt;span style="color:#bb60d5">API_USER&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>get_cached &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$KEY_USER&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#bb60d5">API_TOKEN&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>get_cached &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$KEY_TOKEN&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># If not cached, fetch from Proton Pass&lt;/span>
&lt;span style="color:#007020;font-weight:bold">if&lt;/span> &lt;span style="color:#666">[&lt;/span> -z &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$API_USER&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> &lt;span style="color:#666">]&lt;/span> &lt;span style="color:#666">||&lt;/span> &lt;span style="color:#666">[&lt;/span> -z &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$API_TOKEN&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> &lt;span style="color:#666">]&lt;/span>; &lt;span style="color:#007020;font-weight:bold">then&lt;/span>
&lt;span style="color:#bb60d5">CREDS_JSON&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$PASS_CLI&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> item view --vault-name &lt;span style="color:#4070a0">&amp;#34;Personal&amp;#34;&lt;/span> --item-title &lt;span style="color:#4070a0">&amp;#34;My API&amp;#34;&lt;/span> --output json&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#bb60d5">API_USER&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>&lt;span style="color:#007020">echo&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$CREDS_JSON&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> | jq -r &lt;span style="color:#4070a0">&amp;#39;.item.content.content.Custom.sections[0].section_fields[] | select(.name == &amp;#34;Username&amp;#34;) | .content.Text&amp;#39;&lt;/span>&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#bb60d5">API_TOKEN&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>&lt;span style="color:#007020">echo&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$CREDS_JSON&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> | jq -r &lt;span style="color:#4070a0">&amp;#39;.item.content.content.Custom.sections[0].section_fields[] | select(.name == &amp;#34;Token&amp;#34;) | .content.Hidden&amp;#39;&lt;/span>&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Cache for next time&lt;/span>
cache_credential &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$KEY_USER&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$API_USER&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>
cache_credential &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$KEY_TOKEN&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$API_TOKEN&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>
&lt;span style="color:#007020;font-weight:bold">fi&lt;/span>
&lt;span style="color:#007020">export&lt;/span> API_USER API_TOKEN
&lt;span style="color:#007020">exec&lt;/span> my-app &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$@&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>With caching:&lt;/p>
&lt;ul>
&lt;li>First run: ~5-6 seconds (fetches from Proton Pass)&lt;/li>
&lt;li>Subsequent runs: ~0.01 seconds (from kernel keyring)&lt;/li>
&lt;/ul>
&lt;p>The cache expires after one hour, or when you log out. Clear it manually with:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">keyctl purge user myapp_user
keyctl purge user myapp_token
&lt;/code>&lt;/pre>&lt;/div>&lt;h3 id="built-in-secret-injection">Built-in Secret Injection&lt;/h3>
&lt;p>The CLI also has built-in commands for secret injection. The &lt;code>run&lt;/code> command passes secrets as environment variables:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">pass-cli run --env-file .env.template -- ./my-script.sh
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The &lt;code>inject&lt;/code> command processes template files:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">pass-cli inject -i config.template -o config.conf
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Both commands use a &lt;code>pass://vault/item/field&lt;/code> URI syntax to reference secrets.&lt;/p>
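&lt;p>For the &lt;code>run&lt;/code> command, the template is an env-style file mapping variable names to those URIs. Check the CLI&amp;rsquo;s own documentation for the exact template format; the sketch below assumes the common &lt;code>KEY=pass://vault/item/field&lt;/code> convention, and the vault, item, and field names are placeholders for your own:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash"># .env.template -- hypothetical example; confirm the syntax against pass-cli docs
API_USER=pass://Personal/My API/Username
API_TOKEN=pass://Personal/My API/Token
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Run it with &lt;code>pass-cli run --env-file .env.template -- ./my-script.sh&lt;/code> and the script sees the resolved values in its environment, never the template.&lt;/p>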
&lt;h3 id="updating-config-files">Updating Config Files&lt;/h3>
&lt;p>For applications that read credentials from config files (like WeeChat&amp;rsquo;s &lt;code>sec.conf&lt;/code>), the wrapper can update the file before launching:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#007020">#!/bin/bash
&lt;/span>&lt;span style="color:#007020">&lt;/span>&lt;span style="color:#007020">set&lt;/span> -e
&lt;span style="color:#bb60d5">SEC_CONF&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$HOME&lt;/span>&lt;span style="color:#4070a0">/.config/myapp/secrets.conf&amp;#34;&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Fetch password from Proton Pass&lt;/span>
&lt;span style="color:#bb60d5">PASSWORD&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$HOME&lt;/span>&lt;span style="color:#4070a0">/.local/bin/pass-cli&amp;#34;&lt;/span> item view --vault-name &lt;span style="color:#4070a0">&amp;#34;Personal&amp;#34;&lt;/span> --item-title &lt;span style="color:#4070a0">&amp;#34;My Service&amp;#34;&lt;/span> --field password&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Update config file&lt;/span>
sed -i &lt;span style="color:#4070a0">&amp;#34;s/^password = \&amp;#34;.*\&amp;#34;/password = \&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$PASSWORD&lt;/span>&lt;span style="color:#4070a0">\&amp;#34;/&amp;#34;&lt;/span> &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$SEC_CONF&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>
&lt;span style="color:#007020">exec&lt;/span> myapp &lt;span style="color:#4070a0">&amp;#34;&lt;/span>&lt;span style="color:#bb60d5">$@&lt;/span>&lt;span style="color:#4070a0">&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;h3 id="ssh-agent-integration">SSH Agent Integration&lt;/h3>
&lt;p>The CLI can also act as an SSH agent, loading keys stored in Proton Pass:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">pass-cli ssh-agent --help
&lt;/code>&lt;/pre>&lt;/div>&lt;p>This is useful if you store SSH private keys in your vault.&lt;/p>
&lt;h3 id="security-considerations">Security Considerations&lt;/h3>
&lt;p>This approach keeps secrets out of your dotfiles repository entirely. The wrapper scripts reference Proton Pass item names, not actual credentials. Your secrets remain encrypted in Proton&amp;rsquo;s infrastructure and are only decrypted locally when needed.&lt;/p>
&lt;p>The kernel keyring cache is per-user and lives only in memory. It&amp;rsquo;s cleared on logout or reboot, and the TTL ensures credentials don&amp;rsquo;t persist indefinitely.&lt;/p>
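&lt;p>If you want to verify what is cached at any point, the standard &lt;code>keyctl&lt;/code> tooling can inspect your user keyring (the key name below matches the wrapper script above):&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash"># list everything currently in your user keyring
keyctl show @u
# look up a single cached credential by name (prints its key ID if present)
keyctl search @u user myapp_user
&lt;/code>&lt;/pre>&lt;/div>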
&lt;p>For public dotfiles repositories, this is a clean solution: commit your wrapper scripts freely, keep your secrets in Proton Pass.&lt;/p></description></item><item><title>Scheduled Deploys for Future Posts</title><link>https://blog.dmcc.io/journal/scheduled-deploys-for-future-posts/</link><pubDate>Fri, 02 Jan 2026 00:00:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/scheduled-deploys-for-future-posts/#2026-01-01</guid><description>&lt;p>One of the small joys of running a static blog is scheduling posts in advance. Write a few pieces when inspiration strikes, set future dates, and let them publish themselves while you&amp;rsquo;re busy with other things.&lt;/p>
&lt;p>There&amp;rsquo;s just one problem: static sites don&amp;rsquo;t work that way out of the box.&lt;/p>
&lt;h2 id="the-problem-with-static-sites">The problem with static sites&lt;/h2>
&lt;p>With a dynamic CMS like WordPress, scheduling is built in. The server checks the current time, compares it to your post&amp;rsquo;s publish date, and serves it up when the moment arrives. Simple.&lt;/p>
&lt;p>Static site generators like Hugo work differently. When you build the site, Hugo looks at all your content, checks which posts have dates in the past, and generates HTML for those. Future-dated posts get skipped entirely. They don&amp;rsquo;t exist in the built output.&lt;/p>
&lt;p>This means if you write a post today with tomorrow&amp;rsquo;s date, it won&amp;rsquo;t appear until you rebuild the site tomorrow. And if you&amp;rsquo;re using Netlify&amp;rsquo;s automatic deploys from Git, that rebuild only happens when you push a commit. No commit, no deploy, no post.&lt;/p>
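&lt;p>You can see this behaviour locally: Hugo&amp;rsquo;s &lt;code>--buildFuture&lt;/code> flag (shorthand &lt;code>-F&lt;/code>) tells it to include future-dated content, which is handy for previewing scheduled posts before they go live:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash"># preview the site locally, including future-dated posts
hugo server --buildFuture
# a plain build skips them, which is why the deploy date matters
hugo
&lt;/code>&lt;/pre>&lt;/div>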
&lt;p>I could set a reminder to push an empty commit every morning. But that defeats the purpose of scheduling posts in the first place.&lt;/p>
&lt;h2 id="the-solution-scheduled-builds">The solution: scheduled builds&lt;/h2>
&lt;p>The fix is straightforward: trigger a Netlify build automatically every day, whether or not there&amp;rsquo;s new code to deploy.&lt;/p>
&lt;p>Netlify provides &lt;a href="https://docs.netlify.com/configure-builds/build-hooks/">build hooks&lt;/a> for exactly this purpose. A build hook is a unique URL that triggers a new deploy when you send a POST request to it. All you need is something to call that URL on a schedule.&lt;/p>
&lt;p>GitHub Actions handles the scheduling side. A simple workflow with a cron trigger runs every day at midnight UK time and pings the build hook. Netlify does the rest.&lt;/p>
&lt;h2 id="the-setup">The setup&lt;/h2>
&lt;p>First, create a build hook in Netlify:&lt;/p>
&lt;ol>
&lt;li>Go to your site&amp;rsquo;s dashboard&lt;/li>
&lt;li>Navigate to &lt;strong>Site settings → Build &amp;amp; deploy → Build hooks&lt;/strong>&lt;/li>
&lt;li>Click &lt;strong>Add build hook&lt;/strong>, give it a name, and select your production branch&lt;/li>
&lt;li>Copy the generated URL&lt;/li>
&lt;/ol>
&lt;p>Next, add that URL as a secret in your GitHub repository:&lt;/p>
&lt;ol>
&lt;li>Go to &lt;strong>Settings → Secrets and variables → Actions&lt;/strong>&lt;/li>
&lt;li>Create a new repository secret called &lt;code>NETLIFY_BUILD_HOOK&lt;/code>&lt;/li>
&lt;li>Paste the build hook URL as the value&lt;/li>
&lt;/ol>
&lt;p>Finally, create a workflow file at &lt;code>.github/workflows/scheduled-deploy.yml&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>Scheduled&lt;span style="color:#bbb"> &lt;/span>Netlify&lt;span style="color:#bbb"> &lt;/span>Deploy&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">on&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">schedule&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Runs at 00:01 UK time (covers both GMT and BST)&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">cron&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#39;1 0 * * *&amp;#39;&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># 00:01 GMT (winter)&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">cron&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#39;1 23 * * *&amp;#39;&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># 00:01 BST (summer)&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">workflow_dispatch&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Allow manual trigger&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">jobs&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">deploy&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">runs-on&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>ubuntu-latest&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">steps&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>Trigger&lt;span style="color:#bbb"> &lt;/span>Netlify&lt;span style="color:#bbb"> &lt;/span>Build&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">run&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>curl&lt;span style="color:#bbb"> &lt;/span>-X&lt;span style="color:#bbb"> &lt;/span>POST&lt;span style="color:#bbb"> &lt;/span>-d&lt;span style="color:#bbb"> &lt;/span>{}&lt;span style="color:#bbb"> &lt;/span>${{&lt;span style="color:#bbb"> &lt;/span>secrets.NETLIFY_BUILD_HOOK&lt;span style="color:#bbb"> &lt;/span>}}&lt;span style="color:#bbb">
&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The dual cron schedule handles UK daylight saving time. GitHub Actions cron runs in UTC, so during winter (GMT) the first schedule fires at 00:01 UK time, and during summer (BST) the second one does. Both schedules actually fire every day, meaning one deploy always lands just after midnight and the other at an off-peak hour, but an extra deploy is harmless.&lt;/p>
&lt;p>The &lt;code>workflow_dispatch&lt;/code> trigger is optional but handy. It adds a &amp;ldquo;Run workflow&amp;rdquo; button in the GitHub Actions UI, letting you trigger a deploy manually without pushing a commit.&lt;/p>
&lt;h2 id="the-result">The result&lt;/h2>
&lt;p>Now every morning at 00:01, GitHub Actions wakes up, pokes the Netlify build hook, and a fresh deploy rolls out. Any posts with today&amp;rsquo;s date appear automatically. No manual intervention required.&lt;/p>
&lt;p>It&amp;rsquo;s a small piece of automation, but it removes just enough friction to make scheduling posts actually practical. Write when you want, publish when you planned.&lt;/p></description></item><item><title>Leaving Spotify for Self-Hosted Audio</title><link>https://blog.dmcc.io/journal/spotify-to-self-hosted/</link><pubDate>Wed, 31 Dec 2025 21:00:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/spotify-to-self-hosted/#2025-12-31</guid><description>&lt;p>I&amp;rsquo;ve been a Spotify subscriber for years. It&amp;rsquo;s convenient, the catalogue is vast, and the recommendations used to be genuinely useful. But lately, I&amp;rsquo;ve found myself increasingly uncomfortable with the direction the platform is heading.&lt;/p>
&lt;h2 id="the-spotify-problem">The Spotify problem&lt;/h2>
&lt;p>It&amp;rsquo;s hard to pin down exactly when Spotify stopped feeling like a music service and started feeling like something else entirely. A few things have been gnawing at me:&lt;/p>
&lt;p>&lt;strong>Artist compensation is broken.&lt;/strong> The per-stream payout is &lt;a href="https://techcrunch.com/2025/03/11/spotify-says-its-payouts-are-getting-better-but-artists-still-disagree/">famously tiny&lt;/a>, and the model actively discourages the kind of music I actually want to support. Albums that reward repeated listening lose out to background playlist fodder designed to rack up streams. In 2024, Spotify &lt;a href="https://www.newschoolfreepress.com/2025/12/08/spotify-isnt-your-friend-how-the-platform-takes-your-money-while-artists-pay-the-price/">stopped paying royalties entirely&lt;/a> for any track under 1,000 streams, demonetising an estimated 86% of music on the platform.&lt;/p>
&lt;p>&lt;strong>The interface is hostile.&lt;/strong> Every update seems to prioritise podcasts, audiobooks, and algorithmically-generated content over letting me play my own playlists. The homepage is a mess of things I didn&amp;rsquo;t ask for.&lt;/p>
&lt;p>&lt;strong>AI-generated music is creeping in.&lt;/strong> There&amp;rsquo;s been a &lt;a href="https://www.npr.org/2025/08/08/nx-s1-5492314/ai-music-streaming-services-spotify">wave of low-effort AI tracks flooding the platform&lt;/a>, often mimicking real artists or filling ambient playlists. Spotify removed &lt;a href="https://rebelmusicdistribution.com/2025/09/30/spotify-removes-75-million-fake-tracks-in-2024-what-it-means-for-artists-in-2025/">over 75 million spam tracks&lt;/a> in 2024 alone. It feels like the beginning of a race to the bottom, where quantity beats quality and genuine artists get drowned out.&lt;/p>
&lt;p>&lt;strong>I don&amp;rsquo;t own anything.&lt;/strong> After years of subscription payments, I have nothing to show for it. If Spotify disappears tomorrow, or removes an album I love, it&amp;rsquo;s just gone.&lt;/p>
&lt;p>&lt;strong>The company&amp;rsquo;s direction feels off.&lt;/strong> Beyond the platform itself, there&amp;rsquo;s the question of what Spotify&amp;rsquo;s leadership prioritises. CEO Daniel Ek has been &lt;a href="https://www.cnbc.com/2025/06/17/spotifys-daniel-ek-leads-investment-in-defense-startup-helsing.html">investing heavily in European defense technology&lt;/a>. That&amp;rsquo;s his prerogative, of course, but it underlines that my subscription money flows to a company whose priorities don&amp;rsquo;t align with mine.&lt;/p>
&lt;p>&lt;strong>Spotify Wrapped was a wake-up call.&lt;/strong> In previous years, Wrapped felt like a fun novelty. This year it was a reminder that I don&amp;rsquo;t actually listen to that many artists. The ones I enjoy, I play on repeat. So why am I paying a monthly subscription to listen to the same songs over and over? The artists I love aren&amp;rsquo;t seeing much from those streams, and I&amp;rsquo;m essentially renting music at an increasingly high cost. The family plan price keeps creeping up, and for what? The privilege of temporarily accessing albums I could just buy outright?&lt;/p>
&lt;h2 id="the-alternatives-arent-much-better">The alternatives aren&amp;rsquo;t much better&lt;/h2>
&lt;p>The obvious response is &amp;ldquo;just switch to another service.&amp;rdquo; But the alternatives have their own problems.&lt;/p>
&lt;p>&lt;strong>YouTube Music / Google&lt;/strong> shares many of Spotify&amp;rsquo;s issues, with the added concern that both platforms &lt;a href="https://money.cnn.com/2018/04/19/technology/youtube-ads-extreme-content-investigation/index.html">profit from advertising revenue&lt;/a> that flows from some less-than-savoury sources. When your business model depends on engagement at any cost, the incentives get murky fast.&lt;/p>
&lt;p>&lt;strong>Apple Music&lt;/strong> locks you further into an ecosystem and has its own history of prioritising platform control over user freedom.&lt;/p>
&lt;p>&lt;strong>Tidal&lt;/strong> is perhaps the current outlier. Better artist payouts, lossless audio as standard, and seemingly fewer of the dark patterns plaguing the others. But streaming services have a habit of starting idealistic and drifting toward the mean once growth becomes the priority. How long until Tidal follows the same path? I&amp;rsquo;d rather not find out by having my library disappear when they pivot.&lt;/p>
&lt;p>The fundamental problem isn&amp;rsquo;t any single company. It&amp;rsquo;s the streaming model itself. When you rent access instead of owning files, you&amp;rsquo;re always at the mercy of corporate decisions you have no control over.&lt;/p>
&lt;h2 id="what-i-actually-want">What I actually want&lt;/h2>
&lt;p>When I thought about what I wanted from music, the list was simple:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Ownership&lt;/strong> - Files that live on my hardware, that I control&lt;/li>
&lt;li>&lt;strong>Quality&lt;/strong> - Lossless audio, not compressed streams&lt;/li>
&lt;li>&lt;strong>No algorithms&lt;/strong> - I&amp;rsquo;ll decide what to listen to, thanks&lt;/li>
&lt;li>&lt;strong>Supporting artists&lt;/strong> - Buying albums directly puts more money in their pockets than years of streaming&lt;/li>
&lt;/ol>
&lt;h2 id="the-setup">The setup&lt;/h2>
&lt;p>I&amp;rsquo;ve landed on a self-hosted Plex library for my music collection, served up with Plexamp on all my devices.&lt;/p>
&lt;p>Plexamp is genuinely excellent. It&amp;rsquo;s a dedicated music player built by Plex, and it feels like it was designed by people who actually care about listening to music rather than optimising engagement metrics. Clean interface, proper gapless playback, and features like sonic exploration that help with discovery without feeling algorithmic.&lt;/p>
&lt;p>The client availability sealed the deal. Plexamp runs on iOS, Android, macOS, Windows, and Linux. The only gap is native car integration, but Bluetooth fills that role with minimal friction. Connect, play, done.&lt;/p>
&lt;p>The server side is just Plex running on my existing home server. Music files live on local storage, backed up properly, under my control. No subscription required for basic playback, though Plex Pass unlocks some Plexamp features.&lt;/p>
&lt;h2 id="what-the-flac-is-lossless">What the FLAC is lossless?&lt;/h2>
&lt;p>One of the benefits of owning your music files is choosing the quality. My entire library is FLAC: lossless audio that preserves every detail from the original recording.&lt;/p>
&lt;p>To be honest, I can&amp;rsquo;t reliably tell the difference between Spotify&amp;rsquo;s high-quality streams and lossless audio on my current setup. Most people can&amp;rsquo;t. But that&amp;rsquo;s not really the point.&lt;/p>
&lt;p>Audio technology keeps improving. Better headphones, better DACs, better speakers. The music I&amp;rsquo;m collecting now might be played on equipment that doesn&amp;rsquo;t exist yet. By storing everything in lossless, I&amp;rsquo;m preserving the highest possible quality for whatever the future brings. I&amp;rsquo;d rather have more data than I need today than wish I&amp;rsquo;d kept it later.&lt;/p>
&lt;p>With streaming, you get whatever quality the service decides to give you. With my own files, the choice is mine.&lt;/p>
&lt;h2 id="where-to-get-music">Where to get music&lt;/h2>
&lt;p>Bandcamp is the obvious choice for buying digital music directly. Artists get a better cut, you get lossless files, and there&amp;rsquo;s a strong community around it. In theory, it&amp;rsquo;s perfect.&lt;/p>
&lt;p>In practice, I find the search experience frustrating. Getting to the specific artist and album I want feels slower than it should. Maybe I&amp;rsquo;m spoiled by years of Spotify&amp;rsquo;s instant search, but the friction is noticeable. For now, I&amp;rsquo;m putting up with it because the alternatives are worse, but I&amp;rsquo;m constantly searching for something better.&lt;/p>
&lt;p>If you know of a good source for purchasing lossless music with a decent search experience, I&amp;rsquo;d love to hear about it.&lt;/p>
&lt;h2 id="what-ill-miss">What I&amp;rsquo;ll miss&lt;/h2>
&lt;p>I won&amp;rsquo;t pretend this is all upside. Spotify&amp;rsquo;s discovery features, when they worked, introduced me to artists I genuinely love. The convenience of having everything available instantly is hard to replicate. And sharing music with friends becomes more complicated when you can&amp;rsquo;t just send a link.&lt;/p>
&lt;p>But those trade-offs feel worth it. I&amp;rsquo;d rather have a smaller collection of music I actually own than endless access to a library that&amp;rsquo;s increasingly polluted with content designed to game the algorithm rather than move the listener.&lt;/p>
&lt;h2 id="the-transition">The transition&lt;/h2>
&lt;p>I won&amp;rsquo;t sugarcoat it: the friction to switch has been fairly high.&lt;/p>
&lt;p>Ripping, cataloguing, and transferring content is one thing. The curated playlists from years gone by are another. Those playlists represent hours of listening, discovering, and refining. Losing them felt like losing a part of my music history.&lt;/p>
&lt;p>&lt;a href="https://soundiiz.com/auto-sync-playlist">Soundiiz&lt;/a> came in handy here, automatically copying playlists across to Plex. It worked well for most of the heavy lifting. But invariably there&amp;rsquo;s a song on a crucial playlist that I just don&amp;rsquo;t own yet, leaving a gap. Until I fill those gaps, the migration doesn&amp;rsquo;t feel complete.&lt;/p>
&lt;p>It&amp;rsquo;s a slow process. Every missing track is a reminder that I&amp;rsquo;m rebuilding something that took years to accumulate. But each album I add is mine now, permanently, and that makes the effort feel worthwhile.&lt;/p></description></item><item><title>Omarchy Hardening</title><link>https://blog.dmcc.io/journal/omarchy-hardening/</link><pubDate>Mon, 29 Dec 2025 11:00:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/omarchy-hardening/#2025-12-29</guid><description>&lt;p>A few weeks ago, I came across &lt;a href="https://xn--gckvb8fzb.com/a-word-on-omarchy/">A Word on Omarchy&lt;/a> which highlighted some security gaps in Omarchy&amp;rsquo;s default configuration. Things like LLMNR being enabled, UFW configured but not actually running, and relaxed login attempt limits.&lt;/p>
&lt;p>The post resonated with me. Omarchy is a fantastic opinionated setup for Arch Linux with Hyprland, but like any distribution that prioritises convenience, some security defaults get loosened in the process. That&amp;rsquo;s not necessarily wrong, it&amp;rsquo;s a trade-off, but it&amp;rsquo;s worth knowing about.&lt;/p>
&lt;p>So I built &lt;a href="https://github.com/dannymcc/omarchy-hardening">Omarchy Hardening&lt;/a>.&lt;/p>
&lt;h2 id="what-it-does">What it does&lt;/h2>
&lt;p>It&amp;rsquo;s an interactive terminal script that walks you through five hardening options:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Disable LLMNR&lt;/strong> - Prevents name-resolution poisoning attacks on local networks&lt;/li>
&lt;li>&lt;strong>Enable UFW Firewall&lt;/strong> - For earlier Omarchy versions where UFW wasn&amp;rsquo;t enabled by default&lt;/li>
&lt;li>&lt;strong>Tailscale-only SSH&lt;/strong> - Restricts SSH to your Tailscale network, making it invisible to the public internet&lt;/li>
&lt;li>&lt;strong>Limit Login Attempts&lt;/strong> - Lowers the lockout threshold from 10 failed attempts back to 3&lt;/li>
&lt;li>&lt;strong>Configure Git Signing&lt;/strong> - Enables SSH commit signing for verified commits&lt;/li>
&lt;/ol>
&lt;p>Each option shows exactly what will change before you confirm. Nothing is selected by default.&lt;/p>
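&lt;p>To give a feel for what the script inspects, here is a rough Python sketch that audits two of the five settings from plain config text. The file formats are real (systemd-resolved&amp;rsquo;s &lt;code>resolved.conf&lt;/code> and pam_faillock&amp;rsquo;s &lt;code>faillock.conf&lt;/code>), but the parsing and the fallback defaults are simplified assumptions, not the actual omarchy-hardening code.&lt;/p>

```python
def parse_kv(text):
    """Parse simple 'key = value' config lines, skipping comments and [sections]."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "=" in line and not line.startswith("["):
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings

def audit(resolved_conf, faillock_conf):
    """Flag the LLMNR and login-attempt settings if they look loose."""
    findings = []
    # systemd-resolved enables LLMNR unless told otherwise.
    if parse_kv(resolved_conf).get("LLMNR", "yes").lower() != "no":
        findings.append("LLMNR is not disabled in resolved.conf")
    # Assumption: Omarchy ships a deny threshold of 10; 3 is the stricter target.
    if int(parse_kv(faillock_conf).get("deny", "10")) > 3:
        findings.append("faillock allows more than 3 failed logins")
    return findings
```

&lt;p>Running an audit like this before and after a change is a cheap way to confirm it actually landed.&lt;/p>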
&lt;h2 id="a-word-of-caution">A word of caution&lt;/h2>
&lt;p>The script opens with a warning, and I&amp;rsquo;ll repeat it here: &lt;strong>you should not rely on automation to secure your system&lt;/strong>.&lt;/p>
&lt;p>The best approach is to understand your distribution and make these changes yourself. Read the source code. Run the commands manually. This builds knowledge you&amp;rsquo;ll need when things go wrong.&lt;/p>
&lt;p>The tool exists to demonstrate what these changes look like and to make them easier to apply consistently. But it&amp;rsquo;s not a substitute for understanding.&lt;/p>
&lt;h2 id="whats-next">What&amp;rsquo;s next&lt;/h2>
&lt;p>If you&amp;rsquo;re curious about going further, the README includes a section on additional hardening steps. &lt;a href="https://github.com/evilsocket/opensnitch">OpenSnitch&lt;/a> is worth particular attention. It&amp;rsquo;s an application-level firewall that prompts you whenever a program tries to make a network connection. Educational and practical.&lt;/p>
&lt;p>The code is on GitHub: &lt;a href="https://github.com/dannymcc/omarchy-hardening">dannymcc/omarchy-hardening&lt;/a>&lt;/p></description></item><item><title>ZeroNet: The Web Without Servers</title><link>https://blog.dmcc.io/journal/zeronet-decentralised-web/</link><pubDate>Mon, 29 Dec 2025 01:00:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/zeronet-decentralised-web/#2025-12-29</guid><description>&lt;p>I&amp;rsquo;ve been exploring &lt;a href="https://github.com/HelloZeroNet/ZeroNet">ZeroNet&lt;/a> recently, a peer-to-peer web platform that&amp;rsquo;s been around since 2015 but still feels like a glimpse of what the internet &lt;em>could&lt;/em> be. It&amp;rsquo;s not mainstream, and it&amp;rsquo;s not trying to be. But for anyone who cares about decentralisation and censorship-resistance, it&amp;rsquo;s worth understanding.&lt;/p>
&lt;h2 id="what-it-is">What It Is&lt;/h2>
&lt;p>ZeroNet is a decentralised network where websites exist without traditional servers. Instead of requesting a page from a server somewhere, your browser downloads it from other users who already have it. Think BitTorrent, but for websites. Once you&amp;rsquo;ve visited a site, you become a host for it too. The more people visit, the more resilient the site becomes.&lt;/p>
&lt;p>There&amp;rsquo;s no company to take to court. No single point of failure. No domain registrar that can be pressured into pulling the plug.&lt;/p>
&lt;h2 id="how-it-works">How It Works&lt;/h2>
&lt;p>The technical bits are surprisingly elegant. ZeroNet uses Bitcoin cryptography for identity. Each site has a unique address derived from a public/private key pair. The site owner signs updates with their private key, and everyone can verify those signatures. This means content can be updated, but only by whoever holds the key. No passwords, no accounts, no centralised authentication.&lt;/p>
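&lt;p>To make the address idea concrete, here is a deliberately simplified sketch. Real ZeroNet follows Bitcoin&amp;rsquo;s Base58Check scheme (a RIPEMD-160 of a SHA-256 of the public key, plus a version byte and a checksum); this version keeps only the flavour of it: hash the public key, then Base58-encode the digest.&lt;/p>

```python
import hashlib

# Bitcoin's Base58 alphabet: no 0, O, I, or l, to avoid misreading.
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data):
    """Encode bytes as a Base58 string (leading-zero handling omitted)."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return out or B58[0]

def site_address(pubkey):
    """Derive a short, human-typeable address from a public key (simplified)."""
    return b58encode(hashlib.sha256(pubkey).digest())
```

&lt;p>The point is that the address commits to the key: anyone holding the address can verify that an update was signed by the matching private key, with no registrar or login involved.&lt;/p>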
&lt;p>Content is distributed using BitTorrent&amp;rsquo;s protocol. When you visit a ZeroNet site, you&amp;rsquo;re downloading it from peers and simultaneously seeding it to others. Sites are essentially signed archives that propagate across the network.&lt;/p>
&lt;p>For privacy, ZeroNet can route traffic through Tor. It&amp;rsquo;s optional, but turning it on means your IP address isn&amp;rsquo;t visible to other peers. Combined with the fact that there&amp;rsquo;s no central server logging requests, the privacy properties are genuinely interesting.&lt;/p>
&lt;h2 id="why-im-interested">Why I&amp;rsquo;m Interested&lt;/h2>
&lt;p>My interest in ZeroNet ties directly into my broader &lt;a href="https://blog.dmcc.io/privacy/">views on privacy&lt;/a>. I&amp;rsquo;m not naive about the limitations of decentralised systems, or the fact that censorship resistance can protect content that probably shouldn&amp;rsquo;t be protected. But there&amp;rsquo;s something valuable in understanding how these networks function.&lt;/p>
&lt;p>The centralised web has become remarkably fragile. A handful of companies control most of the infrastructure, and they&amp;rsquo;re increasingly subject to political and legal pressure. That&amp;rsquo;s sometimes appropriate. Nobody wants to defend genuinely harmful content. But the tools of control, once built, don&amp;rsquo;t stay confined to their intended purpose.&lt;/p>
&lt;p>ZeroNet represents a different architecture entirely. It&amp;rsquo;s not about evading accountability, it&amp;rsquo;s about &lt;em>distributing&lt;/em> it. Instead of trusting a company to host your content and hoping they don&amp;rsquo;t change their terms of service, you trust mathematics. The trade-offs are real: slower access, no search engines worth mentioning, and a user experience that assumes technical competence. But those are engineering problems, not fundamental limitations.&lt;/p>
&lt;p>I&amp;rsquo;m not suggesting everyone should abandon the normal web for ZeroNet. That would be impractical and unnecessary. But understanding how decentralised alternatives work feels increasingly important. The architecture of the tools we use shapes what&amp;rsquo;s possible, and diversity in that architecture is probably healthy.&lt;/p>
&lt;p>For now, I&amp;rsquo;m treating ZeroNet as an experiment. Something to explore and learn from rather than rely on. But in a world where digital infrastructure is more contested than ever, it&amp;rsquo;s useful to know that alternatives exist.&lt;/p>
&lt;p>Thanks to &lt;a href="https://xn--gckvb8fzb.com/infrastructure/#decentralized-networks--darknets">ポテト&lt;/a> for pointing me towards ZeroNet.&lt;/p></description></item><item><title>Value</title><link>https://blog.dmcc.io/journal/value/</link><pubDate>Sun, 14 Dec 2025 10:00:00 +0000</pubDate><guid>https://blog.dmcc.io/journal/value/#2025-12-28</guid><description>&lt;p>I recently passed my advanced motorcycle test with the IAM. A F1RST, no less. The highest grade. And within hours of getting the result, I&amp;rsquo;d already started telling myself it wasn&amp;rsquo;t that impressive.&lt;/p>
&lt;p>This happens every time. The thing I&amp;rsquo;ve been working toward, the qualification, the goal, the milestone, suddenly feels smaller the moment I reach it. Not worthless, exactly. Just&amp;hellip; less. As though the act of achieving it somehow deflated the whole thing.&lt;/p>
&lt;p>I don&amp;rsquo;t think I&amp;rsquo;m alone in this. There&amp;rsquo;s a term for it: the arrival fallacy. The idea that we assume reaching a destination will bring lasting satisfaction, only to find that satisfaction evaporates almost the moment we arrive. We spend months, sometimes years, chasing something, convinced it matters. And then we get there, look around, and think: is that it?&lt;/p>
&lt;p>Some of it is just recalibration. What felt like a stretch yesterday becomes the new baseline today. But I think there&amp;rsquo;s something else going on too, at least for me. Once I&amp;rsquo;ve done something, I can see exactly how I did it. Every small step, every moment of doubt, every bit of luck. The mystery disappears. And without the mystery, it stops feeling like an achievement. It starts feeling inevitable, even though it wasn&amp;rsquo;t.&lt;/p>
&lt;p>The IAM test is a good example. It has a real failure rate. You can&amp;rsquo;t charm your way through it or stumble into a F1RST by accident. An examiner who doesn&amp;rsquo;t know you, doesn&amp;rsquo;t care about your backstory, watches you ride for over an hour and makes a judgement. It&amp;rsquo;s external. Objective. Difficult. And yet, within a day, my brain had already filed it under things that were always going to happen.&lt;/p>
&lt;p>I suspect this is a feature, not a bug. If we stayed satisfied, we&amp;rsquo;d stop striving. The restlessness that makes achievements feel hollow is probably the same restlessness that pushed us toward them in the first place. But knowing that doesn&amp;rsquo;t make it less frustrating. It just makes it feel like a trap, one we set for ourselves, over and over again.&lt;/p>
&lt;p>So what&amp;rsquo;s the answer? I&amp;rsquo;m not sure there is one. Maybe the goal isn&amp;rsquo;t to feel permanently satisfied, but to get better at pausing. At noticing. At letting the moment land before the next one arrives. Maybe value isn&amp;rsquo;t something we feel. It&amp;rsquo;s something we decide to assign, consciously, before our brains have a chance to reclassify it as ordinary.&lt;/p>
&lt;p>I passed my advanced test. I got a F1RST. That matters. Even if I have to keep reminding myself.&lt;/p></description></item><item><title>The Endless Hunt for Productivity Nirvana</title><link>https://blog.dmcc.io/journal/the-endless-hunt-for-productivity-nirvana/</link><pubDate>Thu, 12 Jun 2025 08:01:00 +0100</pubDate><guid>https://blog.dmcc.io/journal/the-endless-hunt-for-productivity-nirvana/#2025-12-28</guid><description>&lt;p>I&amp;rsquo;ve been chasing the perfect productivity setup for longer than I care to admit. The signs are all there: a Downloads folder cluttered with productivity apps, browser bookmarks organised by system acronyms, and that familiar feeling of starting fresh with yet another note-taking tool, convinced that &lt;em>this time&lt;/em> will be different.&lt;/p>
&lt;p>My digital graveyard is extensive. NotePlan, Microsoft OneNote, Apple Notes, Google Keep, Notion, Airtable, Logseq, Google Docs, Obsidian, Simple Notes — I&amp;rsquo;ve installed them all, configured them meticulously, and abandoned them with the same predictable rhythm. But it doesn&amp;rsquo;t stop there. I&amp;rsquo;ve ventured into the analogue world too: expensive Field Notes that made me feel like a proper writer, simple yellow legal pads that promised no-nonsense functionality, and even those futuristic smart pens with tiny cameras that record your scribbles for digital playback.&lt;/p>
&lt;p>Then there&amp;rsquo;s the hardware rabbit hole. iPads of every generation, each promising to bridge the gap between digital and analogue. The reMarkable, which felt revolutionary until it became another expensive paperweight. Every purchase accompanied by the same ritual: diving deep into YouTube tutorials, hunting for the &amp;ldquo;perfect&amp;rdquo; workflow that would finally unlock productivity nirvana.&lt;/p>
&lt;p>If you&amp;rsquo;ve been down this path, you&amp;rsquo;ll recognise the cast of characters. Tiago Forte with his PARA system (Projects, Areas, Resources, Archive). Cal Newport advocating for CCCC (Craft, Constitution, Community, Contemplation). Matt Perman&amp;rsquo;s PFFSP framework (Personal, Family, Faith, Social, Professional). Each guru promising their system is &lt;em>the&lt;/em> system, backed by years of refinement and countless success stories.&lt;/p>
&lt;p>I&amp;rsquo;ve tried them all. Every acronym, every methodology, every perfectly structured folder hierarchy. And here&amp;rsquo;s what I&amp;rsquo;ve realised — not in a lightbulb moment, but in a proper &amp;ldquo;what the hell have I been doing&amp;rdquo; revelation: I&amp;rsquo;ve been building the cart before I&amp;rsquo;ve even found the horse.&lt;/p>
&lt;h2 id="the-setup-trap">The Setup Trap&lt;/h2>
&lt;p>Time and again, I&amp;rsquo;d discover a new system and immediately set about constructing the perfect framework. Folders meticulously organised, tags carefully planned, templates crafted with surgical precision. I&amp;rsquo;d spend hours, sometimes days, building this beautiful, empty structure. Then I&amp;rsquo;d sit down to actually &lt;em>use&lt;/em> it and find myself paralysed by my own over-engineering.&lt;/p>
&lt;p>Where does this thought go? Which folder? What tags? Does this count as a project or an area? Should I create a new template for this type of note? The system I&amp;rsquo;d built to enhance my thinking had become a barrier to it.&lt;/p>
&lt;h2 id="what-actually-works">What Actually Works&lt;/h2>
&lt;p>Right now, I&amp;rsquo;m using Obsidian. But this time, I swear it&amp;rsquo;s different. I&amp;rsquo;ve resisted the urge to turn it into a productivity monument. No elaborate folder structures based on someone else&amp;rsquo;s methodology. No collection of 100 plugins to make it look &amp;ldquo;professional.&amp;rdquo; Just a simple folder called Notes, with basic subfolders that make intuitive sense to me. If I had to guess which folder contains a specific note based purely on its title, I&amp;rsquo;d probably be right. That&amp;rsquo;s my only organisational principle.&lt;/p>
&lt;p>I&amp;rsquo;ve allowed myself exactly three plugins beyond Obsidian&amp;rsquo;s defaults: Mononote, which keeps things tidy by limiting one tab per note; Rollover Daily Todos, which simply moves unfinished tasks to tomorrow&amp;rsquo;s daily note; and a custom &lt;a href="https://github.com/dannymcc/Granola-to-Obsidian">Granola Sync plugin&lt;/a> I developed to import AI-generated meeting notes. That&amp;rsquo;s it. No baroque plugin ecosystem, no elaborate theming, no productivity theatre.&lt;/p>
&lt;p>But here&amp;rsquo;s what I&amp;rsquo;ve learned: I still need paper. Not for the romantic notion of analogue permanence, but for something more fundamental. There&amp;rsquo;s a cognitive switch that flips when I put pen to paper while working through a problem. It doesn&amp;rsquo;t matter if I bin the paper afterwards or never reference it again. The act of writing — the physical motion, the slight resistance of ink on paper — commits not just the words to memory, but the entire context. The conversation, the feeling, the broader picture.&lt;/p>
&lt;p>I can&amp;rsquo;t explain the neurology behind this, and I don&amp;rsquo;t know if it&amp;rsquo;s universal or just my peculiar wiring. But it works. &lt;a href="https://youtu.be/tDmjz6HB-yw?si=tW89e4zkJ5RlYQOe">Sam Altman, CEO of OpenAI, has spoken about&lt;/a> how a simple notepad remains his go-to tool despite having access to the world&amp;rsquo;s most advanced AI. If that&amp;rsquo;s not an endorsement for keeping things simple, I don&amp;rsquo;t know what is.&lt;/p>
&lt;h2 id="the-real-problem">The Real Problem&lt;/h2>
&lt;p>The productivity industrial complex wants us to believe that the right system will transform us into efficiency machines. That the perfect app, properly configured, will unlock some higher version of ourselves. But after years of chasing this dragon, I&amp;rsquo;ve come to suspect that the problem was never the tools.&lt;/p>
&lt;p>The problem is that we&amp;rsquo;re treating systems as solutions rather than supports. We&amp;rsquo;re looking for external structures to impose order on internal chaos, when what we really need is to develop better thinking habits. No folder structure, however elegant, can compensate for unclear thinking. No tagging system can organise thoughts that haven&amp;rsquo;t been properly formed.&lt;/p>
&lt;p>The &lt;a href="https://www.reddit.com/r/PKMS/">Reddit Personal Knowledge Management System community&lt;/a> is full of people sharing elaborate setups, complex workflows, and sophisticated integrations. It&amp;rsquo;s fascinating to browse, but also revealing. Much of the discussion centres around the systems themselves rather than the knowledge they&amp;rsquo;re meant to manage. We&amp;rsquo;ve become obsessed with the scaffolding while forgetting about the building.&lt;/p>
&lt;h2 id="starting-simple">Starting Simple&lt;/h2>
&lt;p>My current approach is almost boring in its simplicity. I write notes when I have something worth recording. I put them in folders that feel natural. I don&amp;rsquo;t worry about perfect categorisation or comprehensive tagging. If I need to find something later, Obsidian&amp;rsquo;s search is good enough. If I can&amp;rsquo;t find it, perhaps it wasn&amp;rsquo;t worth keeping anyway.&lt;/p>
&lt;p>This isn&amp;rsquo;t a manifesto for digital minimalism or a rejection of productivity tools. It&amp;rsquo;s more like a truce. I&amp;rsquo;ve stopped looking for the perfect system because I&amp;rsquo;ve realised it doesn&amp;rsquo;t exist. Instead, I&amp;rsquo;m focusing on developing better habits: writing regularly, thinking clearly, and actually finishing things rather than just organising them.&lt;/p>
&lt;p>The chase for productivity perfection is seductive because it feels like progress. Researching new systems, watching tutorial videos, setting up elaborate workflows — it all feels productive. But it&amp;rsquo;s productivity about productivity, not actual work. It&amp;rsquo;s digital procrastination dressed up as self-improvement.&lt;/p>
&lt;p>Maybe the perfect productivity setup is the one you stop tweaking. The one that gets out of your way and lets you focus on what actually matters: having thoughts worth capturing and doing work worth sharing. Everything else is just infrastructure — necessary, but not noteworthy.&lt;/p>
&lt;p>The real productivity hack might be admitting that no system can save us from ourselves. But a simple one might just get out of the way long enough for us to get something done.&lt;/p></description></item><item><title>2025 Privacy Reboot: Six Month Check-In</title><link>https://blog.dmcc.io/journal/privacy_six_months_checkin/</link><pubDate>Tue, 10 Jun 2025 14:30:00 +0100</pubDate><guid>https://blog.dmcc.io/journal/privacy_six_months_checkin/#2025-12-31</guid><description>&lt;p>Six months ago, I wrote about &lt;a href="https://blog.dmcc.io/journal/2025_my_privacy_reboot/">my privacy reboot&lt;/a> — a gradual shift toward tools that take both privacy and security seriously. It was never about perfection or digital purity, but about intentionality. About understanding which tools serve me, rather than the other way around.&lt;/p>
&lt;p>Here&amp;rsquo;s how it&amp;rsquo;s actually gone.&lt;/p>
&lt;h2 id="the-wins">The Wins&lt;/h2>
&lt;p>&lt;strong>Ente&lt;/strong> continues to impress. The family photo migration is complete, and the service has been rock solid. The facial recognition quirks I mentioned on Android have largely sorted themselves out, and the peace of mind knowing our family memories aren&amp;rsquo;t feeding Google&amp;rsquo;s advertising machine feels worth the subscription cost.&lt;/p>
&lt;p>&lt;strong>Signal&lt;/strong> has been the biggest success story. I&amp;rsquo;m now at about 95% Signal for personal messaging — a number I genuinely didn&amp;rsquo;t think was achievable. The automated WhatsApp replies did their quiet work, and most people made the jump without much fuss. There&amp;rsquo;s still a handful of contacts who simply can&amp;rsquo;t or won&amp;rsquo;t switch, but that&amp;rsquo;s reality, not failure.&lt;/p>
&lt;p>&lt;strong>Migadu&lt;/strong> has the entire family migrated and running smoothly. Moving away from Gmail&amp;rsquo;s tentacles felt daunting, but the transition was surprisingly painless thanks to &lt;code>imapsync&lt;/code>. We own our email again, which feels both quaint and radical in 2025.&lt;/p>
&lt;p>&lt;strong>Mullvad VPN&lt;/strong> remains my travel companion, now with multi-hop enabled for that extra layer of paranoia that may or may not be justified. The fact that I can pay for it anonymously still feels like digital rebellion.&lt;/p>
&lt;h2 id="the-pragmatic-compromises">The Pragmatic Compromises&lt;/h2>
&lt;p>&lt;strong>DuckDuckGo Browser&lt;/strong> works brilliantly for personal use, but I&amp;rsquo;ve had to switch back to Chrome for work. The reality is that some enterprise extensions simply don&amp;rsquo;t exist in DuckDuckGo&amp;rsquo;s ecosystem, and I&amp;rsquo;m not going to handicap my productivity to make a privacy point that only I care about. Work browser for work things, private browser for everything else.&lt;/p>
&lt;p>&lt;strong>Standard Notes&lt;/strong> didn&amp;rsquo;t stick. Despite liking the interface and the privacy-first approach, I found myself gravitating back to Obsidian. The tipping point was building a &lt;a href="https://github.com/dannymcc/Granola-to-Obsidian">Granola-to-Obsidian plugin&lt;/a> that automatically imports my work meeting notes. When your knowledge management system updates itself, convenience wins — even in a privacy-focused setup.&lt;/p>
&lt;h2 id="the-learning-curve">The Learning Curve&lt;/h2>
&lt;p>&lt;strong>Bitwarden&lt;/strong> for 2FA has been excellent, and I&amp;rsquo;ve now gone full circle by self-hosting my own Vaultwarden instance. Access is limited to my Tailscale network for that belt-and-braces security approach. There&amp;rsquo;s something satisfying about controlling your own authentication infrastructure, even if it requires more weekend tinkering than most people would tolerate.&lt;/p>
&lt;p>The email situation has one minor wrinkle: I&amp;rsquo;ve noticed periodic latency spikes with Migadu&amp;rsquo;s SMTP servers. Nothing broken, just the occasional sluggish send. It&amp;rsquo;s the kind of small friction that reminds you why Gmail&amp;rsquo;s infrastructure is so seductive. But it&amp;rsquo;s manageable, and the trade-off still feels worth it.&lt;/p>
&lt;h2 id="the-social-reality">The Social Reality&lt;/h2>
&lt;p>Perhaps the most interesting observation has been about human behaviour. Some contacts would rather message me on Instagram than install Signal — a perfect illustration of how convenience trumps privacy in most people&amp;rsquo;s mental calculus. This isn&amp;rsquo;t a judgment; it&amp;rsquo;s just the reality of how digital habits form and persist.&lt;/p>
&lt;p>The success of the automated WhatsApp responses proved something important: people will adapt to your digital boundaries if you make the friction low enough. Most friends didn&amp;rsquo;t mind installing Signal once the path was clear and frictionless.&lt;/p>
&lt;h2 id="what-ive-learned">What I&amp;rsquo;ve Learned&lt;/h2>
&lt;p>Six months in, this privacy reboot feels less like a radical departure and more like a gentle course correction. The tools I use now are more intentional, more aligned with my values, but they&amp;rsquo;re not perfect. They&amp;rsquo;re just better.&lt;/p>
&lt;p>The biggest insight? Privacy-focused doesn&amp;rsquo;t have to mean productivity-hostile. Most of these changes improved my digital life in ways that had nothing to do with privacy — better focus, fewer distractions, more control over my own data. The privacy benefits were almost a bonus.&lt;/p>
&lt;p>It&amp;rsquo;s also clarified where I&amp;rsquo;m willing to compromise. Work tools live in a different ecosystem than personal ones, and that&amp;rsquo;s fine. Perfect is the enemy of good, and good is far better than surveillance-as-default.&lt;/p>
&lt;p>This might all look different in another six months. Maybe I&amp;rsquo;ll discover new tools, encounter new constraints, or decide that some trade-offs aren&amp;rsquo;t worth it. But right now, this feels sustainable. It feels intentional. And in a world where digital convenience often comes with hidden costs, that intentionality might be the most important thing of all.&lt;/p>
&lt;p>The question isn&amp;rsquo;t whether you can achieve perfect privacy — you can&amp;rsquo;t, and probably shouldn&amp;rsquo;t try. The question is whether you can build a digital life that serves your values while still functioning in the world as it actually exists. Six months in, I think the answer is yes.&lt;/p></description></item><item><title>Focus</title><link>https://blog.dmcc.io/journal/focus/</link><pubDate>Thu, 08 May 2025 14:06:00 +0100</pubDate><guid>https://blog.dmcc.io/journal/focus/#2025-12-31</guid><description>&lt;p>We&amp;rsquo;ve all seen them: those productivity YouTubers with perfectly lit home offices explaining how they maintain &amp;ldquo;deep work&amp;rdquo; for 12+ hours a day. They sit there, looking impossibly serene, selling us a vision of superhuman concentration that I&amp;rsquo;ve come to believe is complete nonsense.&lt;/p>
&lt;p>I used to buy into this. I&amp;rsquo;d feel like a failure when my brain checked out after three solid hours of work. I&amp;rsquo;d push myself to match these claimed productivity marathons, only to end up exhausted and wondering what was wrong with me.&lt;/p>
&lt;p>Here&amp;rsquo;s what I&amp;rsquo;ve figured out through trial and error: our brains just don&amp;rsquo;t work that way. At least mine doesn&amp;rsquo;t. I&amp;rsquo;ve found I can do maybe 4-5 hours of genuine deep focus daily, broken into chunks. Beyond that, I&amp;rsquo;m still mentally effective, but in different ways – more suited to collaboration, communication, and less intensive tasks rather than deep work.&lt;/p>
&lt;p>This insight has changed how I structure my day. Those morning hours of deep focus are sacred – they&amp;rsquo;re when I tackle the most complex problems. This aligns with what Jeff Bezos famously calls his &amp;ldquo;high IQ meeting&amp;rdquo; approach. Bezos schedules his most mentally demanding meetings at 10 a.m., noting that by late afternoon, he&amp;rsquo;s simply not at his cognitive best for challenging problems. As he puts it, &amp;ldquo;By 5 p.m., I&amp;rsquo;m like, &amp;lsquo;I can&amp;rsquo;t think about that today. Let&amp;rsquo;s try this again tomorrow at 10 a.m.'&amp;rdquo;&lt;/p>
&lt;p>Those few hours of real focus, when used well, are worth far more than double the time spent in a semi-distracted state. I&amp;rsquo;ve noticed this in my teammates too – the most valuable contributors aren&amp;rsquo;t the ones logging 12-hour days. They&amp;rsquo;re the ones who bring their full attention to the right problems for shorter, more intense periods.&lt;/p>
&lt;h2 id="the-hardware-trap">The Hardware Trap&lt;/h2>
&lt;p>I&amp;rsquo;ve fallen for every productivity hardware upgrade imaginable. Ultrawide curved monitors. Triple-screen setups. Higher resolutions to fit more windows. Each promising to transform me into some multitasking wizard.&lt;/p>
&lt;p>What I actually bought wasn&amp;rsquo;t productivity – it was distraction disguised as efficiency. Each extra screen became another venue for notifications, another space to fill with Slack, email, and other attention-fracturing tools.&lt;/p>
&lt;p>In 2025, I&amp;rsquo;ve gone the opposite direction: smaller. My main work setup now has just enough screen space for one or two applications. Not because I can&amp;rsquo;t afford bigger – but because the constraint forces me to choose what deserves my attention right now.&lt;/p>
&lt;p>This limitation has oddly become my best focus tool. When I can only comfortably see one document or conversation at a time, I&amp;rsquo;m present with it. The hardware limitation became a feature, not a bug.&lt;/p>
&lt;h2 id="the-ai-partnership">The AI Partnership&lt;/h2>
&lt;p>The other half of my focus shift has been outsourcing the busywork. Where I once tried to juggle multiple applications, I now delegate aggressively to AI tools that handle the administrative stuff that used to scatter my attention.&lt;/p>
&lt;p>My email gets filtered before I see it. Meeting notes get summarised. Research gets compiled. First drafts get generated. This isn&amp;rsquo;t about replacing thinking – it&amp;rsquo;s about eliminating the low-value tasks that constantly pulled me out of flow.&lt;/p>
&lt;p>The result? When I focus, I actually focus. I&amp;rsquo;m not half-writing an email while half-listening to a call. I&amp;rsquo;m fully engaged with the complex work that needs my human judgment.&lt;/p>
&lt;p>This approach – smaller screens plus AI delegation – has completely changed what focus means for me. It&amp;rsquo;s no longer about willpower or duration. It&amp;rsquo;s about creating conditions where concentration happens naturally.&lt;/p>
&lt;p>I&amp;rsquo;m not perfect at this. I still get pulled into the multitasking trap. And let&amp;rsquo;s be honest – many company metrics still force us into equating hours with productivity. These legacy measurements are changing rapidly, but they still influence how we work and how we&amp;rsquo;re evaluated.&lt;/p>
&lt;p>Despite this, I&amp;rsquo;m convinced that genuine focus isn&amp;rsquo;t an endurance sport. It&amp;rsquo;s not about who can stare at code the longest without blinking.&lt;/p>
&lt;p>It&amp;rsquo;s about creating space – physical, digital, and mental – where your best thinking can happen. And sometimes that means working less, but working better. Ultimately, it&amp;rsquo;s about &lt;a href="https://blog.dmcc.io/journal/balance/">recognising what truly deserves our attention&lt;/a> and what can safely be ignored.&lt;/p>
&lt;h2 id="the-privacy-paradox">The Privacy Paradox&lt;/h2>
&lt;p>There&amp;rsquo;s an irony to my current digital life that I&amp;rsquo;ve been thinking about lately. In my &lt;a href="https://blog.dmcc.io/journal/2025_my_privacy_reboot/">last post about privacy&lt;/a>, I wrote about my journey to reclaim control over my data – moving away from surveillance-as-a-service platforms to more secure, private alternatives.&lt;/p>
&lt;p>Yet here I am, advocating for deeper integration with AI tools that, by their very nature, require access to virtually everything I think and do. I&amp;rsquo;m simultaneously pulling back from data-hungry ecosystems while whispering my every thought to at least one AI throughout my day.&lt;/p>
&lt;p>It&amp;rsquo;s a strange contradiction. On one hand, I&amp;rsquo;m meticulously auditing which services can access my photos, notes, and location. On the other, I&amp;rsquo;m willingly feeding my draft emails, meeting notes, and half-formed ideas into systems far more pervasive than anything that came before.&lt;/p>
&lt;p>Perhaps this tension – between protecting our digital boundaries while embracing tools that blur them – is the defining challenge of 2025. We&amp;rsquo;re figuring out which information belongs where, which systems deserve our trust, and where to draw the lines.&lt;/p></description></item><item><title>Trust</title><link>https://blog.dmcc.io/journal/trust/</link><pubDate>Sun, 04 May 2025 19:24:00 +0100</pubDate><guid>https://blog.dmcc.io/journal/trust/#2025-12-31</guid><description>&lt;p>We like to believe we&amp;rsquo;re in control. That privacy is something we can protect if we just check the right boxes, read the fine print, toggle the right settings. But that belief is crumbling. In 2025, privacy isn&amp;rsquo;t something we manage — it&amp;rsquo;s something we quietly surrender, one tap, click, and scroll at a time.&lt;/p>
&lt;p>Lately, I&amp;rsquo;ve been thinking about how much I rely on Google. Not in an abstract way, but in a daily, tangible, everything-I-do-is-somehow-Google-enabled kind of way. Google Photos, for instance, is frictionless. It uploads every picture, recognises every face, remembers the places I&amp;rsquo;ve been, and lets me search through a decade of memories in milliseconds. It&amp;rsquo;s borderline magical. But magic, in the digital world, usually means surveillance. It means giving up control. It means letting a machine learn the people in your life, the patterns of your past, the corners of your history — all in exchange for convenience.&lt;/p>
&lt;p>This isn&amp;rsquo;t just about Google, though. It&amp;rsquo;s not even about Big Tech specifically. It&amp;rsquo;s about the fundamental reality that the modern internet has made privacy optional — and expensive. If you want discounted groceries, you need a loyalty card. If you want smart recommendations, you have to share your behaviour. If you want to board a plane, get a mortgage, download an app, or buy anything online, you&amp;rsquo;re handing over data whether you like it or not. And if you say no? That&amp;rsquo;s fine — but you&amp;rsquo;ll be paying in time, money, or friction.&lt;/p>
&lt;h2 id="privacy-poverty">Privacy Poverty&lt;/h2>
&lt;p>This phenomenon has a name: privacy poverty. The idea that privacy is no longer a right, but a luxury. That those with disposable income can buy out of tracking — pay for encrypted services, private browsers, premium accounts — while everyone else gets a discount in exchange for giving up their digital lives. And that gap is growing. Privacy, like healthcare or education, is becoming another line item on the inequality ledger.&lt;/p>
&lt;p>Even those of us who consider ourselves privacy-conscious eventually give in. I&amp;rsquo;ve been off Facebook for years. I block trackers. I read privacy policies (well, some of them). But my phone still knows everything I do. My bank app still notifies me of every transaction. My travel data still flows through biometric gates. My online purchases still generate behavioural profiles. Surveillance is so deeply embedded in our infrastructure that avoiding it requires opting out of society altogether.&lt;/p>
&lt;h2 id="the-ai-amplifier">The AI Amplifier&lt;/h2>
&lt;p>And then there&amp;rsquo;s AI. AI doesn&amp;rsquo;t just amplify the problem — it warps it. This new generation of systems doesn&amp;rsquo;t just store or index your data, it interprets it. It sees patterns, infers emotions, anticipates behaviour. It turns data into insight, insight into prediction, prediction into influence. The more we feed it, the smarter it gets — and the harder it becomes to remember where convenience ends and control begins. AI accelerates everything: our productivity, our communication, our decision-making — and yes, our exposure.&lt;/p>
&lt;p>What worries me isn&amp;rsquo;t that people are willingly trading privacy for convenience. It&amp;rsquo;s that, more and more, there is no real trade to make. The so-called &amp;ldquo;threshold&amp;rdquo; between trust and convenience isn&amp;rsquo;t a line we cross — it&amp;rsquo;s a condition we live in. Privacy isn&amp;rsquo;t lost in a single moment. It&amp;rsquo;s eroded through a thousand little gestures, most of which don&amp;rsquo;t feel like choices at all.&lt;/p>
&lt;h2 id="what-now">What Now?&lt;/h2>
&lt;p>So what now? Can we reclaim privacy in an ecosystem that treats it as a premium feature? Can regulation catch up fast enough to offer meaningful protection? Or have we already accepted a future in which data collection is the price of participation? These aren&amp;rsquo;t rhetorical questions. They&amp;rsquo;re the stakes of the world we&amp;rsquo;re building — and living in.&lt;/p>
&lt;p>We often say privacy is a basic right. But we rarely act like it. In practice, it&amp;rsquo;s more like a silent agreement: give us everything, and we&amp;rsquo;ll make life easier. Reject that deal, and you&amp;rsquo;re on your own. The truth is, convenience won. Quietly, efficiently, and thoroughly. And if we don&amp;rsquo;t start rethinking the terms, we won&amp;rsquo;t be asking where the line is — we&amp;rsquo;ll be asking if there was ever one at all. For my own attempt at &lt;a href="https://blog.dmcc.io/journal/2025_my_privacy_reboot/">rethinking these terms in practice&lt;/a>, I&amp;rsquo;ve been gradually shifting towards tools that respect both privacy and usability.&lt;/p></description></item><item><title>Balance</title><link>https://blog.dmcc.io/journal/balance/</link><pubDate>Fri, 02 May 2025 08:54:00 +0100</pubDate><guid>https://blog.dmcc.io/journal/balance/#2025-12-31</guid><description>&lt;p>Tucked away in a parenting book I read nearly two decades ago — title and author long lost to time — was a metaphor that lodged itself in my brain and never left.&lt;/p>
&lt;p>&lt;em>&amp;ldquo;Life is a balance, or rather, a juggle of balls. Some are glass. Some are plastic.&amp;rdquo;&lt;/em>&lt;/p>
&lt;p>The idea is simple but enduring: drop a plastic ball, and it bounces. Drop a glass one, and it shatters. The trick — the real tightrope act — is knowing which is which.&lt;/p>
&lt;p>Here&amp;rsquo;s where it gets interesting. The author warned against falling into the trap of assuming that all the &amp;ldquo;family&amp;rdquo; balls are fragile crystal and all the &amp;ldquo;career&amp;rdquo; balls are rubber. Sometimes, the opposite is true. A moment missed with your kids might be one that easily bounces back. But a career opportunity? It might not.&lt;/p>
&lt;p>Other times, of course, it flips. That bedtime story or awkward teenage conversation might be the glass ball you didn&amp;rsquo;t realise you were holding. Meanwhile, some of those high-stakes work pressures turn out to be surprisingly… elastic. This same principle applies to &lt;a href="https://blog.dmcc.io/journal/focus/">how we choose to spend our mental energy&lt;/a> throughout each day.&lt;/p>
&lt;p>It&amp;rsquo;s a perspective that&amp;rsquo;s stayed with me — not because it offers perfect clarity, but because it grants permission. Permission to drop a few balls. Permission to be wrong about which ones matter most. And above all, permission to course-correct when you&amp;rsquo;ve mistakenly let a glass one slip through your fingers.&lt;/p>
&lt;p>I&amp;rsquo;ve dropped a few glass balls over the years. I&amp;rsquo;d be lying if I said otherwise. But I&amp;rsquo;ve also learned to recognise the ones that bounce. That small wisdom has made all the difference.&lt;/p></description></item><item><title>100 Days of Writing</title><link>https://blog.dmcc.io/journal/100-days-of-writing/</link><pubDate>Wed, 30 Apr 2025 19:12:00 +0100</pubDate><guid>https://blog.dmcc.io/journal/100-days-of-writing/#2025-12-31</guid><description>&lt;p>Is there some magic in writing every day for 100 days? Maybe. Maybe not.&lt;/p>
&lt;p>But that&amp;rsquo;s not quite the right question. A better one might be: &lt;em>What would I hope to get out of writing every day for 100 days?&lt;/em>&lt;/p>
&lt;p>For starters, I&amp;rsquo;d get better at clarity — saying what I mean without losing the thread halfway through. I&amp;rsquo;d build speed: less dithering, more straight-from-brain-to-fingers. And maybe, just maybe, I&amp;rsquo;d find a rhythm. Writing not as a task, but as a mindfulness habit. A check-in. A creative exhale. Much like &lt;a href="https://blog.dmcc.io/journal/focus/">developing better focus in other areas&lt;/a>, it&amp;rsquo;s about creating intentional space for what matters.&lt;/p>
&lt;p>It&amp;rsquo;s hard to see the downside. There are no rules here, no format police. Some days it might be a deep-dive into something technical. Others, a meandering thought about life, or a single paragraph admitting I&amp;rsquo;m not really feeling it today. It all counts.&lt;/p>
&lt;p>So here&amp;rsquo;s Day One. I&amp;rsquo;ll read this again in 100 days and see just how wide-eyed I was starting out.&lt;/p>
&lt;p>Let&amp;rsquo;s find out. 👋&lt;/p></description></item><item><title>2025: My Privacy Reboot</title><link>https://blog.dmcc.io/journal/2025_my_privacy_reboot/</link><pubDate>Wed, 30 Apr 2025 08:01:00 +0100</pubDate><guid>https://blog.dmcc.io/journal/2025_my_privacy_reboot/#2025-12-31</guid><description>&lt;div class="post-callout">
&lt;div class="post-callout-header">
Six Month Update
&lt;/div>
&lt;p>Curious how this privacy reboot actually worked out? I wrote a detailed follow-up after six months of living with these changes — covering what worked, what didn't, and the pragmatic compromises along the way.&lt;/p>
&lt;a href="https://blog.dmcc.io/journal/privacy_six_months_checkin/" class="post-callout-link">Read the Six Month Check-In →&lt;/a>
&lt;/div>
&lt;p>The line between &lt;em>privacy&lt;/em> and &lt;em>security&lt;/em> isn&amp;rsquo;t always clear — and in tech, it&amp;rsquo;s often treated like they&amp;rsquo;re the same thing. But they&amp;rsquo;re not. Even the broader question of &lt;a href="https://blog.dmcc.io/journal/trust/">when to trust digital services&lt;/a> with our data has become increasingly complex.&lt;/p>
&lt;p>Security is about control — making sure only authorised people (read: me) can access my data. Privacy is different. It&amp;rsquo;s about &lt;em>intent&lt;/em>. It dictates how that data gets used, and whether the companies that hold it stick to the promises they made.&lt;/p>
&lt;p>Over the past few months, I&amp;rsquo;ve been re-evaluating the tools I use daily, nudging my setup toward services that take both privacy and security seriously. This isn&amp;rsquo;t a call to ditch Big Tech or a manifesto for full decentralisation. It&amp;rsquo;s more like a timestamp — a snapshot of the tools I trust &lt;em>right now&lt;/em>, and a few notes on why.&lt;/p>
&lt;h2 id="duckduckgo-browser">DuckDuckGo Browser&lt;/h2>
&lt;p>My default browser — but with all the built-in VPN and AI features turned off. I&amp;rsquo;m mostly using it for its cookie blocking, which, if I&amp;rsquo;m honest, is less about privacy and more about not wanting to tap &lt;em>&amp;ldquo;accept all cookies&amp;rdquo;&lt;/em> for the thousandth time.&lt;/p>
&lt;h2 id="standard-notes">Standard Notes&lt;/h2>
&lt;p>My go-to for note-taking — and, more recently, blogging. I&amp;rsquo;ve started using the Listed.to publishing platform they offer, which simplifies getting thoughts online. I&amp;rsquo;m not completely sold on it yet, but there&amp;rsquo;s something refreshing about hitting &amp;ldquo;publish&amp;rdquo; without fiddling with formatting or plugins.&lt;/p>
&lt;h2 id="ente-for-photo-storage">Ente for Photo Storage&lt;/h2>
&lt;p>I&amp;rsquo;ve started trialling Ente as an encrypted alternative to Google Photos, signing up for the 200GB family plan to test it with my wife. So far, it&amp;rsquo;s been impressive. The desktop importer handled 13,000+ images from Google Takeout without breaking a sweat — just drop in the .zip files and let it churn.&lt;/p>
&lt;p>One quirk: machine learning features like facial recognition seem better tuned for iOS than Android (I&amp;rsquo;m on a Pixel 9 Pro). I&amp;rsquo;ve noticed the occasional metadata hiccup, but I&amp;rsquo;m holding off judgement until the full import is done.&lt;/p>
&lt;h2 id="signal-as-a-whatsapp-alternative">Signal as a WhatsApp Alternative&lt;/h2>
&lt;p>I&amp;rsquo;ve shifted all personal messaging to Signal, and put WhatsApp on digital life support. To ease the transition, I&amp;rsquo;ve set up an automated reply using &lt;strong>Whatauto&lt;/strong> for Android. It runs silently in the background, while I keep read receipts off and notifications disabled. Most of my regular contacts have now made the jump to Signal — a direct result of the automation doing its quiet work.&lt;/p>
&lt;p>Here&amp;rsquo;s the message they get:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">Auto Reply
Hi, thanks for your message. I&amp;#39;m no longer actively checking WhatsApp as I&amp;#39;ve moved to Signal for messaging.
https://signal.org/install
&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="migadu--thunderbird-for-email">Migadu + Thunderbird for Email&lt;/h2>
&lt;p>I&amp;rsquo;ve moved my family&amp;rsquo;s email accounts from Gmail (and briefly Fastmail) to Migadu, using &lt;strong>Thunderbird&lt;/strong> across macOS and Android. It&amp;rsquo;s been rock solid. Migration was painless thanks to imapsync, which — as always — just works.&lt;/p>
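&lt;p>For reference, a typical imapsync invocation looks something like the sketch below. The hostnames, addresses, and environment variables are illustrative placeholders, not my actual setup; &lt;code>--dry&lt;/code> makes imapsync report what it would copy without transferring anything.&lt;/p>

```shell
# Sketch: copy one mailbox from Gmail to Migadu over SSL.
# Hosts, users, and the password variables are placeholders.
imapsync \
  --host1 imap.gmail.com  --user1 me@gmail.com   --password1 "$SRC_PASS" --ssl1 \
  --host2 imap.migadu.com --user2 me@example.com --password2 "$DST_PASS" --ssl2 \
  --dry   # remove --dry to run the real migration
```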
&lt;h2 id="mullvad-vpn">Mullvad VPN&lt;/h2>
&lt;p>I don&amp;rsquo;t use Mullvad every day, but it&amp;rsquo;s my default when travelling. Either directly on-device or routed through my Beryl travel router. I like its multi-hop and anti-AI-analysis features, and the fact it doesn&amp;rsquo;t require an email address to sign up still feels radical.&lt;/p>
&lt;h2 id="bitwarden-for-2fa">Bitwarden for 2FA&lt;/h2>
&lt;p>I recently replaced Authy with Bitwarden for managing two-factor authentication. The breaking point was Authy&amp;rsquo;s decision to shut down API access — which killed off my favourite use case: pulling codes directly into Raycast for lightning-fast logins. Bitwarden doesn&amp;rsquo;t offer quite the same UX flow, but it&amp;rsquo;s transparent, trusted, and functional.&lt;/p>
&lt;p>These changes aren&amp;rsquo;t about perfection. They&amp;rsquo;re about &lt;em>intentionality&lt;/em> — understanding the tools I rely on and making sure they serve me, not the other way around. This mindset extends beyond privacy tools into &lt;a href="https://blog.dmcc.io/journal/focus/">how I approach focus and productivity&lt;/a> as well.&lt;/p>
&lt;p>This might all look different in six months. Maybe I&amp;rsquo;ll shift back. Maybe I&amp;rsquo;ll dig deeper. But right now, this is the version of digital minimalism that makes the most sense for how I live and work. It&amp;rsquo;s all about &lt;a href="https://blog.dmcc.io/journal/balance/">recognising what deserves attention&lt;/a> and what doesn&amp;rsquo;t.&lt;/p>
&lt;div class="post-callout">
&lt;div class="post-callout-header">
Six Month Update
&lt;/div>
&lt;p>Curious how this privacy reboot actually worked out? I wrote a detailed follow-up after six months of living with these changes — covering what worked, what didn't, and the pragmatic compromises along the way.&lt;/p>
&lt;a href="https://blog.dmcc.io/journal/privacy_six_months_checkin/" class="post-callout-link">Read the Six Month Check-In →&lt;/a>
&lt;/div></description></item><item><title>Replacing Google Photos with Immich</title><link>https://blog.dmcc.io/journal/2024-immich-and-docker/</link><pubDate>Sat, 06 Apr 2024 21:13:34 +0100</pubDate><guid>https://blog.dmcc.io/journal/2024-immich-and-docker/#2025-12-28</guid><description>&lt;p>I have, for a long time, been looking for a better alternative to Google Photos. Although Google Photos does exactly what I want, and isn&amp;rsquo;t that expensive, I do often consider the fact that all of my photos are in Google&amp;rsquo;s hands. I did move to Synology Photos a few years ago. The move itself was straightforward enough, but the user experience leaves quite a lot to be desired.&lt;/p>
&lt;p>Anyone who has spent more than 5 minutes on any of Reddit&amp;rsquo;s self-hosting and open-source subreddits will have heard of Immich before. It&amp;rsquo;s open source, somehow done entirely by a sole developer, and has an impressive iOS app that matches Google Photos in feature set.&lt;/p>
&lt;p>So, the first step was to take Immich for a spin and see how it performs. I followed the Immich docker compose &lt;a href="https://immich.app/docs/install/docker-compose/">documentation steps&lt;/a> and it was basically effortless.&lt;/p>
&lt;h3 id="google-takeout">Google Takeout&lt;/h3>
&lt;p>Of course, Immich is only useful if it has my photos in it. Those photos are in Google&amp;rsquo;s hands at the moment. Thankfully, Google provide a very easy way to download all of my photos using &lt;a href="https://support.google.com/accounts/answer/3024190?hl=en">Google Takeout&lt;/a>.&lt;/p>
&lt;p>Once I got the Google Takeout email from Google and downloaded them all, I used the &lt;a href="https://github.com/TheLastGimbus/GooglePhotosTakeoutHelper/tree/master">GooglePhotosTakeoutHelper&lt;/a> library to get the metadata from the Google JSON file and merge them into the image files themselves.&lt;/p>
&lt;p>I then simply took the ALL_PHOTOS directory and uploaded everything via the Immich frontend. There are plenty of options available for the upload step, but I was happy with the simplest method of using the frontend uploader.&lt;/p>
&lt;h3 id="performance">Performance&lt;/h3>
&lt;p>Immich doesn&amp;rsquo;t need much in the way of resources to run the day-to-day operations of the photo library. Where I started to hit performance issues was during the initial import. Immich has a number of &amp;lsquo;jobs&amp;rsquo; that are undertaken when it detects new images. One of these jobs is the &amp;lsquo;detect faces&amp;rsquo; job. This does exactly what you think it does. It uses the machine learning docker container to check if the image has any faces in it, and if so, marks them as faces and passes the information to the Immich microservices container, which then attempts to identify whose face it is.&lt;/p>
&lt;p>As with most machine learning processes, this one is resource intensive.&lt;/p>
&lt;p>Impressively, and equally helpfully, Immich has the ability to &lt;a href="https://immich.app/docs/guides/remote-machine-learning/">offload this machine learning process&lt;/a> to another hardware stack. In my case, it&amp;rsquo;s simply my Macbook Pro (M2).&lt;/p>
&lt;p>To get this working, I used the following docker-compose.yml file on my Macbook:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">#
# WARNING: Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
#
name: immich
services:
  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
      file: hwaccel.ml.yml
      service: cpu # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    ports:
      - &amp;#34;3003:3003&amp;#34;
volumes:
  model-cache:
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Then, after a simple &lt;code>docker compose up -d&lt;/code> I then had the machine learning container running on my laptop. All I had to do then, was change the Immich setting to use the remote container.&lt;/p>
&lt;p>&lt;img src="https://blog.dmcc.io/immich-remote-machine-learning.png" alt="Immich remote machine learning settings">&lt;/p>
&lt;p>The effect was immediate. My laptop was able to run through 5,000 images in around 30 minutes. For comparison, it took around 10 minutes per image on the original virtual machine (running in Proxmox).&lt;/p>
&lt;h3 id="summary">Summary&lt;/h3>
&lt;p>So far I&amp;rsquo;m pretty impressed with Immich. Let&amp;rsquo;s see how long I stick with it!&lt;/p></description></item><item><title>2024 macOS Dotfiles</title><link>https://blog.dmcc.io/journal/2024-macos-dotfiles/</link><pubDate>Wed, 27 Dec 2023 14:57:42 +0000</pubDate><guid>https://blog.dmcc.io/journal/2024-macos-dotfiles/#2025-12-28</guid><description>&lt;p>It is that time of year again when I decide to update my local computer configuration, as well as any remote Linux server(s) that I maintain. I really appreciate having a familiar prompt and alias setup whenever I log in to any of my servers/workstations.&lt;/p>
&lt;p>As per usual, I cannot remember which specific packages and plugins I use, so I am writing this post for future me to discover how I actually configured my environments.&lt;/p>
&lt;h2 id="oh-my-zsh">oh-my-zsh&lt;/h2>
&lt;p>ZSH is a must for me, after using it for many years. The install is a simple process, documented at &lt;a href="https://ohmyz.sh/">https://ohmyz.sh/&lt;/a>, but for reference, the following command will install oh-my-zsh:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">sh -c &amp;#34;$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)&amp;#34;
&lt;/code>&lt;/pre>&lt;/div>&lt;p>There are a few changes I then made to the &lt;code>.zshrc&lt;/code> file, which I&amp;rsquo;ve listed below. You will receive errors when using these changes until you complete the remaining steps in this post.&lt;/p>
&lt;p>Add or update the following lines of &lt;code>.zshrc&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">ZSH_THEME=&amp;#34;powerlevel10k/powerlevel10k&amp;#34;
&lt;/code>&lt;/pre>&lt;/div>&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">plugins=(git tmux)
&lt;/code>&lt;/pre>&lt;/div>&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback"># Zplug Section
source ~/.zplug/init.zsh
zplug &amp;#34;zsh-users/zsh-autosuggestions&amp;#34; # Auto suggest command completions as you type
zplug &amp;#34;zsh-users/zsh-syntax-highlighting&amp;#34;, from:github # Command syntax highlighting
zplug &amp;#34;RobertAudi/tsm&amp;#34; # tmux session manager. Handy short hand for tmux session management
zplug &amp;#34;bobsoppe/zsh-ssh-agent&amp;#34;, use:ssh-agent.zsh, from:github # Setup the ssh agent
if ! zplug check; then
  zplug install
fi
zplug load
&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="oh-my-tmux">oh-my-tmux&lt;/h2>
&lt;p>Another ready to go package is oh-my-tmux. Again, this is plug and play from &lt;a href="https://github.com/gpakosz/.tmux">https://github.com/gpakosz/.tmux&lt;/a>&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">$ cd
$ git clone https://github.com/gpakosz/.tmux.git
$ ln -s -f .tmux/.tmux.conf
$ cp .tmux/.tmux.conf.local .
&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="powerlevel10k">Powerlevel10k&lt;/h2>
&lt;p>Next up is Powerlevel10k which allows very easy prompt adjustments and customisation from &lt;a href="https://github.com/romkatv/powerlevel10k/tree/master">https://github.com/romkatv/powerlevel10k/tree/master&lt;/a>&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
&lt;/code>&lt;/pre>&lt;/div>&lt;p>I have found it very helpful to include &lt;code>context&lt;/code> to the prompt for remote servers and include the OS icon. Both of these are described well on the Powerlevel10k wiki (&lt;a href="https://github.com/romkatv/powerlevel10k/blob/master/README.md#how-do-i-add-username-andor-hostname-to-prompt)">https://github.com/romkatv/powerlevel10k/blob/master/README.md#how-do-i-add-username-andor-hostname-to-prompt)&lt;/a>.&lt;/p>
&lt;h2 id="zplug">ZPlug&lt;/h2>
&lt;p>Finally we have zplug, which makes ZSH plugin management easy: &lt;a href="https://github.com/zplug/zplug">https://github.com/zplug/zplug&lt;/a>. This, combined with the above &lt;code>.zshrc&lt;/code> adjustments, makes for a great prompt experience.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">curl -sL --proto-redir -all,https https://raw.githubusercontent.com/zplug/installer/master/installer.zsh | zsh
&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="figurine-optional">Figurine (optional)&lt;/h2>
&lt;p>When switching between servers regularly, I have recently found it very useful to see the name of the server in large type as soon as I log in. To assist with this I am using figurine: &lt;a href="https://github.com/arsham/figurine">https://github.com/arsham/figurine&lt;/a>&lt;/p>
&lt;p>I install figurine and then just add the following to the top of my &lt;code>.zshrc&lt;/code> file.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">echo &amp;#34;&amp;#34;
figurine -f &amp;#34;3d.flf&amp;#34; Server Name
echo &amp;#34;&amp;#34;
&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>Running a Powershell Script as an Elevated User</title><link>https://blog.dmcc.io/journal/running-powershell-script-as-elevated-user/</link><pubDate>Sun, 29 Jan 2023 22:29:22 +0000</pubDate><guid>https://blog.dmcc.io/journal/running-powershell-script-as-elevated-user/#2025-12-28</guid><description>&lt;p>When running a powershell script, I often find I need to run the script in an elevated prompt. The nature of my job is that often these scripts will be run by people that don&amp;rsquo;t really know what Powershell is.&lt;/p>
&lt;p>I have found it quite useful to first create a batch script that the user executes, which in turn calls the actual Powershell script as an elevated user.&lt;/p>
&lt;p>To keep this handy, I&amp;rsquo;m posting it here for future me.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">Powershell.exe -Command &lt;span style="color:#4070a0">&amp;#34;&amp;amp; {Start-Process Powershell.exe -ArgumentList &amp;#39;-ExecutionPolicy Bypass -File %~dp0actualpowershellscript.ps1&amp;#39; -Verb RunAs}&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>I then save the batch script as something obvious such as &lt;code>run-me-first.bat&lt;/code>.&lt;/p>
&lt;p>Simple, but it seems to work well!&lt;/p></description></item><item><title>Extending Unraid VM Storage</title><link>https://blog.dmcc.io/journal/extending-unraid-vm-storage/</link><pubDate>Sun, 29 Jan 2023 22:09:40 +0000</pubDate><guid>https://blog.dmcc.io/journal/extending-unraid-vm-storage/#2025-12-28</guid><description>&lt;p>More and more I find myself quickly spinning up a new Windows VM on my unraid server. It is always a &amp;lsquo;temporary&amp;rsquo; VM which, after setting it up exactly how I like, I invariably wish I&amp;rsquo;d given a much larger virtual disk. The standard VM disk size is 30G, and at setup time that always seems like it will be enough.&lt;/p>
&lt;p>Fast forward an hour or two and I really wish I had set something more realistic. The problem is, the vdisk is too small, and Windows configures the partitions in such a way that I can&amp;rsquo;t simply extend the partition (due to the recovery partition being the last partition in the sequence).&lt;/p>
&lt;p>As a reminder for my future self, this is actually quite simple to resolve.&lt;/p>
&lt;p>First, log in to the unraid server via SSH and issue the following command to add 50 GB to the vdisk:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">qemu-img resize /mnt/user/domains/Temporary-VM-Name/vdisk1.img +50G
&lt;/code>&lt;/pre>&lt;/div>&lt;p>This will magically add an additional 50 GB of capacity to the VM.&lt;/p>
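&lt;p>If you want to sanity-check the change, &lt;code>qemu-img info&lt;/code> reports the image&amp;rsquo;s virtual size. A quick sketch, using the same example path (run this while the VM is stopped):&lt;/p>

```shell
# Confirm the new virtual size after the resize (VM must be stopped).
qemu-img info /mnt/user/domains/Temporary-VM-Name/vdisk1.img
# The "virtual size" line should now show the extra 50G; inside the
# guest the new space appears as unallocated until the partition is extended.
```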
&lt;p>After restarting the VM via the unraid GUI, the second (and final!) step of the process is to expand the partition to use the newly added unallocated space.&lt;/p>
&lt;p>Usually, this could be done via the Disk Management console in Windows, however, as mentioned, this isn&amp;rsquo;t possible with the standard partition layout.&lt;/p>
&lt;p>Thankfully there is a free tool called &amp;lsquo;MiniTool Partition Wizard&amp;rsquo; that makes this final step a breeze. Simply open the application, right click on the C drive and click expand. Use the slider to set the full capacity of the unallocated space and then click Apply. It&amp;rsquo;s as simple as that!&lt;/p></description></item><item><title>Tmux exit current session, not Tmux itself</title><link>https://blog.dmcc.io/journal/tmux-exit-session-not-tmux/</link><pubDate>Tue, 11 Aug 2020 11:45:49 +0200</pubDate><guid>https://blog.dmcc.io/journal/tmux-exit-session-not-tmux/#2025-12-28</guid><description>&lt;p>When accessing remote servers that I am responsible for, I always initiate a tmux session along with the SSH session. This means I am always in a tmux session and will never forget to start one manually. There is something particularly frustrating about starting a process on a remote server only to realise that I forgot to start a tmux session and the process is going to take &amp;gt; 1 hour. I then have to ensure my SSH session remains open, which isn&amp;rsquo;t always easy if I&amp;rsquo;m moving around.&lt;/p>
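&lt;p>For context, the auto-start side of that workflow can be sketched roughly like this in the remote server&amp;rsquo;s &lt;code>.bashrc&lt;/code>. This is a sketch under some assumptions: bash or zsh on the remote server, and a session named &lt;code>main&lt;/code>.&lt;/p>

```shell
# Start tmux automatically for interactive SSH logins,
# unless we are already inside a tmux session.
case $- in *i*) _interactive=1 ;; *) _interactive=0 ;; esac

if [ "$_interactive" = 1 ] && command -v tmux >/dev/null 2>&1 \
   && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ]; then
  # -A: attach to "main" if it already exists, otherwise create it
  exec tmux new-session -A -s main
fi
```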
&lt;p>I recently wrote a short post on how to achieve &lt;a href="https://blog.dmcc.io/posts/starting-tmux-automagically-when-connecting-to-ssh/">a tmux session with every SSH session&lt;/a>, but the next problem comes when exiting the SSH session once I&amp;rsquo;m done - I don&amp;rsquo;t want to close the tmux session; I want to close the SSH session but leave the tmux session intact and running.&lt;/p>
&lt;p>The solution is incredibly simple.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">exit&lt;span style="color:#666">()&lt;/span> &lt;span style="color:#666">{&lt;/span>
&lt;span style="color:#007020;font-weight:bold">if&lt;/span> &lt;span style="color:#666">[[&lt;/span> -z &lt;span style="color:#bb60d5">$TMUX&lt;/span> &lt;span style="color:#666">]]&lt;/span>; &lt;span style="color:#007020;font-weight:bold">then&lt;/span>
&lt;span style="color:#007020">builtin&lt;/span> &lt;span style="color:#007020">exit&lt;/span>
&lt;span style="color:#007020;font-weight:bold">else&lt;/span>
tmux detach
&lt;span style="color:#007020;font-weight:bold">fi&lt;/span>
&lt;span style="color:#666">}&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The above function should be added to your &lt;code>.bashrc&lt;/code> file (or &lt;code>.zshrc&lt;/code>) and then you need to run &lt;code>source ~/.bashrc&lt;/code> for it to take effect. Remember that this function needs to be on the remote server, not your local workstation.&lt;/p>
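&lt;p>The branch logic is easy to sanity-check locally. Here&amp;rsquo;s a quick sketch using a hypothetical &lt;code>fake_exit&lt;/code> stand-in that echoes the action instead of running it:&lt;/p>

```shell
# Hypothetical stand-in for the real function: it prints which action would
# run, so the branching can be checked without being inside tmux.
fake_exit() {
  if [ -z "${TMUX}" ]; then
    echo "builtin exit"   # $TMUX unset or empty: a plain shell, really exit
  else
    echo "tmux detach"    # inside tmux: detach, leaving the session running
  fi
}

TMUX="" fake_exit                               # prints: builtin exit
TMUX="/tmp/tmux-1000/default,123,0" fake_exit   # prints: tmux detach
```

&lt;p>tmux sets the &lt;code>TMUX&lt;/code> environment variable inside its sessions, which is what the real override keys off.&lt;/p>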
&lt;p>Now, when we are inside a tmux session and type &lt;code>exit&lt;/code>, the SSH session will close but the tmux session will remain, ready for us to reconnect later on.&lt;/p></description></item><item><title>Hetzner Installimage &amp; Ubuntu</title><link>https://blog.dmcc.io/journal/hetzner-installimage-ubuntu/</link><pubDate>Mon, 10 Aug 2020 18:32:11 +0200</pubDate><guid>https://blog.dmcc.io/journal/hetzner-installimage-ubuntu/#2025-12-28</guid><description>&lt;p>I finally had the chance to get a dedicated server at Hetzner for my own projects and use. No client requirements, no deadlines and no specification requirements. It seems like a simple thing, but just about every &amp;lsquo;proper&amp;rsquo; server I have ever worked on has been for a client or work project. Now that I had access to my own server, it was time to configure it exactly how I wanted.&lt;/p>
&lt;p>As usual, I decided to use Ubuntu - I&amp;rsquo;m familiar with it and I really don&amp;rsquo;t mind the bloat that could be avoided with the likes of CentOS or Arch. Seeing as the server is only for me, I figured I could do what I wanted.&lt;/p>
&lt;p>The first step on any Hetzner bare-metal server is to install the OS. To do this I enabled the &lt;a href="https://docs.hetzner.com/robot/dedicated-server/operating-systems/installimage/">Installimage&lt;/a> function on the server and rebooted. My server has three hard drives (more on this later), so I needed to ensure they were configured the way I wanted: two of the three are larger but slower, and I wanted to use the smaller, faster SSD as the boot drive. Hetzner&amp;rsquo;s Installimage system makes this fairly easy - you just edit a text config file.&lt;/p>
&lt;p>The example given by Hetzner is the following (this is just the first four lines of a significantly larger config file):&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#60a0b0;font-style:italic"># SSDSC2BB480G4&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic">#DRIVE1 /dev/sda&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># SSDSC2BB480G4&lt;/span>
DRIVE1 /dev/sdb
&lt;/code>&lt;/pre>&lt;/div>&lt;p>In reality, I have another drive and two of them are HDD not SSD. My config looked like this:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#60a0b0;font-style:italic"># SSDSC2BB480G4&lt;/span>
DRIVE1 /dev/sda
&lt;span style="color:#60a0b0;font-style:italic"># HDDSC220000G4&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic">#DRIVE2 /dev/sdb&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># HDDSC220000G4&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic">#DRIVE3 /dev/sdc&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The important thing to note is that I have commented out all drives apart from the smaller SSD drive. This way, when Ubuntu is installed, it will only detect one drive and use this as the boot drive. I can then add the other two drives later on.&lt;/p>
&lt;p>The final things to change in this configuration file are the &lt;code>SWRAID&lt;/code> option and the hostname. The software RAID option needs to be disabled (just comment out the line) and the hostname can be anything you wish.&lt;/p>
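&lt;p>For reference, the relevant lines of my config ended up looking roughly like this (a sketch from memory - &lt;code>yourhostname&lt;/code> is a placeholder, and I believe disabling software RAID can also be done with &lt;code>SWRAID 0&lt;/code>):&lt;/p>

```shell
# Software RAID disabled by commenting out the lines
#SWRAID 1
#SWRAIDLEVEL 1

HOSTNAME yourhostname
```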
&lt;p>Then we close the editor (just press F10) and it will start to process the configuration file and check for errors.&lt;/p>
&lt;p>&lt;img src="https://docs.hetzner.com/static/66131b099c9572b9ad526cc82f64c328/b1cde/Installimage_done.png" alt="Installimage processing">&lt;/p>
&lt;p>Reboot and the process is complete!&lt;/p>
&lt;p>Now, we have Ubuntu installed and the server is up and running with the SSD drive as the boot drive.&lt;/p>
&lt;p>The very first thing to do on the new server is to configure the additional hard drives. To do this, we first need to know which device name each drive has.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback"># ls /dev/sd/*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdc
&lt;/code>&lt;/pre>&lt;/div>&lt;p>We can see from the excerpt above that we have &lt;code>/dev/sda&lt;/code>, which has two partitions (&lt;code>sda1&lt;/code> and &lt;code>sda2&lt;/code>), and we have two additional drives with no partitions (&lt;code>sdb&lt;/code> and &lt;code>sdc&lt;/code>).&lt;/p>
&lt;p>In my server both of these additional drives are the same size and type, so it doesn&amp;rsquo;t matter to me which one I add first.&lt;/p>
&lt;p>The next step, now that we know the device names, is to format the drive(s).&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback"># fdisk /dev/sdb
...
Command (m for help):
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Usually we would confirm there are no existing partitions on the drive first, but I know this is a new drive (new to me, at least) and I don&amp;rsquo;t care if anything on it is lost - which it will be!&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">Command (m for help): p
&lt;/code>&lt;/pre>&lt;/div>&lt;p>We see an output of the partition table for this drive, but again, we don&amp;rsquo;t care. So, onwards to creating the new partition the size of the full disk:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
...
&lt;/code>&lt;/pre>&lt;/div>&lt;p>You will next be prompted to enter the &amp;lsquo;first sector&amp;rsquo; and the &amp;lsquo;last sector&amp;rsquo;; again, as this is a new drive, we can accept the defaults.&lt;/p>
&lt;p>Finally, we use the &lt;code>w&lt;/code> command to write the changes to the disk. We then repeat the steps on the second drive (&lt;code>/dev/sdc&lt;/code>).&lt;/p>
&lt;p>OK, now we need to create the file system on the newly created partition(s).&lt;/p>
&lt;p>For me, xfsprogs is the go-to tool for this; it&amp;rsquo;s so simple it&amp;rsquo;s practically impossible to mess up - perfect for a personal server I don&amp;rsquo;t want to spend my spare time fixing.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback"># mkfs.xfs /dev/sdb1
&lt;/code>&lt;/pre>&lt;/div>&lt;p>That&amp;rsquo;s it - we&amp;rsquo;ve just created an XFS filesystem on the newly partitioned drive. Repeat the same step for the other drive(s).&lt;/p>
&lt;p>So, let&amp;rsquo;s recap. We&amp;rsquo;ve installed Ubuntu, we&amp;rsquo;ve got the SSD drive as the boot partition and now we have two additional drives partitioned and formatted as XFS filesystems. So, next we need to actually mount the additional drives to the OS.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback"># mkdir /drives
# mkdir /drives/drive1
# mount /dev/sdb1 /drives/drive1
&lt;/code>&lt;/pre>&lt;/div>&lt;p>As you can see from above, I mounted my drive(s) to /drives/drive1 and then /drives/drive2 for the other drive. This layout makes sense to me; you can choose whatever directories you wish for yours.&lt;/p>
&lt;p>The drives are mounted, which we can confirm with the &lt;code>mount&lt;/code> command, but this is only for the current session. As soon as the server is rebooted these mounts will drop off. So, we need to make this permanent.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">sudo nano /etc/fstab
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Then we need to add the following two lines:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">/dev/sdb1 /drives/drive1 xfs defaults 0 0
/dev/sdc1 /drives/drive2 xfs defaults 0 0
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Now the drives will be mounted at boot so they&amp;rsquo;ll always be available.&lt;/p>
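&lt;p>Before relying on a reboot, it&amp;rsquo;s worth checking that the new entries parse cleanly; &lt;code>mount -a&lt;/code> attempts to mount everything in &lt;code>/etc/fstab&lt;/code>, so a typo shows up now rather than at boot time:&lt;/p>

```shell
# Attempt to mount every fstab entry; errors appear here instead of at boot.
sudo mount -a

# Confirm each drive is mounted where expected.
findmnt /drives/drive1
findmnt /drives/drive2
```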
&lt;p>Final note: if you add any more drives (so the total number of drives exceeds three), relying on device names for the mount process can become unreliable, as drives may be enumerated in a different order on each boot. Use the drive UUID instead; with UUIDs you can be certain the mounts will always resolve to the right drive and won&amp;rsquo;t shift around between boots.&lt;/p></description></item><item><title>Tmux 3.0a Configuration Not Loading</title><link>https://blog.dmcc.io/journal/tmux-3-configuration-not-loading/</link><pubDate>Mon, 10 Aug 2020 18:23:31 +0200</pubDate><guid>https://blog.dmcc.io/journal/tmux-3-configuration-not-loading/#2025-12-28</guid><description>&lt;p>The latest version of Ubuntu has recently been &lt;a href="https://ubuntu.com/blog/infographic-ubuntu-from-2004-to-20-04-lts">released&lt;/a> (20.04 LTS) and along with it comes the latest version of tmux.&lt;/p>
&lt;p>Tmux is now at release 3.0a and this version is pre-installed with Ubuntu 20.04 LTS. The problem is, when you want to import a tmux configuration file from tmux 2.9 or below, you get many errors.&lt;/p>
&lt;p>The fix for me was simple&amp;hellip;&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-fallback" data-lang="fallback">tmux kill-server
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Once I killed all old sessions and stopped the tmux server the new tmux version loaded the existing configuration file without any issue or errors.&lt;/p>
&lt;p>Seems simple but it took me a few minutes to work it out.&lt;/p></description></item><item><title>Booting Raspberry Pi 4 From USB</title><link>https://blog.dmcc.io/journal/booting-raspberry-pi-4-from-usb/</link><pubDate>Wed, 13 May 2020 15:47:26 +0100</pubDate><guid>https://blog.dmcc.io/journal/booting-raspberry-pi-4-from-usb/#2025-12-28</guid><description>&lt;p>I recently purchased another Raspberry Pi 4 but this time I wanted to use Ubuntu 20.04 and I wanted to use a USB 3 1TB external hard drive as the boot disk. The reason for using a large boot disk is mainly to avoid SD card corruptions in future as all read/writes (once booted) will be on the external USB drive not on the SD card.&lt;/p>
&lt;p>The first step is to install Ubuntu 20.04 onto an SD card. Connect the USB hard drive to one of the Raspberry Pi 4&amp;rsquo;s USB 3 ports and boot the Pi.&lt;/p>
&lt;p>On first login on Ubuntu 20.04 the username is &lt;code>ubuntu&lt;/code> and the password is &lt;code>ubuntu&lt;/code>. You will be prompted to change the password the first time you boot.&lt;/p>
&lt;p>SSH in to the Pi (after discovering its IP address via DHCP) and run the following command:&lt;/p>
&lt;p>&lt;code>sudo fdisk -l&lt;/code>&lt;/p>
&lt;p>You will be shown a list of all partitions. As this is a standard Pi install with an SD card and an additional USB device, my partitions are as follows:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">/dev/mmcblk0p2 &lt;span style="color:#666">=&lt;/span> SD Card
/dev/sda &lt;span style="color:#666">=&lt;/span> External USB Drive
&lt;/code>&lt;/pre>&lt;/div>&lt;p>We need to set the partition table structure on the external drive &lt;code>/dev/sda&lt;/code>.&lt;/p>
&lt;p>&lt;code>sudo fdisk /dev/sda&lt;/code>&lt;/p>
&lt;p>Type &lt;code>p&lt;/code> to see a list of partitions.&lt;/p>
&lt;p>Then type &lt;code>d&lt;/code> to delete the primary partition.&lt;/p>
&lt;p>Now we need to create a new partition. Type &lt;code>n&lt;/code> to create the partition, followed by &lt;code>p&lt;/code> to set the new partition as primary and then choose &lt;code>1&lt;/code> as the partition number.&lt;/p>
&lt;p>You can press Enter twice to accept the suggested partition start and end blocks.&lt;/p>
&lt;p>Finally, type &lt;code>w&lt;/code> to write the changes to the disk.&lt;/p>
&lt;p>Now that the USB drive has the correct partition structure, we need to format the new partition that we just created at &lt;code>/dev/sda1&lt;/code>.&lt;/p>
&lt;p>&lt;code>sudo mkfs.ext4 /dev/sda1&lt;/code>&lt;/p>
&lt;p>This may take a while, but no more than a couple of minutes if this is a fresh installation and the drive is smaller than 2TB (which it should be; otherwise we need a different partition setup).&lt;/p>
&lt;p>We need to make a directory where we will mount this USB drive to.&lt;/p>
&lt;p>&lt;code>sudo mkdir /media/externalbootUSB&lt;/code>&lt;/p>
&lt;p>The directory name can be anything you want. Next we need to mount the partition to this new directory.&lt;/p>
&lt;p>&lt;code>sudo mount /dev/sda1 /media/externalbootUSB&lt;/code>&lt;/p>
&lt;p>Now that the partition is mounted to our new directory, we can copy all of the SD card files to the USB drive partition.&lt;/p>
&lt;p>&lt;code>sudo rsync -avx / /media/externalbootUSB&lt;/code>&lt;/p>
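&lt;p>If you want to preview what will be copied first, rsync&amp;rsquo;s &lt;code>-n&lt;/code> (dry-run) flag lists the transfers without writing anything; the other flags are the same as above (archive mode, verbose, and &lt;code>-x&lt;/code> to stay on the one filesystem):&lt;/p>

```shell
# Dry run: -n prints the file list without copying; drop it for the real copy.
sudo rsync -avxn / /media/externalbootUSB
```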
&lt;p>This took me around 3 minutes from a 32GB SD card to a 1TB USB 3 external drive.&lt;/p>
&lt;p>Now we need to set the SD card to boot to the USB drive rather than itself.&lt;/p>
&lt;p>As far as I know, on everything other than Ubuntu 20.04 and Raspberry Pi 4 this should be done in &lt;code>/boot/cmdline.txt&lt;/code>. On Ubuntu 20.04 and a Raspberry Pi 4 I had to do the following:&lt;/p>
&lt;p>&lt;code>sudo nano /boot/firmware/cmdline.txt&lt;/code>&lt;/p>
&lt;p>Paste the following line (commenting out the existing line):&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">net.ifnames&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#40a070">0&lt;/span> dwc_otg.lpm_enable&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#40a070">0&lt;/span> &lt;span style="color:#bb60d5">console&lt;/span>&lt;span style="color:#666">=&lt;/span>serial0,115200 &lt;span style="color:#bb60d5">console&lt;/span>&lt;span style="color:#666">=&lt;/span>tty1 &lt;span style="color:#bb60d5">root&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#bb60d5">LABEL&lt;/span>&lt;span style="color:#666">=&lt;/span>writable &lt;span style="color:#bb60d5">rootfstype&lt;/span>&lt;span style="color:#666">=&lt;/span>ext4 &lt;span style="color:#bb60d5">elevator&lt;/span>&lt;span style="color:#666">=&lt;/span>deadline rootwait fixrtc &lt;span style="color:#bb60d5">root&lt;/span>&lt;span style="color:#666">=&lt;/span>/dev/sda1 &lt;span style="color:#bb60d5">rootfstype&lt;/span>&lt;span style="color:#666">=&lt;/span>ext4 rootwait
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Remember to change the &lt;code>root=/dev/sda1&lt;/code> to whatever your partition is.&lt;/p>
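&lt;p>Note that the pasted line ends up carrying two &lt;code>root=&lt;/code> parameters; as far as I know the later one takes effect. A tidier approach is to substitute the parameter in place with &lt;code>sed&lt;/code> - sketched here on a sample string rather than the real file:&lt;/p>

```shell
# Demonstrate the edit on a sample of the stock kernel command line; point
# sed at /boot/firmware/cmdline.txt (after backing it up) to do it for real.
echo 'console=tty1 root=LABEL=writable rootfstype=ext4 rootwait' > /tmp/cmdline.txt
sed -i 's|root=LABEL=writable|root=/dev/sda1|' /tmp/cmdline.txt
cat /tmp/cmdline.txt
# -> console=tty1 root=/dev/sda1 rootfstype=ext4 rootwait
```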
&lt;p>&lt;code>sudo reboot now&lt;/code>&lt;/p>
&lt;p>Once the Raspberry Pi 4 reboots, you should be able to log in, run &lt;code>df -h&lt;/code>, and the USB drive should show as being mounted on &lt;code>/&lt;/code> with a capacity of however many MB/GB/TB the external drive is.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">ubuntu@ubuntu:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G &lt;span style="color:#40a070">0&lt;/span> 1.9G 0% /dev
tmpfs 380M 3.9M 376M 2% /run
/dev/sda1 916G 2.2G 868G 1% /
tmpfs 1.9G &lt;span style="color:#40a070">0&lt;/span> 1.9G 0% /dev/shm
tmpfs 5.0M &lt;span style="color:#40a070">0&lt;/span> 5.0M 0% /run/lock
tmpfs 1.9G &lt;span style="color:#40a070">0&lt;/span> 1.9G 0% /sys/fs/cgroup
/dev/loop0 49M 49M &lt;span style="color:#40a070">0&lt;/span> 100% /snap/core18/1708
/dev/loop1 62M 62M &lt;span style="color:#40a070">0&lt;/span> 100% /snap/lxd/14808
/dev/loop2 24M 24M &lt;span style="color:#40a070">0&lt;/span> 100% /snap/snapd/7267
/dev/mmcblk0p1 253M 61M 192M 25% /boot/firmware
tmpfs 380M &lt;span style="color:#40a070">0&lt;/span> 380M 0% /run/user/1000
&lt;/code>&lt;/pre>&lt;/div>&lt;p>In my case, my external drive has 868G remaining and is only 1% used. The important value here is the Mounted on point. It must be &lt;code>/&lt;/code>.&lt;/p>
&lt;p>You can now customise the installation as much as you want as you&amp;rsquo;re booting from the external USB drive.&lt;/p></description></item><item><title>Starting Tmux Automagically When Connecting to SSH</title><link>https://blog.dmcc.io/journal/starting-tmux-automagically-when-connecting-to-ssh/</link><pubDate>Mon, 11 May 2020 21:26:55 +0100</pubDate><guid>https://blog.dmcc.io/journal/starting-tmux-automagically-when-connecting-to-ssh/#2025-12-28</guid><description>&lt;p>I recently noted in a &lt;a href="https://dmcc.io/posts/tmux-create-or-join/">blog post&lt;/a> that I use a short snippet to either connect to an existing tmux session when I start a new SSH connection, or create a new tmux session if an existing one doesn&amp;rsquo;t exist.&lt;/p>
&lt;p>The problem is, I have been using Termius more often than not recently and using the snippet feature as mentioned in the &lt;a href="https://dmcc.io/posts/tmux-create-or-join/">previous blog post&lt;/a>.&lt;/p>
&lt;p>I needed to make this work within a normal terminal today and thought I would add the snippet as a blog post so I don&amp;rsquo;t forget for next time.&lt;/p>
&lt;p>In &lt;code>~/.ssh/config&lt;/code> we just need to add the following:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">Host servername
HostName servername.domain.tld
User username
Port &lt;span style="color:#40a070">22&lt;/span>
RequestTTY yes
RemoteCommand tmux att -t session -d &lt;span style="color:#666">||&lt;/span> tmux new-session -s session
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Then, just run &lt;code>ssh servername&lt;/code> and the SSH connection will be created and will either attach to an existing tmux session (detaching any other attached clients) or create a new one.&lt;/p></description></item><item><title>Flash Teckin Smart Plug for Home Assistant</title><link>https://blog.dmcc.io/journal/flash-teckin-smart-plug-for-home-assistant/</link><pubDate>Fri, 08 May 2020 17:35:37 +0100</pubDate><guid>https://blog.dmcc.io/journal/flash-teckin-smart-plug-for-home-assistant/#2025-12-28</guid><description>&lt;p>I have been using Teckin smart plugs around the house for quite some time now. They&amp;rsquo;re really handy as they integrate with Google Home via the Smart Life app ecosystem. This means that each night we can tell Google to &amp;ldquo;turn everything off&amp;rdquo; and our lamps all switch off at the wall socket.&lt;/p>
&lt;p>What I have wanted to do for some time, after seeing my friend do it with great success, was to flash the Teckin firmware to esphome so that I can control the sockets via Home Assistant and also get real-time values for the socket&amp;rsquo;s wattage, amperage and voltage. This would allow me to monitor the wattage of things like the washing machine and then send myself a notification when it drops for a specific period of time. So, shortly after the washing machine has finished, I&amp;rsquo;ll get a notification that it&amp;rsquo;s done. Simple.&lt;/p>
&lt;p>The first step is to get the &lt;a href="https://github.com/ct-Open-Source/tuya-convert">tuya-convert&lt;/a> files.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#60a0b0;font-style:italic"># git clone https://github.com/ct-Open-Source/tuya-convert&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># cd tuya-convert&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># ./install_prereq.sh&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Run &lt;code>./start_flash.sh&lt;/code>. It&amp;rsquo;s important to run this command and get the wifi network called &lt;code>vtrust-flash&lt;/code> working on a phone or second computer before powering on the smartplug - as the plug will look for this SSID on boot. If you notice the plug is broadcasting an SSID of &lt;code>vtrust-recovery&lt;/code> then this means the smartplug did not detect our flash attempt.&lt;/p>
&lt;p>When the smartplug successfully connects to the flash SSID it will then transfer the flashing firmware and prompt for which version you want to load. The options are &lt;code>tasmota-wifiman.bin&lt;/code> and &lt;code>espurna-base.bin&lt;/code>. I have only used &lt;code>espurna-base.bin&lt;/code> and it has worked perfectly.&lt;/p>
&lt;p>Once the flash process is complete, you can end the flashing script by answering N when asked whether to flash another device.&lt;/p>
&lt;p>Now we can connect to the smartplug on the new &lt;code>espurna-base.bin&lt;/code> firmware by connecting to the SSID &lt;code>ESPURNA-XXXX&lt;/code> with the password &lt;code>fibonacci&lt;/code>. Once connected we need to go to the web interface which is at &lt;code>192.168.4.1&lt;/code> and set a password. This password can be anything you want because it will be reset again shortly.&lt;/p>
&lt;p>Now that the smart plug is running Espurna and we know we can connect to the web interface, it&amp;rsquo;s time to create our own esphome firmware to upload.&lt;/p>
&lt;p>We need &lt;code>esphome&lt;/code>, which is available via pip. I had to use &lt;code>sudo pip3&lt;/code> to get mine working.&lt;/p>
&lt;p>Once esphome is installed (running esphome from your command prompt should display the available options, if it&amp;rsquo;s installed correctly) we need to create a yaml file with the required options.&lt;/p>
&lt;p>Here&amp;rsquo;s the one I used, with thanks to &lt;a href="https://github.com/samabsalom">Sam&lt;/a> for giving me this file which worked perfectly first time.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="color:#007020;font-weight:bold">esphome&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>${plug_name}&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">platform&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>ESP8266&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">board&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>esp8285&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">wifi&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">ssid&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#39;YOUR HOME SSID HERE&amp;#39;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">password&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#39;YOUR HOME WIFI PASSWORD HERE&amp;#39;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">substitutions&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">plug_name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>teckin01&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Higher value gives lower watt readout&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">current_res&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;0.00221&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Lower value gives lower voltage readout&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">voltage_div&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;871&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Enable logging&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">logger&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Enable Home Assistant API&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">api&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">password&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#39;RANDOM PASSWORD HERE&amp;#39;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">ota&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">password&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#39;RANDOM PASSWORD HERE&amp;#39;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">time&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">platform&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>homeassistant&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">id&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>homeassistant_time&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">binary_sensor&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">platform&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>gpio&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">pin&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">number&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>GPIO13&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">inverted&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>True&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_button&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">on_press&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">switch.toggle&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>relay&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># this interects with switch.relay on the device not via hass&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">switch&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>- &lt;span style="color:#007020;font-weight:bold">platform&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>gpio&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_Relay&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">pin&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>GPIO15&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">restore_mode&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>ALWAYS_ON&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">id&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>relay&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic">#this line gives the entity an id so the teckin plug can do some onboard stuff - see button &lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>- &lt;span style="color:#007020;font-weight:bold">platform&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>gpio&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_LED_Blue&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">pin&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>GPIO2&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">inverted&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>True&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">restore_mode&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>ALWAYS_OFF&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#007020;font-weight:bold">sensor&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">platform&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>hlw8012&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">sel_pin&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">number&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>GPIO12&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">inverted&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>True&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">cf_pin&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>GPIO05&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">cf1_pin&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>GPIO014&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># # Higher value gives lower watt readout&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># current_resistor: ${current_res}&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># # Lower value gives lower voltage readout&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># voltage_divider: ${voltage_div}&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">current&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_Amperage&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">unit_of_measurement&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>A&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">accuracy_decimals&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">3&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">filters&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Map from sensor -&amp;gt; measured value&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">calibrate_linear&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">0.0&lt;/span>&lt;span style="color:#bbb"> &lt;/span>-&amp;gt;&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">0.008&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">10.33324&lt;/span>&lt;span style="color:#bbb"> &lt;/span>-&amp;gt;&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">8.212&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">lambda&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>if&lt;span style="color:#bbb"> &lt;/span>(x&lt;span style="color:#bbb"> &lt;/span>&amp;lt;&lt;span style="color:#bbb"> &lt;/span>(&lt;span style="color:#40a070">0.01&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">0.008&lt;/span>))&lt;span style="color:#bbb"> &lt;/span>return&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">0&lt;/span>;&lt;span style="color:#bbb"> &lt;/span>else&lt;span style="color:#bbb"> &lt;/span>return&lt;span style="color:#bbb"> &lt;/span>(x&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">0.013&lt;/span>);&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">voltage&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_Voltage&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">unit_of_measurement&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>V&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">filters&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Map from sensor -&amp;gt; measured value&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">calibrate_linear&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">0.0&lt;/span>&lt;span style="color:#bbb"> &lt;/span>-&amp;gt;&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">0.0&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">640.98718&lt;/span>&lt;span style="color:#bbb"> &lt;/span>-&amp;gt;&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">241.0&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">power&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_Wattage&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">unit_of_measurement&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>W&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">id&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_Wattage&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">filters&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Map from sensor -&amp;gt; measured value&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">calibrate_linear&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">0.0&lt;/span>&lt;span style="color:#bbb"> &lt;/span>-&amp;gt;&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">0.6&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">11640.70117&lt;/span>&lt;span style="color:#bbb"> &lt;/span>-&amp;gt;&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">1957&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Make everything below 2W appear as just 0W.&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Furthermore it corrects 1.14W for the power usage of the plug.&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">lambda&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>if&lt;span style="color:#bbb"> &lt;/span>(x&lt;span style="color:#bbb"> &lt;/span>&amp;lt;&lt;span style="color:#bbb"> &lt;/span>(&lt;span style="color:#40a070">2&lt;/span>&lt;span style="color:#bbb"> &lt;/span>+&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">0.6&lt;/span>))&lt;span style="color:#bbb"> &lt;/span>return&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">0&lt;/span>;&lt;span style="color:#bbb"> &lt;/span>else&lt;span style="color:#bbb"> &lt;/span>return&lt;span style="color:#bbb"> &lt;/span>(x&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#40a070">1.14&lt;/span>);&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">change_mode_every&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">3&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">update_interval&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>5s&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">platform&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>total_daily_energy&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_Total Daily Energy&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">power_id&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#4070a0">&amp;#34;${plug_name}_Wattage&amp;#34;&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">filters&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Multiplication factor from W to kW is 0.001&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">multiply&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#40a070">0.001&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">unit_of_measurement&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>kWh&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#60a0b0;font-style:italic"># Extra sensor to keep track of plug uptime&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#007020;font-weight:bold">platform&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>uptime&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#007020;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>${plug_name}_Uptime&lt;span style="color:#bbb"> &lt;/span>Sensor&lt;span style="color:#bbb">
&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Save the file as something sensible - I used teckin01.yaml and then run:&lt;/p>
&lt;p>&lt;code>sudo esphome teckin01.yaml compile&lt;/code>&lt;/p>
&lt;p>This will create the firmware and store it in a directory that is not even remotely obvious&amp;hellip;&lt;/p>
&lt;p>&lt;code>/home/username/projects/tuya-convert/teckin01/.pioenvs/teckin01/firmware.bin&lt;/code>&lt;/p>
&lt;p>Now we just go to the web interface of the plug (192.168.4.1), head to Admin, and at the bottom of the page upload this firmware image - and we&amp;rsquo;re done.&lt;/p>
&lt;p>The smart plug will reboot and will now be running esphome. Provided you used a unique name, the SSID and password are correct and Home Assistant is running, the new smart plug will pop up as a new device in Home Assistant.&lt;/p>
&lt;p>Magic.&lt;/p>
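&lt;p>If the plug doesn&amp;rsquo;t appear, the quickest way to see what it is doing is to stream its logs. Using the same old-style esphome CLI as the commands above (newer releases put the subcommand first), that is:&lt;/p>
&lt;p>&lt;code>sudo esphome teckin01.yaml logs&lt;/code>&lt;/p>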
&lt;p>Note: I managed to get one of the Teckin plugs into an unresponsive state while flashing a number of them at once. To rectify this I ran the following command:&lt;/p>
&lt;p>&lt;code>sudo esphome teckin01.yaml upload&lt;/code>&lt;/p>
&lt;p>This command works because the smart plug broadcasts an mDNS name of teckin01.local on the LAN, which the esphome tool recognises - even if the plug doesn&amp;rsquo;t have a usable IP address. Worth remembering.&lt;/p></description></item><item><title>Customise Remoteapp Work Resources Name</title><link>https://blog.dmcc.io/journal/customise-remoteapp-work-resources-name/</link><pubDate>Tue, 14 Apr 2020 11:56:53 +0100</pubDate><guid>https://blog.dmcc.io/journal/customise-remoteapp-work-resources-name/#2025-12-28</guid><description>&lt;p>When using RemoteApp on Microsoft Server 2016 I noticed that whenever you add the RemoteApp workspace feed to iOS or iPadOS devices, the resources are listed under &amp;lsquo;Work Resources&amp;rsquo;.&lt;/p>
&lt;p>Although this works perfectly, it becomes a problem when you connect to different RemoteApp servers and they are all titled &amp;lsquo;Work Resources&amp;rsquo;.&lt;/p>
&lt;p>The solution is simple and takes about one minute, assuming you don&amp;rsquo;t use an RDP gateway - if you do, it takes a minute or so longer, that&amp;rsquo;s all.&lt;/p>
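&lt;p>If you do sit behind a gateway with a Connection Broker, the cmdlet used below also takes a -ConnectionBroker parameter so you can point it at the right server. A sketch - the broker FQDN here is a placeholder:&lt;/p>
&lt;p>&lt;code>Set-RDWorkspace -Name &amp;#34;DMCC Cloud&amp;#34; -ConnectionBroker rdcb01.example.local&lt;/code>&lt;/p>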
&lt;p>First open Powershell as Administrator and enter:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-powershell" data-lang="powershell">&lt;span style="color:#007020">Import-Module&lt;/span> RemoteDesktop
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Then at the next prompt enter the following:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-powershell" data-lang="powershell"> &lt;span style="color:#007020">Set-RDWorkspace&lt;/span> -Name &lt;span style="color:#4070a0">&amp;#34;DMCC Cloud&amp;#34;&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>Obviously you can change DMCC Cloud to anything you want.&lt;/p></description></item><item><title>Mikrotik Failover Netwatch</title><link>https://blog.dmcc.io/journal/mikrotik-failover-netwatch/</link><pubDate>Mon, 13 Apr 2020 16:49:26 +0100</pubDate><guid>https://blog.dmcc.io/journal/mikrotik-failover-netwatch/#2025-12-28</guid><description>&lt;p>Over the years I have used many different methods of failover for primary to secondary, sometimes tertiary, WAN links on Mikrotik devices. Along with manual routing table entries, I have always relied on scripts of some sort that are triggered when one of the WAN links goes down. I have had varying success with this approach. One of the biggest problems I have had when switching from a primary WAN to a secondary WAN is that the registration of VoIP phones seems to hang. The PBX keeps the registration open for the old WAN IP and the phones are now establishing connections from the new WAN IP.&lt;/p>
&lt;p>The solution I have recently discovered on Mikrotik devices is Netwatch. You can find Netwatch within the Mikrotik Tools section. I believe Netwatch is available on all licence levels of Mikrotik RouterOS.&lt;/p>
&lt;p>The principle of Netwatch is very simple. It pings an IP address of your choice on a user-defined interval. When that ping fails (again, the failure threshold is user-defined) then it runs any command within the Netwatch DOWN section. Likewise, when the link comes back up (the ping succeeds past the user-defined threshold) then the command in Netwatch UP is run.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">/ip route &lt;span style="color:#007020">set&lt;/span> &lt;span style="color:#666">[&lt;/span>find where &lt;span style="color:#bb60d5">comment&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;WAN1&amp;#34;&lt;/span>&lt;span style="color:#666">]&lt;/span> &lt;span style="color:#bb60d5">distance&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#40a070">5&lt;/span>
/ip firewall connection remove &lt;span style="color:#666">[&lt;/span>find dst-address&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;123.123.248.8:5060&amp;#34;&lt;/span>&lt;span style="color:#666">]&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The above commands are in the UP command section.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">/ip route &lt;span style="color:#007020">set&lt;/span> &lt;span style="color:#666">[&lt;/span>find where &lt;span style="color:#bb60d5">comment&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;WAN1&amp;#34;&lt;/span>&lt;span style="color:#666">]&lt;/span> &lt;span style="color:#bb60d5">distance&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#40a070">15&lt;/span>
/ip firewall connection remove &lt;span style="color:#666">[&lt;/span>find dst-address&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;123.123.248.8:5060&amp;#34;&lt;/span>&lt;span style="color:#666">]&lt;/span>
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The above commands are in the DOWN command section.&lt;/p>
&lt;p>As you can see, when the link is up the WAN1 route is set to a distance of 5 which makes it the primary route. WAN2 should have a route distance of 10 in this example. When the link is DOWN then the WAN1 route distance is set to 15, at which point WAN2 with a distance of 10 becomes the primary route.&lt;/p>
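&lt;p>For reference, the two default routes those commands act on would be created along these lines - the gateway addresses here are placeholders:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">/ip route add dst-address=0.0.0.0/0 gateway=192.168.1.1 distance=5 comment=&amp;#34;WAN1&amp;#34;
/ip route add dst-address=0.0.0.0/0 gateway=192.168.2.1 distance=10 comment=&amp;#34;WAN2&amp;#34;
&lt;/code>&lt;/pre>&lt;/div>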
&lt;p>The second command in each of the UP and DOWN list of commands is the magic sauce. This is the command that clears any existing connections to the PBX server (in this case the example IP of 123.123.248.8:5060) which forces the SIP handsets to re-register. Interestingly, in most cases the SIP handsets don&amp;rsquo;t drop existing calls but they do re-register with the PBX which is exactly what I need.&lt;/p>
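&lt;p>The Netwatch entry itself can also be created from the terminal rather than Winbox. A sketch, with only the route command shown in each script field for readability - the connection remove command from above would be appended in the same way:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">/tool netwatch add host=1.1.1.1 interval=10s timeout=999ms up-script=&amp;#34;/ip route set [find where comment=\&amp;#34;WAN1\&amp;#34;] distance=5&amp;#34; down-script=&amp;#34;/ip route set [find where comment=\&amp;#34;WAN1\&amp;#34;] distance=15&amp;#34;
&lt;/code>&lt;/pre>&lt;/div>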
&lt;p>I set the Netwatch host command to 1.1.1.1, an interval of 10 seconds and a timeout of 999ms. This means every 10 seconds the Netwatch feature will ping 1.1.1.1 and if it takes more than 999ms to respond then it will run the DOWN commands. If the ping response is ever less than 999ms then it will run the UP commands. With time I may tweak this as realistically a ping response time of greater than 100ms is likely evidence of a disrupted link.&lt;/p></description></item><item><title>Tmux Create or Join</title><link>https://blog.dmcc.io/journal/tmux-create-or-join/</link><pubDate>Mon, 13 Apr 2020 16:36:34 +0100</pubDate><guid>https://blog.dmcc.io/journal/tmux-create-or-join/#2025-12-28</guid><description>&lt;p>I have been using tmux as an alternative to screen for a couple of years so far. It works very well and I very quickly got used to the shortcuts for it - especially after changing the shortcut key to CTRL + A.&lt;/p>
&lt;p>One thing that I wanted to do was ensure that I was always in a tmux session. There&amp;rsquo;s little worse than running a command that is going to take some time, only to realise that you&amp;rsquo;re not in a tmux session and you can&amp;rsquo;t quit the SSH session until the command completes.&lt;/p>
&lt;p>With the help of Termius (SSH client), I created a snippet to be run whenever I connect to certain SSH hosts.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">tmux att -t session -d &lt;span style="color:#666">||&lt;/span> tmux new-session -s session
&lt;/code>&lt;/pre>&lt;/div>&lt;p>The snippet is simple - it attaches to an existing tmux session if it exists, if not, it will make a new one.&lt;/p>
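&lt;p>If your SSH client can&amp;rsquo;t run snippets on connect, a rough equivalent is to put the same logic in &lt;code>.bashrc&lt;/code> on the server, guarded so it only fires on SSH logins and never nests inside an existing tmux session - a sketch:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash"># only when connected over SSH and not already inside tmux
if [ -z &amp;#34;$TMUX&amp;#34; ] &amp;amp;&amp;amp; [ -n &amp;#34;$SSH_CONNECTION&amp;#34; ]; then
  tmux att -t session -d || tmux new-session -s session
fi
&lt;/code>&lt;/pre>&lt;/div>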
&lt;p>Since putting this in place a few months ago (maybe a year ago, time flies) I have never worked outside of a tmux session. I&amp;rsquo;ve never had to cancel a job to then re-run it within a tmux session and I have always been able to jump right back in to where I left off.&lt;/p>
&lt;p>If you are interested, my tmux and other dotfiles are on my &lt;a href="https://github.com/dannymcc/dotfiles" title="GitHub dannymcc">Github&lt;/a> profile.&lt;/p></description></item><item><title>PiVPN DNS Name</title><link>https://blog.dmcc.io/journal/pivpn-dns-name/</link><pubDate>Sun, 12 Apr 2020 00:11:15 +0100</pubDate><guid>https://blog.dmcc.io/journal/pivpn-dns-name/#2025-12-28</guid><description>&lt;p>I have used &lt;a href="https://www.pivpn.io/" title="PiVPN">PiVPN&lt;/a> for many projects over the past few years. It makes setting up a VPN gateway really easy and now supports Wireguard as well as OpenVPN. I have started using it more often since they added the ability for it to be installed on a vanilla Ubuntu installation, it no longer needs to be run on a Raspberry Pi.&lt;/p>
&lt;p>One of the problems I have come across a few times with PiVPN is when the public IP address or DNS name of the VPN gateway changes. The VPN configuration files still have the original IP or DNS name. Originally I just manually updated each configuration file to the new address but this gets frustrating very quickly.&lt;/p>
&lt;p>I found the solution in the Github Issues. There&amp;rsquo;s a default config header file that you can edit and this is included in the top of all new configuration files. Simply edit this file to match the new IP or DNS name and you&amp;rsquo;re good to go.&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">/etc/openvpn/easy-rsa/pki/Default.txt
&lt;/code>&lt;/pre>&lt;/div>&lt;p>This is also the place to add additional parameters to the configuration files such as route-nopull to prevent the VPN server from forcing its routes on the remote user. This makes split tunnel configurations easier.&lt;/p></description></item><item><title>Using SIP on Cisco 7906G Handsets</title><link>https://blog.dmcc.io/journal/cisco-sip-conversion/</link><pubDate>Sun, 05 Apr 2020 19:47:45 +0100</pubDate><guid>https://blog.dmcc.io/journal/cisco-sip-conversion/#2025-12-28</guid><description>&lt;p>After a lot of trial and error getting a test Cisco 7906G handset working with an Asterisk PBX I thought it would be useful to make a note of the configuration options I used and the files that finally worked. They are all hosted on &lt;a href="https://github.com/dannymcc/Cisco-7906G-SIP">GitHub&lt;/a>.&lt;/p>
&lt;h3 id="tftpdhcp-configuration">TFTP/DHCP Configuration&lt;/h3>
&lt;p>During boot the handsets will discover the TFTP server via DHCP option 66 and/or DHCP option 150. For this example I set option 66 of our VoIP VLAN to the local IP address of our TFTP server. All of the configuration files, firmware files and other customisation files reside in the root directory of the TFTP server, in this case it was &lt;code>\TFTPBOOT&lt;/code>.&lt;/p>
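&lt;p>As an illustration, on an ISC dhcpd server the same thing would look roughly like this - the addresses are placeholders:&lt;/p>
&lt;pre>&lt;code>subnet 192.168.20.0 netmask 255.255.255.0 {
  range 192.168.20.100 192.168.20.200;
  option tftp-server-name &amp;#34;192.168.20.5&amp;#34;;  # DHCP option 66
  next-server 192.168.20.5;                 # some handsets use siaddr instead
}
&lt;/code>&lt;/pre>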
&lt;h3 id="base-configuration-files">Base Configuration Files&lt;/h3>
&lt;p>Each handset will require an SEP configuration file. The filename must be &lt;code>SEP000000000000.cnf.xml&lt;/code> where the &lt;code>000000000000&lt;/code> is the MAC address of the Cisco handset.&lt;/p>
&lt;p>In the example SEP configuration file, the following values have been set:&lt;/p>
&lt;pre>&lt;code>sip.provider.com &amp;lt;!-- This is the FQDN of the PBX, it can also be an IP address --&amp;gt;
123.123.123.123 &amp;lt;!-- This is the public IP address of the network the handset resides on, this is for NAT and may not be needed --&amp;gt;
222 &amp;lt;!-- This is the numerical extension number the handset will have, this must exist on the remote PBX first --&amp;gt;
pbx-username &amp;lt;!-- This is the register username for the extension --&amp;gt;
Pa$$w0rd &amp;lt;!-- This is the register password for the extension --&amp;gt;
*55 &amp;lt;!-- This is the direct dial for the PBX's voicemail --&amp;gt;
&lt;/code>&lt;/pre>
&lt;p>All other settings can be ignored for the purpose of the initial configuration. You must change every occurrence of the above settings throughout the configuration file.&lt;/p>
&lt;h3 id="firmware-versions-and-upgrading">Firmware Versions and Upgrading&lt;/h3>
&lt;p>The firmware version that has been tested for these configuration files is &lt;code>SIP11.9-4-2SR1-1S&lt;/code>. You can download the firmware files directly from Cisco - you will need to register for a free account. At the time of writing the link for the firmware files is &lt;a href="https://software.cisco.com/download/release.html?mdfid=280607214&amp;amp;softwareid=282074288&amp;amp;os=&amp;amp;release=9.4(2)SR3&amp;amp;relind=AVAILABLE&amp;amp;rellifecycle=&amp;amp;reltype=latest&amp;amp;i=!pp">here&lt;/a>, however, I have included the firmware files in this repository for reference.&lt;/p>
&lt;h3 id="dialplan">Dialplan&lt;/h3>
&lt;p>The included &lt;code>dialplan.xml&lt;/code> gives some working examples. The dialplan file tells the handset how long to pause before dialling a number once it has been entered.&lt;/p>
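&lt;p>For a flavour of the format - this is an illustrative sketch, not a copy of the repo file - each TEMPLATE line pairs a match pattern with a timeout in seconds:&lt;/p>
&lt;pre>&lt;code>&amp;lt;DIALTEMPLATE&amp;gt;
  &amp;lt;TEMPLATE MATCH=&amp;#34;999&amp;#34; Timeout=&amp;#34;0&amp;#34;/&amp;gt;  &amp;lt;!-- dial immediately --&amp;gt;
  &amp;lt;TEMPLATE MATCH=&amp;#34;*&amp;#34; Timeout=&amp;#34;5&amp;#34;/&amp;gt;    &amp;lt;!-- anything else: wait 5s after the last digit --&amp;gt;
&amp;lt;/DIALTEMPLATE&amp;gt;
&lt;/code>&lt;/pre>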
&lt;h3 id="ringtones">Ringtones&lt;/h3>
&lt;p>I have included a &lt;code>ringlist.xml&lt;/code> file as an example of how to add new ringtones to the handsets. If you monitor the TFTP server logs when navigating the handset menu and requesting a new background or ringtone, you will see which files the handset requests. This is very useful when setting up the TFTP file structure.&lt;/p>
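&lt;p>For reference, &lt;code>ringlist.xml&lt;/code> follows the CiscoIPPhoneRingList format - a single-entry sketch, with the filename matching a .raw file on the TFTP server:&lt;/p>
&lt;pre>&lt;code>&amp;lt;CiscoIPPhoneRingList&amp;gt;
  &amp;lt;Ring&amp;gt;
    &amp;lt;DisplayName&amp;gt;Classic&amp;lt;/DisplayName&amp;gt;
    &amp;lt;FileName&amp;gt;Classic1.raw&amp;lt;/FileName&amp;gt;
  &amp;lt;/Ring&amp;gt;
&amp;lt;/CiscoIPPhoneRingList&amp;gt;
&lt;/code>&lt;/pre>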
&lt;p>When including ringtone .raw files, it&amp;rsquo;s simplest to include them in the root directory of the TFTP server along with the firmware and configuration files.&lt;/p></description></item><item><title>Converting and Moving a file using a simple bash script.</title><link>https://blog.dmcc.io/journal/bash-script-to-moveandconvert/</link><pubDate>Sun, 05 Apr 2020 19:27:45 +0100</pubDate><guid>https://blog.dmcc.io/journal/bash-script-to-moveandconvert/#2025-12-28</guid><description>&lt;p>I recently needed to receive .txt files from a laboratory analyser, convert them to PDF and then transfer them to a local SMB share - but - the analyser results text file has to be over 30 minutes old before the whole process begins.
Once a file has been converted, it needs to be deleted from the analyser server so that it doesn&amp;rsquo;t fill up the local hard drive.
The analyser server in this instance was a Raspberry Pi 3 B+ with a LAN connection.
The reason for the 30 minute delay is due to the analyser exporting the results of a sample line-by-line and a full sample analysis can take up to 30 minutes. So if we transferred the file in less than 30 minutes then it may not be the complete results.&lt;/p>
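&lt;p>The script below only makes sense run on a schedule, so it keeps sweeping the directory as new results land. A cron entry along these lines works - the script path here is a placeholder:&lt;/p>
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash"># every 5 minutes, convert and ship any results older than 30 minutes
*/5 * * * * /home/pi/convert-results.sh
&lt;/code>&lt;/pre>&lt;/div>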
&lt;div class="highlight">&lt;pre style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4">&lt;code class="language-bash" data-lang="bash">&lt;span style="color:#007020">#!/bin/bash
&lt;/span>&lt;span style="color:#007020">&lt;/span>&lt;span style="color:#60a0b0;font-style:italic">## Danny McClelland 2020&lt;/span>
&lt;span style="color:#007020">shopt&lt;/span> -s nullglob
&lt;span style="color:#007020;font-weight:bold">for&lt;/span> f in /home/pi/results/eclipse/*.txt; &lt;span style="color:#007020;font-weight:bold">do&lt;/span>
&lt;span style="color:#bb60d5">FILENAME&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#bb60d5">$f&lt;/span>
&lt;span style="color:#bb60d5">PDFNAME&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#007020;font-weight:bold">$(&lt;/span>basename &lt;span style="color:#bb60d5">$f&lt;/span> .txt&lt;span style="color:#007020;font-weight:bold">)&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># If file is over 30 minutes old then create PDF and remove TXT&lt;/span>
&lt;span style="color:#007020;font-weight:bold">if&lt;/span> &lt;span style="color:#007020">test&lt;/span> &lt;span style="color:#4070a0">`&lt;/span>find &lt;span style="color:#bb60d5">$FILENAME&lt;/span> -mmin +30&lt;span style="color:#4070a0">`&lt;/span>
&lt;span style="color:#007020;font-weight:bold">then&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Create PDF from TXT&lt;/span>
/usr/bin/enscript &lt;span style="color:#bb60d5">$f&lt;/span> -B -o - | /usr/bin/ps2pdf - &lt;span style="color:#bb60d5">$PDFNAME&lt;/span>.pdf
&lt;span style="color:#60a0b0;font-style:italic"># Remove original TXT file&lt;/span>
rm &lt;span style="color:#bb60d5">$f&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Copy file to SMB share&lt;/span>
/usr/bin/smbclient //192.168.50.100/virtualdrive -U DOMAIN/username%password -D LAB -c &lt;span style="color:#4070a0">&amp;#34;put &lt;/span>&lt;span style="color:#bb60d5">$PDFNAME&lt;/span>&lt;span style="color:#4070a0">.pdf&amp;#34;&lt;/span>
&lt;span style="color:#60a0b0;font-style:italic"># Remove PDF file from local directory&lt;/span>
rm &lt;span style="color:#bb60d5">$PDFNAME&lt;/span>
&lt;span style="color:#007020;font-weight:bold">else&lt;/span>
&lt;span style="color:#007020">echo&lt;/span> &lt;span style="color:#bb60d5">$f&lt;/span> too new - skipping...
&lt;span style="color:#007020;font-weight:bold">fi&lt;/span>
&lt;span style="color:#007020;font-weight:bold">done&lt;/span>
&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>First Post</title><link>https://blog.dmcc.io/journal/first-post/</link><pubDate>Sun, 05 Apr 2020 13:59:45 +0100</pubDate><guid>https://blog.dmcc.io/journal/first-post/#2025-12-28</guid><description>&lt;p>&lt;img src="https://images.unsplash.com/photo-1462795532207-33cabf8c8175?ixlib=rb-1.2.1&amp;amp;q=80&amp;amp;fm=jpg&amp;amp;crop=entropy&amp;amp;cs=tinysrgb&amp;amp;dl=ian-schneider-jbroe3pOt8M-unsplash.jpg" alt="Hello beautiful">&lt;/p>
&lt;p>This is the first post of this new blog. I have made many blogs over the years and not one of them has lasted more than a few weeks, or in some cases, days.
My aim is to use this as a unified location for everything that I want to remember in the future. Let&amp;rsquo;s see how that goes&amp;hellip;
It&amp;rsquo;s entirely possible that this blog will not be updated very often, but I hope it is. To help me feel less pressure in keeping the blog up to date, I won&amp;rsquo;t be advertising or publishing the blog address anywhere.&lt;/p>
&lt;h2 id="technical">Technical&lt;/h2>
&lt;p>For the first time, I am using Hugo to write the blog and Netlify to host it. So far, it seems okay - time will tell. Netlify&amp;rsquo;s free package includes up to 100GB of traffic so that is pretty much guaranteed to cover me for this blog.&lt;/p>
&lt;p>I am using a ready-made theme for now while I get the hang of Hugo. I&amp;rsquo;m hoping to make my own theme at some point in the future.&lt;/p></description></item></channel></rss>