<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Mohsin's Blog]]></title><description><![CDATA[Mobile Engineer with an interest in Distributed Systems]]></description><link>https://blog.mohsin.xyz</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 14:38:37 GMT</lastBuildDate><atom:link href="https://blog.mohsin.xyz/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Protocol Nobody Uses]]></title><description><![CDATA[I was staring at a packet capture, trying to figure out why my script wasn't receiving audio data.
The device was clearly working. LED on. App showing audio levels. But when I subscribed to the audio characteristic—the standard approach for every BLE...]]></description><link>https://blog.mohsin.xyz/the-protocol-nobody-uses</link><guid isPermaLink="true">https://blog.mohsin.xyz/the-protocol-nobody-uses</guid><category><![CDATA[ble]]></category><category><![CDATA[Bluetooth Low Energy]]></category><category><![CDATA[networking]]></category><category><![CDATA[Wearable Technology]]></category><dc:creator><![CDATA[Mohammed Mohsin]]></dc:creator><pubDate>Fri, 02 Jan 2026 12:52:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767359505214/f9b1589a-2a52-4768-a71a-ebee2273cd52.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was staring at a packet capture, trying to figure out why my script wasn't receiving audio data.</p>
<p>The device was clearly working. LED on. App showing audio levels. But when I subscribed to the audio characteristic—the standard approach for every BLE audio device I'd ever worked with—nothing came through.</p>
<p>So, I opened Wireshark as usual and examined the raw packets.</p>
<p>The GATT traffic appeared normal. Service discovery, characteristic reads, and control commands were all being exchanged. Then I noticed something unusual: audio-sized packets were moving through a channel I didn't recognize.</p>
<p>CID <code>0x0041</code>. That's not GATT.</p>
<p>GATT runs on CID <code>0x0004</code>, a fixed channel for the ATT protocol. What I was looking at was a dynamic channel, an L2CAP Connection-Oriented Channel.</p>
<p>I'd read about this in the Bluetooth spec some time ago and promptly forgotten it, because I'd never seen it used in the real world.</p>
<p>Someone was actually doing it differently.</p>
<h2 id="heading-how-bluetooth-le-actually-works">How Bluetooth LE Actually Works</h2>
<p>Before I explain why this matters, let me walk you through how BLE moves data. If you've worked with BLE before, you probably interact with GATT: services, characteristics, notifications. But GATT is just the top layer of a stack, and understanding the layers below it explains why there's a better way.</p>
<h3 id="heading-the-stack">The Stack</h3>
<pre><code class="lang-plaintext">┌─────────────────────────────────────┐
│           Application               │  ← Your code
├─────────────────────────────────────┤
│      GATT (Generic Attribute)       │  ← Services &amp; characteristics
├─────────────────────────────────────┤
│      ATT (Attribute Protocol)       │  ← Read/write/notify operations
├─────────────────────────────────────┤
│             L2CAP                   │  ← Packet framing &amp; channels
├─────────────────────────────────────┤
│          Link Layer                 │  ← Radio packets
├─────────────────────────────────────┤
│         Physical Layer              │  ← 2.4 GHz radio
└─────────────────────────────────────┘
</code></pre>
<p><strong>Physical Layer</strong>: The actual radio, broadcasting at 2.4 GHz.</p>
<p><strong>Link Layer</strong>: Handles the raw radio packets. This is where Bluetooth defines how devices advertise, connect, and exchange data over the air. The original BLE spec (4.0) limited each packet to 27 bytes of payload. Bluetooth 4.2 introduced Data Length Extension (DLE), allowing up to 251 bytes.</p>
<p><strong>L2CAP (Logical Link Control and Adaptation Protocol)</strong>: Think of this as the postal service. It takes data from higher layers, adds a 4-byte header with length and channel ID, and hands it to the Link Layer. It can also segment large messages across multiple packets and reassemble them on the other side.</p>
<p><strong>ATT (Attribute Protocol)</strong>: Defines a simple database of "attributes"—small pieces of data identified by handles (like memory addresses). ATT provides operations to read, write, and get notifications about these attributes.</p>
<p><strong>GATT (Generic Attribute Profile)</strong>: Builds on ATT to create a hierarchical structure. Attributes are grouped into "characteristics" (a value plus metadata), which are grouped into "services" (a collection of related characteristics). This is the layer most developers interact with.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767355681085/9621ead7-9116-4b50-908a-1077c96478b9.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-how-data-flows-in-gatt">How Data Flows in GATT</h3>
<p>When your fitness tracker sends your heart rate to your phone, here's what happens:</p>
<ol>
<li><p>Your phone discovers the Heart Rate Service (UUID <code>0x180D</code>)</p>
</li>
<li><p>Inside that service, it finds the Heart Rate Measurement characteristic</p>
</li>
<li><p>Your phone enables notifications by writing to a special descriptor (CCCD)</p>
</li>
<li><p>The tracker sends notifications whenever your heart rate changes</p>
</li>
<li><p>Each notification travels: GATT → ATT → L2CAP → Link Layer → Radio → Phone</p>
</li>
</ol>
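<p>The subscription side of this flow can be sketched in Python with the <code>bleak</code> library. The payload parsing follows the standard Heart Rate Measurement format (a flags byte whose low bit selects a uint8 or uint16 value); the address, the lazy import, and the 30-second listening window are illustrative placeholders, not code for any specific device:</p>

```python
import asyncio

def parse_heart_rate(data: bytes) -> int:
    """Parse a Heart Rate Measurement (0x2A37) notification payload.

    Bit 0 of the flags byte selects the value format:
    0 means uint8 BPM, 1 means uint16 little-endian BPM.
    """
    flags = data[0]
    if flags & 0x01:
        return int.from_bytes(data[1:3], "little")
    return data[1]

async def watch_heart_rate(address: str) -> None:
    # Hypothetical wiring; `address` is a placeholder for a real device.
    from bleak import BleakClient  # third-party; imported lazily

    HRM_CHAR = "00002a37-0000-1000-8000-00805f9b34fb"

    def on_notify(_sender, data: bytearray) -> None:
        print(parse_heart_rate(bytes(data)), "bpm")

    async with BleakClient(address) as client:
        # start_notify writes the CCCD for you (step 3 above)
        await client.start_notify(HRM_CHAR, on_notify)
        await asyncio.sleep(30)  # receive notifications for a while
```

<p>Note that everything here still rides the single fixed ATT channel; <code>bleak</code> never exposes anything below it.</p>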
<p>The key point is that GATT and ATT always operate on a fixed L2CAP channel (CID <code>0x0004</code>). Every service, characteristic, and notification is sent through this single channel.</p>
<p>This works great for small, infrequent data like heart rate or temperature readings. It gets awkward for streams.</p>
<h2 id="heading-the-problem-with-gatt-for-streaming">The Problem with GATT for Streaming</h2>
<p>Let's do the math on what happens when you push audio through GATT.</p>
<h3 id="heading-packet-size-constraints">Packet Size Constraints</h3>
<p>The Link Layer defines how much data fits in a single over-the-air packet:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>BLE Version</td><td>Link Layer Payload</td><td>L2CAP Header</td><td>Available for ATT</td></tr>
</thead>
<tbody>
<tr>
<td>4.0 / 4.1</td><td>27 bytes</td><td>4 bytes</td><td>23 bytes</td></tr>
<tr>
<td>4.2+ (DLE)</td><td>251 bytes</td><td>4 bytes</td><td>247 bytes</td></tr>
</tbody>
</table>
</div><p>That 23-byte or 247-byte figure is your ATT_MTU, the maximum size of an ATT operation.</p>
<p>But wait, there's more overhead. A GATT notification needs:</p>
<ul>
<li><p>1 byte for the ATT opcode (notification = <code>0x1B</code>)</p>
</li>
<li><p>2 bytes for the attribute handle</p>
</li>
</ul>
<p>So your actual payload per notification is ATT_MTU minus 3:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Scenario</td><td>ATT_MTU</td><td>Notification Payload</td></tr>
</thead>
<tbody>
<tr>
<td>Default (no DLE)</td><td>23 bytes</td><td><strong>20 bytes</strong></td></tr>
<tr>
<td>With DLE</td><td>247 bytes</td><td><strong>244 bytes</strong></td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767355760176/84d0ad6f-bc91-463f-845e-9dfeffa354db.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-what-this-means-for-audio">What This Means for Audio</h3>
<p>Say you're streaming 16-bit audio at 16 kHz mono. That's 32 KB per second of raw PCM data.</p>
<p><strong>Without DLE (20-byte notifications)</strong>:</p>
<ul>
<li><p>32,000 ÷ 20 = 1,600 notifications per second</p>
</li>
<li><p>Each notification has 7 bytes of overhead (L2CAP + ATT headers)</p>
</li>
<li><p>You're sending 11,200 bytes of overhead per second just in headers</p>
</li>
</ul>
<p><strong>With DLE (244-byte notifications)</strong>:</p>
<ul>
<li><p>32,000 ÷ 244 = ~131 notifications per second</p>
</li>
<li><p>Much more reasonable, but still constrained</p>
</li>
</ul>
<p>Most devices use compressed audio (Opus, AAC) at lower bitrates, so the math isn't quite this brutal. But the fundamental problem remains.</p>
<h3 id="heading-the-bigger-problem-no-flow-control">The Bigger Problem: No Flow Control</h3>
<p>Here's what really hurts: GATT notifications are fire-and-forget.</p>
<p>The server sends a notification. The client either receives it or doesn't. There's no acknowledgment, no backpressure, no way for the client to say "slow down, I'm busy."</p>
<p>If the client's Bluetooth stack gets temporarily overwhelmed due to a CPU spike, garbage collection, or another app using Bluetooth, packets can just vanish. The server keeps sending without realizing, and your audio experiences glitches.</p>
<p>For a heart rate notification every second, this rarely matters. For 131 audio packets per second, it's a real problem.</p>
<h2 id="heading-the-alternative-l2cap-connection-oriented-channels">The Alternative: L2CAP Connection-Oriented Channels</h2>
<p>Remember that L2CAP layer sitting below ATT? It turns out you can use it directly, bypassing GATT entirely.</p>
<p>L2CAP Connection-Oriented Channels (CoC) were added in Bluetooth 4.1. Instead of shoving everything through the fixed ATT channel, you open a dedicated channel for your data stream.</p>
<h3 id="heading-how-it-works">How It Works</h3>
<pre><code class="lang-plaintext">Central (Phone)                      Peripheral (Device)
       │                                      │
       │══ LE Credit Based Connection ═══════►│
       │   PSM: 0x0080                        │
       │   MTU: 2048                          │
       │   MPS: 247                           │
       │   Initial Credits: 10                │
       │                                      │
       │◄═════ Connection Response ═══════════│
       │       Assigned CID: 0x0041           │
       │       Credits: 10                    │
       │                                      │
       │◄══════════ Data ════════════════════ │
       │◄══════════ Data ════════════════════ │
       │◄══════════ Data ════════════════════ │
       │                                      │
       │═══════ More Credits ═══════════════► │
       │           (flow control)             │
       │                                      │
</code></pre>
<p>Let me break down the terminology:</p>
<p><strong>PSM (Protocol/Service Multiplexer)</strong>: Like a port number in TCP. Identifies what protocol or service this channel is for. Values <code>0x0001</code>–<code>0x007F</code> are reserved by the Bluetooth SIG. Values <code>0x0080</code>–<code>0x00FF</code> are for custom applications.</p>
<p><strong>CID (Channel Identifier)</strong>: A unique ID for this specific channel on this specific connection. Dynamic channels use CIDs from <code>0x0040</code> to <code>0x007F</code>.</p>
<p><strong>MTU (Maximum Transmission Unit)</strong>: The largest "message" (SDU: Service Data Unit) you can send. The spec allows up to 65,535 bytes, though memory constraints usually limit this to a few KB.</p>
<p><strong>MPS (Maximum PDU Size)</strong>: The largest single packet (PDU: Protocol Data Unit) on this channel. L2CAP will automatically segment larger SDUs into MPS-sized chunks.</p>
<p><strong>Credits</strong>: Here's the magic. Each credit allows the sender to transmit one PDU. When you run out of credits, you stop sending. The receiver grants more credits when it's ready for more data.</p>
<h3 id="heading-credit-based-flow-control">Credit-Based Flow Control</h3>
<p>This is the key difference from GATT.</p>
<pre><code class="lang-plaintext">Initial state:
  Device has 10 credits from Phone

Device sends audio packet #1 → Credits remaining: 9
Device sends audio packet #2 → Credits remaining: 8
Device sends audio packet #3 → Credits remaining: 7
...
Device sends audio packet #10 → Credits remaining: 0

Device must wait...

Phone finishes processing, sends 8 more credits
Device receives credits → Credits available: 8
Device resumes sending
</code></pre>
<p>If the phone's app is busy, it doesn't grant more credits. The device waits instead of flooding packets into the void. When the phone catches up, it grants credits and data flows again.</p>
<p>No dropped packets. No glitches from buffer overflow. The sender always knows the receiver is ready.</p>
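<p>The bookkeeping is simple enough to model in a few lines. This is a toy illustration of the credit mechanism, not a real L2CAP implementation:</p>

```python
class CreditedSender:
    """Toy model of L2CAP CoC credit-based flow control (illustrative only)."""

    def __init__(self, initial_credits: int) -> None:
        self.credits = initial_credits
        self.sent = 0

    def try_send(self) -> bool:
        """Send one PDU if a credit is available; otherwise stall."""
        if self.credits == 0:
            return False
        self.credits -= 1
        self.sent += 1
        return True

    def grant(self, n: int) -> None:
        """Receiver hands back n credits once it has buffer space again."""
        self.credits += n

sender = CreditedSender(initial_credits=10)
while sender.try_send():
    pass                # sends 10 PDUs, then stalls at 0 credits
sender.grant(8)         # receiver caught up
while sender.try_send():
    pass                # 8 more PDUs
print(sender.sent)      # 18
```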
<h2 id="heading-side-by-side-comparison">Side-by-Side Comparison</h2>
<p>Let me put the two approaches next to each other:</p>
<h3 id="heading-gatt-notification-with-dle">GATT Notification (with DLE)</h3>
<pre><code class="lang-plaintext">┌─────────────────┬────────────┬──────────────────────┐
│ L2CAP Header    │ ATT Header │ Payload              │
│ (4 bytes)       │ (3 bytes)  │ (up to 244 bytes)    │
└─────────────────┴────────────┴──────────────────────┘

Channel: Fixed (CID 0x0004)
Max payload: 244 bytes per notification
Max characteristic value: 512 bytes total
Flow control: None
Discovery required: Yes (services, characteristics, CCCD)
</code></pre>
<h3 id="heading-l2cap-coc-k-frame-with-dle">L2CAP CoC K-frame (with DLE)</h3>
<pre><code class="lang-plaintext">┌─────────────────┬──────────────────────────────────┐
│ L2CAP Header    │ Payload                          │
│ (4 bytes)       │ (up to 247 bytes)                │
└─────────────────┴──────────────────────────────────┘
(First frame of SDU includes 2-byte SDU length field)

Channel: Dynamic (CID 0x0040–0x007F)  
Max payload: 247 bytes per PDU, up to 65,535 bytes per SDU
Flow control: Credit-based
Discovery required: No (just need to know the PSM)
</code></pre>
<p>The per-packet efficiency is similar; both are around 97% payload. The wins for CoC are:</p>
<ol>
<li><p>Flow control prevents data loss under load</p>
</li>
<li><p>Larger logical units (64 KB SDU vs 512-byte characteristic)</p>
</li>
<li><p>No GATT overhead (service discovery, handles, CCCDs)</p>
</li>
<li><p>Symmetric bidirectional (both sides equally efficient)</p>
</li>
</ol>
<h2 id="heading-the-ecosystem-gap">The Ecosystem Gap</h2>
<p>When I tried to work with L2CAP CoC from my usual tools, I ran into a wall.</p>
<pre><code class="lang-python"><span class="hljs-comment"># bleak - the standard Python BLE library</span>
<span class="hljs-comment"># L2CAP CoC support: None</span>
<span class="hljs-comment"># GitHub issue #598: closed as wontfix</span>
</code></pre>
<p>Bleak is a GATT client. It doesn't expose L2CAP directly, and the maintainers have decided that's outside scope.</p>
<p>The pattern repeats across cross-platform tools:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Platform / Library</td><td>GATT</td><td>L2CAP CoC</td></tr>
</thead>
<tbody>
<tr>
<td>iOS (CoreBluetooth)</td><td>✅</td><td>✅ (since iOS 11)</td></tr>
<tr>
<td>Android</td><td>✅</td><td>✅ (since API 29)</td></tr>
<tr>
<td>Web Bluetooth</td><td>✅</td><td>❌</td></tr>
<tr>
<td>Python (bleak)</td><td>✅</td><td>❌</td></tr>
<tr>
<td>Flutter (flutter_blue_plus)</td><td>✅</td><td>❌</td></tr>
<tr>
<td>React Native (ble-plx)</td><td>✅</td><td>Native modules only</td></tr>
</tbody>
</table>
</div><p>The native SDKs support it. The cross-platform libraries don't. If you're building with Flutter or React Native, which many teams choose for faster iteration, you'd need to drop into Swift and Kotlin separately.</p>
<p>That's not a dealbreaker for everyone, but it explains why most tutorials, Stack Overflow answers, and sample code stick to GATT.</p>
<h3 id="heading-the-documentation-gap">The Documentation Gap</h3>
<p>Search "BLE GATT tutorial" and you'll find hundreds of results. Nordic, TI, Espressif, Silicon Labs—every chip vendor publishes getting-started guides with working sample projects.</p>
<p>Search "BLE L2CAP CoC tutorial" and you get the Bluetooth Core Specification and a handful of sparse API references.</p>
<p>When I needed to understand the credit-based flow control details, I ended up reading the spec. It's not that the information doesn't exist; it's that nobody has packaged it into the kind of step-by-step guides that make GATT feel approachable.</p>
<h2 id="heading-when-coc-makes-sense">When CoC Makes Sense</h2>
<p>Despite the tooling gap, L2CAP CoC is worth considering for specific use cases:</p>
<p><strong>Streaming where reliability matters</strong>: If dropped packets mean audible glitches or corrupted data, credit-based flow control prevents the silent failures that GATT notifications allow.</p>
<p><strong>Bulk transfers</strong>: Firmware updates, file sync, log downloads. The 512-byte characteristic limit in GATT requires chunking logic. CoC can send larger SDUs natively.</p>
<p><strong>Controlled ecosystems</strong>: If you build both the firmware and the app—and don't need third-party integrations—the compatibility concerns shrink. You're writing native code anyway.</p>
<p><strong>Bidirectional real-time data</strong>: Control systems where both directions need equal efficiency and guaranteed delivery.</p>
<p>A clean architecture separates concerns:</p>
<pre><code class="lang-plaintext">GATT (control plane)
├── Device configuration
├── Status queries  
├── PSM advertisement
└── Standard services (Device Info, Battery)

L2CAP CoC (data plane)
├── Audio streaming
├── File transfer
└── High-frequency sensor data
</code></pre>
<p>Use the database for database things. Use the pipe for pipe things.</p>
<h2 id="heading-le-audio-changes-the-equation">LE Audio Changes the Equation</h2>
<p>Bluetooth LE Audio is now shipping on recent devices: AirPods Pro (2nd gen), iPhone 14 and later, the Samsung Galaxy S23 series, and Pixel 7 and up. The LC3 codec and isochronous channels provide a standardized path for audio streaming that's built into the spec.</p>
<p>For new products targeting current hardware, LE Audio is the right answer. It handles the codec, the transport, and the synchronization. You don't need to roll your own.</p>
<p>But LE Audio requires Bluetooth 5.2+ hardware on both ends. The installed base of older devices—phones from 2020, fitness trackers, smart home gadgets—won't get LE Audio support through software updates. That tail is long.</p>
<p>If you're building for the current generation, use LE Audio. If you need to support older devices, or you're working on something LE Audio doesn't cover (non-audio bulk data, custom protocols), L2CAP CoC remains the better-than-GATT option that most developers don't know exists.</p>
<h2 id="heading-quick-reference">Quick Reference</h2>
<h3 id="heading-key-numbers">Key Numbers</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Parameter</td><td>Default (BLE 4.0/4.1)</td><td>With DLE (BLE 4.2+)</td></tr>
</thead>
<tbody>
<tr>
<td>Link Layer payload</td><td>27 bytes</td><td>251 bytes</td></tr>
<tr>
<td>L2CAP header</td><td>4 bytes</td><td>4 bytes</td></tr>
<tr>
<td>ATT_MTU</td><td>23 bytes</td><td>247 bytes (optimal)</td></tr>
<tr>
<td>GATT notification payload</td><td>20 bytes</td><td>244 bytes</td></tr>
<tr>
<td>Max characteristic value</td><td>512 bytes</td><td>512 bytes</td></tr>
<tr>
<td>L2CAP CoC SDU max</td><td>65,535 bytes</td><td>65,535 bytes</td></tr>
</tbody>
</table>
</div><h3 id="heading-platform-support">Platform Support</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Platform</td><td>GATT</td><td>L2CAP CoC</td><td>Since</td></tr>
</thead>
<tbody>
<tr>
<td>iOS (CoreBluetooth)</td><td>✅</td><td>✅</td><td>iOS 11 (2017)</td></tr>
<tr>
<td>Android</td><td>✅</td><td>✅</td><td>API 29 (2019)</td></tr>
<tr>
<td>Web Bluetooth</td><td>✅</td><td>❌</td><td>—</td></tr>
<tr>
<td>Python (bleak)</td><td>✅</td><td>❌</td><td>—</td></tr>
<tr>
<td>Flutter</td><td>✅</td><td>⚠️ Native only</td><td>—</td></tr>
<tr>
<td>React Native</td><td>✅</td><td>⚠️ Native only</td><td>—</td></tr>
</tbody>
</table>
</div><h3 id="heading-l2cap-coc-terminology">L2CAP CoC Terminology</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Term</td><td>Meaning</td></tr>
</thead>
<tbody>
<tr>
<td>PSM</td><td>Protocol/Service Multiplexer - like a port number</td></tr>
<tr>
<td>CID</td><td>Channel Identifier - unique ID for this channel</td></tr>
<tr>
<td>MTU</td><td>Maximum SDU size (logical message)</td></tr>
<tr>
<td>MPS</td><td>Maximum PDU size (single packet)</td></tr>
<tr>
<td>SDU</td><td>Service Data Unit - your actual data</td></tr>
<tr>
<td>PDU</td><td>Protocol Data Unit - one L2CAP packet</td></tr>
<tr>
<td>Credits</td><td>Flow control tokens - one credit = one PDU allowed</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[The Pendant That Refused to Die]]></title><description><![CDATA[I paid 300 dollars for a paperweight.
Well, not intentionally.
A few days before the Meta acquisition announcement, I ordered a Limitless Pendant. Payment processed. Shipping confirmed. Then, before it ever arrived, everything changed. The company an...]]></description><link>https://blog.mohsin.xyz/the-pendant-that-refused-to-die</link><guid isPermaLink="true">https://blog.mohsin.xyz/the-pendant-that-refused-to-die</guid><category><![CDATA[Limitless ]]></category><category><![CDATA[Wearable Technology]]></category><category><![CDATA[Omi AI]]></category><dc:creator><![CDATA[Mohammed Mohsin]]></dc:creator><pubDate>Mon, 22 Dec 2025 15:01:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766412674783/77d5b73f-25e7-498f-b099-585323af6e30.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I paid 300 dollars for a paperweight.</p>
<p>Well, not intentionally.</p>
<p>A few days before the Meta acquisition announcement, I ordered a Limitless Pendant. Payment processed. Shipping confirmed. Then, before it ever arrived, everything changed. The company announced geoblocking in multiple countries, major compliance changes, a halt on hardware sales, and a shutdown timeline for the app.</p>
<p>My pendant was still somewhere over the Atlantic, already obsolete.</p>
<p>For most people, that would have been the end of the story. Eat the cost. Move on. Maybe write an angry tweet.</p>
<p>But I work at Omi. We had been adding support for every major wearable audio device on the market. Limitless was the last holdout. Overnight, thousands of users in geoblocked countries were searching for alternatives. Many did not want their voice data feeding a new ecosystem. Many already owned hardware they trusted and did not want to throw away.</p>
<p>If I could crack the protocol, those users would get a new home. And my not yet arrived paperweight would get a second life.</p>
<p>There was just one problem.</p>
<p>I did not have the device.</p>
<h2 id="heading-working-blind">Working Blind</h2>
<p>What I did have was a friend in London.</p>
<p>He owned a Limitless Pendant, still had full access to the official app, and happened to be on a business trip. When this all started, he was in the back of an Uber, on the way to the airport, heading back to the United States.</p>
<p>When I asked if he could run a few scripts, he did not hesitate. He opened his laptop in the car.</p>
<p>Over the next 24 hours, he became my Bluetooth lab, my QA team, and my reality check. Everything I learned came through him. I would write scripts, send them over, he would run them, and I would stare at logs trying to reconstruct what was happening five thousand miles away.</p>
<p>I had reverse engineered enough BLE audio devices to know where to start. I wrote a small Python script using <code>bleak</code> to scan everything the pendant advertised: services, characteristics, properties. He ran it from the Uber.</p>
<p>The logs came back clean.</p>
<pre><code class="lang-text">Service: 632de001-604c-446b-a80f-7963e950f3fb
  Characteristic: 632de002-604c-446b-a80f-7963e950f3fb
    Properties: ['write', 'write-without-response']
  Characteristic: 632de003-604c-446b-a80f-7963e950f3fb
    Properties: ['notify']
</code></pre>
<p>Three sequential UUIDs. Classic BLE architecture. <code>...02</code> for sending commands to the device. <code>...03</code> for receiving data back. Nothing exotic. This was workable.</p>
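<p>A discovery script along those lines takes only a few lines of <code>bleak</code>. This is a hedged reconstruction, not the exact script from that day; the commented-out address is a placeholder:</p>

```python
import asyncio

async def dump_gatt(address: str) -> None:
    """Connect and print every service and characteristic the device exposes."""
    from bleak import BleakClient  # third-party; pip install bleak

    async with BleakClient(address) as client:
        for service in client.services:
            print(f"Service: {service.uuid}")
            for char in service.characteristics:
                print(f"  Characteristic: {char.uuid}")
                print(f"    Properties: {char.properties}")

# asyncio.run(dump_gatt("AA:BB:CC:DD:EE:FF"))  # placeholder address
```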
<p>I wrote another script to connect and capture every packet the pendant emitted. Between traffic lights, airport Wi-Fi, and boarding announcements, he kept running commands. After a lot of back and forth, reconnects, Bluetooth restarts, and button presses, we got stable connections.</p>
<p>But no audio.</p>
<p>The pendant just sat there. LED dark. Completely silent.</p>
<h2 id="heading-the-missing-piece">The Missing Piece</h2>
<p>I tried everything. Different command sequences. Different timing. Different connection orders. Nothing worked.</p>
<p>So I stopped trying to talk to the pendant and started listening to what the official app was saying to it.</p>
<p>I walked my friend through enabling developer options on his Android phone. He used the pendant normally while capturing HCI logs, the raw Bluetooth traffic between phone and device. He did this several times, restarting between runs.</p>
<p>By then, he was at the airport.</p>
<p>By the time his flight boarded, I had a stack of packet captures in my inbox.</p>
<p>I loaded them into Wireshark and compared sessions side by side. Hex dumps blur together after a while. Then something stood out.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766074620339/ad1c42fc-427d-4e20-bd23-e23e4ee04f62.png" alt class="image--center mx-auto" /></p>
<p>One packet appeared in every capture. Same structure. Same position in the handshake. Every time.</p>
<pre><code class="lang-plaintext">32 07 08 c1 97 c6 c2 af 33
</code></pre>
<p>I did not immediately recognize the format. I dumped a batch of packets into Claude and asked what it could identify.</p>
<p>Those look like protobuf field tags.</p>
<p>Of course.</p>
<p>I decoded it by hand.</p>
<ul>
<li><p><code>32</code> means field 6, length delimited</p>
</li>
<li><p><code>07</code> means seven bytes follow</p>
</li>
<li><p><code>08</code> means nested field 1, varint</p>
</li>
<li><p><code>c1 97 c6 c2 af 33</code> is a varint encoded value</p>
</li>
</ul>
<p>The decoded value was <strong>1765102684161</strong>.</p>
<p>A Unix timestamp in milliseconds. December 7, 2025.</p>
<p>The app was telling the pendant what time it was.</p>
<p>That was the missing piece.</p>
<p>The pendant timestamps all audio internally. Without knowing the current time, it literally cannot record. It was not broken. It was not locked down. It was waiting.</p>
<p>I wrote the encoder in about thirty seconds.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">encode_set_current_time</span>(<span class="hljs-params">timestamp_ms: int</span>) -&gt; bytes:</span>
    time_varint = bytes([<span class="hljs-number">0x08</span>]) + encode_varint(timestamp_ms)
    <span class="hljs-keyword">return</span> bytes([<span class="hljs-number">0x32</span>, len(time_varint)]) + time_varint
</code></pre>
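<p>The <code>encode_varint</code> helper it relies on is the standard protobuf varint: little-endian base-128, with the high bit of each byte marking continuation. A minimal version, plus a matching decoder for checking values pulled from captures:</p>

```python
def encode_varint(value: int) -> bytes:
    """Protobuf varint: 7 bits per byte, least significant group first."""
    out = bytearray()
    while True:
        b = value & 0x7F
        value >>= 7
        if value:
            out.append(b | 0x80)  # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Inverse of encode_varint; stops at the first byte without the high bit."""
    result = 0
    for i, b in enumerate(data):
        result += (b & 0x7F) * (128 ** i)
        if not b & 0x80:
            break
    return result
```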
<p>I sent the updated script.</p>
<p>By then, he was taxiing for takeoff.</p>
<p>He ran it again after reconnecting.</p>
<p>Then he pressed the button.</p>
<p>The LED turned on.</p>
<p>For the first time, without the official app, the device was recording.</p>
<h2 id="heading-cracking-the-stream">Cracking the Stream</h2>
<p>Recording and getting usable audio are different problems.</p>
<p>Packets were flowing now, even as his plane climbed out of London. The data looked like noise. I needed to identify the codec.</p>
<p>I made the obvious guess first. Opus at 16 kHz. Almost everyone uses it for voice.</p>
<p>Sure enough, certain bytes repeated at consistent offsets: <code>0xb8</code>, <code>0x78</code>, <code>0xf8</code>.</p>
<p>I pulled up RFC 6716, the Opus specification. These were TOC bytes. Table of Contents. They encode frame configuration, mono or stereo, and frame count. <code>0xb8</code> decodes to config 23, mono, single frame. Exactly what you would expect from a wearable microphone.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766075234122/8231ddab-1671-482b-b6a7-d13db0919cb5.png" alt class="image--center mx-auto" /></p>
<p>The pendant was encoding 20 ms Opus frames and wrapping each one in a protobuf message.</p>
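<p>Decoding a TOC byte is just bit masking. A small sketch following the layout in RFC 6716, section 3.1:</p>

```python
def parse_opus_toc(toc: int) -> dict:
    """Unpack an Opus TOC (Table of Contents) byte per RFC 6716."""
    return {
        "config": toc >> 3,              # 0-31: mode, bandwidth, frame duration
        "stereo": bool(toc & 0x04),      # channel flag
        "frame_count_code": toc & 0x03,  # 0 means one frame per packet
    }

print(parse_opus_toc(0xB8))  # the byte seen repeating in the stream
```

<p>For <code>0xB8</code> this yields config 23, mono, one frame per packet.</p>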
<p>I extracted a few dozen frames, stitched them together, wrote them to an Ogg file, and hit play.</p>
<p>My friend’s voice came through the speakers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766415497192/6deffe68-285b-4db9-9844-418adba2773a.png" alt class="image--center mx-auto" /></p>
<p>Spectrogram of the first successful extraction. That's speech.</p>
<p>A little crackly at the transitions, but unmistakably real. Recorded on a device I had never touched, using a protocol I had learned existed only hours earlier.</p>
<h2 id="heading-ship-it">Ship It</h2>
<p>For anyone keeping score, the full stack looked like this.</p>
<pre><code class="lang-text">┌─────────────────────────────────────────────┐
│ Application Layer (Opus audio frames)       │
├─────────────────────────────────────────────┤
│ Message Layer (Protobuf fields)             │
├─────────────────────────────────────────────┤
│ Fragment Layer (sequence and count)         │
├─────────────────────────────────────────────┤
│ BLE GATT (TX: ...02 / RX: ...03)            │
└─────────────────────────────────────────────┘
</code></pre>
<p>Once the Python proof of concept worked, I ported everything to Dart for our Flutter app. I pushed a build to TestFlight and sent it to a handful of Limitless users who had reached out.</p>
<p>It works.</p>
<p>My recordings show up.</p>
<p>I can export them.</p>
<p>It is real.</p>
<p>By the time my collaborator landed in the United States, the protocol was cracked.</p>
<p>We pushed it live to the App Store the next day.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766416948178/bc84a9c3-9a5b-42a0-a7e4-bb4d17a5a9c0.png" alt class="image--center mx-auto" /></p>
<p>Omi traffic after Limitless support went live.</p>
<p>No encryption was bypassed. No proprietary code was decompiled. The protocol uses standard BLE, standard Protobuf, and standard Opus. All open specifications.</p>
<p>This was pattern recognition, persistence, and one very patient collaborator with a device I could not touch.</p>
<p>My pendant finally arrived three days later.</p>
<p>By then, it already worked with Omi.</p>
<p>Screenshots and code samples have been simplified for clarity. The actual implementation is in <a target="_blank" href="https://github.com/BasedHardware/omi/pull/3641">the PR.</a></p>
]]></content:encoded></item><item><title><![CDATA[How I got into GSoC 2024]]></title><description><![CDATA[What is GSoC?
If you are reading this, then chances are you already know what Google Summer of Code is and how it works. You might want to check out their official site for the perfect explanation if you don't. In brief, GSoC is an open-source progra...]]></description><link>https://blog.mohsin.xyz/how-i-got-into-gsoc-2024</link><guid isPermaLink="true">https://blog.mohsin.xyz/how-i-got-into-gsoc-2024</guid><category><![CDATA[gsoc]]></category><category><![CDATA[gsoc2024]]></category><category><![CDATA[Google summer of code]]></category><dc:creator><![CDATA[Mohammed Mohsin]]></dc:creator><pubDate>Tue, 31 Dec 2024 07:22:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735576551038/6d087f4d-80ec-4c32-ba4d-d3bd249b53da.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-gsoc"><strong>What is GSoC?</strong></h2>
<p>If you are reading this, then chances are you already know what Google Summer of Code is and how it works. You might want to check out their <a target="_blank" href="https://summerofcode.withgoogle.com/">official site</a> for the perfect explanation if you don't. In brief, GSoC is an open-source program powered by Google that connects organizations to students and professionals to work on their projects over the summer. This is an amazing opportunity for students as they are exposed to real-world code and also get first-hand experience working on production-level projects. And for working professionals, it is an opportunity to dive into the world of open source by making meaningful contributions to open-source projects.</p>
<p>GSoC is not an internship but more of a program to bring people into open-source development. You are, of course, paid decently for your work, which is an additional perk. The stipend varies from country to country and is listed on their website. There are three types of projects - small (~90 hours), medium (~175 hours), and large (~350 hours), categorized based on the time span contributors need to dedicate to that project.</p>
<h2 id="heading-the-journey"><strong>The Journey</strong></h2>
<p>Back in January 2024, I was preparing for Summer of Bitcoin. I didn’t know that GSoC was now open to everyone (no longer restricted to just students), and since I wasn’t planning to do my master's either, I hadn’t thought about it at all. This was my last chance for Summer of Bitcoin, and I really wanted to be a part of it.</p>
<p>Fast forward to March, and I was trying to work on the Summer of Bitcoin assessment that had to be done as part of the proposal submission. I came across a post about GSoC, looked it up, and thought, why not give it a chance one last time? I had a few contacts who had been part of GSoC in the past, so I reached out to them with a few questions. It was around March 20, and there were only about 12 days left to submit the proposal. The SoB deadline was also approaching, but the assessment wasn’t easy, so I decided to do it later (spoiler: I did not do it at all).</p>
<p>I knew by now that many people would have contributed and become comfortable with the projects and the community. I did not have any advantage over them (except that maybe I had some industry experience). I was planning to apply for projects under the CCExtractor organization, so I completed their required introductory tasks like <a target="_blank" href="https://github.com/mdmohsin7/myfitnesspal-grafana">creating a Grafana dashboard for MyFitnessPal data</a> and creating a <a target="_blank" href="https://github.com/CCExtractor/ccextractorfluttergui/pull/65">macOS release for CCExtractor</a>. But I still was not confident that I would make the cut, so I started looking for other organizations and came across <a target="_blank" href="https://aossie.org/">AOSSIE</a>.</p>
<p>AOSSIE had many projects, but one in particular caught my interest: nobody had been able to run it correctly because the codebase was very old, and it was a native Android app with an embedded Flutter module. That project was <a target="_blank" href="https://github.com/AOSSIE-Org/Monumento">Monumento</a>. I spent the next 2-3 days migrating it to a complete Flutter app, fixing many native Android issues as well as the AR functionality. In about 3 days, I had the app fully functional with my own Firebase project. An organization admin then gave me access to their Firebase account so I could configure the project to use it, with all the existing data already in place.</p>
<p>The deadline was approaching, and I had about 3-4 days to create a proposal, get it reviewed by a mentor, and submit it. I spent an entire day going through many old proposals to understand how everyone did it and what makes a proposal stand out. The very next day, I started working on my first draft. I sent it to my project mentor, whose main suggestion was to propose some new features alongside the UI redesign. I quickly incorporated his feedback, spent some time creating MVPs of the features I was planning, and nearly finished the UI design in Figma.</p>
<p>I spent quite a lot of time over the last 2-3 days making my proposal stand out from the crowd. I submitted it on the deadline day, waited for a month, and when the results came, I was ready to accept my rejection (like in previous years), but this time it was a selection. It was one of the happiest days for me.</p>
<h2 id="heading-everyones-journey-is-different"><strong>Everyone’s Journey is Different</strong></h2>
<p>I've read many blogs where people share their GSoC selection stories, and most of them mention starting their preparation in December of the previous year. While that's the ideal way to begin, don't lose hope if you're starting a bit late. This was the only reason for me to write this blog and share my story. Remember, you haven't lost until the results are announced. If you quit early, you've lost before the results even come out. Believe in yourself, and you’ll succeed! :)</p>
<h2 id="heading-some-tips"><strong>Some Tips</strong></h2>
<ul>
<li><p>Don't do it just for the money :)</p>
</li>
<li><p>Your proposal determines your outcome, so make it as detailed as possible and get it reviewed early.</p>
</li>
<li><p>It's never too early or too late to start. You can even begin just a day before the deadline (but please don't).</p>
</li>
<li><p>Don't focus on too many projects. Limit yourself to a maximum of 2-3 projects so you can dedicate time to each.</p>
</li>
<li><p>Quality over quantity. Focus on making meaningful contributions.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Building Scalable Apps for Low Connectivity Areas]]></title><description><![CDATA[Introduction
The majority of Cattleguru's audience resides in villages and rural areas in Northern India. While all major cities already have 5G and many tier 2 cities are on 4G, rural areas still lag behind. Creating an e-commerce app for these regi...]]></description><link>https://blog.mohsin.xyz/building-apps-for-low-connectivity-areas</link><guid isPermaLink="true">https://blog.mohsin.xyz/building-apps-for-low-connectivity-areas</guid><category><![CDATA[Flutter]]></category><category><![CDATA[Redis]]></category><category><![CDATA[caching strategies]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Dart]]></category><category><![CDATA[Startups]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mohammed Mohsin]]></dc:creator><pubDate>Fri, 14 Jun 2024 16:18:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1718362852941/1cc95c52-74dd-4cf9-bb20-ca25c02d8396.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>The majority of Cattleguru's audience resides in villages and rural areas in Northern India. While all major cities already have 5G and many tier 2 cities are on 4G, rural areas still lag behind. Creating an e-commerce app for these regions is quite challenging due to limited connectivity. Additionally, people often avoid updating the app because of limited internet data. This makes it difficult to build online-only apps. Furthermore, the loading time for fetching data from the backend and displaying it in the app can be significant, sometimes causing users to wait more than 10-15 seconds to perform any operation.</p>
<h2 id="heading-background">Background</h2>
<p>We needed a solution to:</p>
<ul>
<li><p>Reduce latency for all sorts of operations originating from our Apps</p>
</li>
<li><p>Keep App Data in Sync with Remote Database</p>
</li>
</ul>
<p>Initially, we didn't use any specific solution; the app simply fetched data from Firestore and displayed it. Given that Firestore caches data locally, we didn't consider alternatives until we observed real-world app usage. We noticed that users weren't using the app frequently; instead, they placed orders through our salesmen. This was because the app often took anywhere between 20 and 30 seconds to successfully place an order, so they found it easier to just call the salesmen and place the order. As we expanded into more remote villages, our delivery partners couldn't effectively use the internal team app to deliver orders due to low connectivity there. Therefore, we needed a more robust and efficient solution to improve operations.</p>
<h2 id="heading-approaches">Approaches</h2>
<p>There are various approaches we could have taken to resolve our issues. One was using CRDTs (Conflict-free Replicated Data Types). This would have made sense if our apps were write-heavy and collaborative. While we considered it initially, we discarded it because our apps are mostly read-heavy, and only the internal team app is somewhat write-heavy. You might wonder why we didn't use CRDTs for the team app since it is write-heavy. Although it is both read- and write-heavy, other approaches carry less overhead than CRDTs and make more sense for us, because we wouldn't fully utilize CRDTs even if we implemented them.</p>
<p>Another approach was to make our apps offline-first with sync services like <a target="_blank" href="https://electric-sql.com/">Electric SQL</a> or <a target="_blank" href="https://www.powersync.com/">PowerSync</a>. However, the problem here is that we don't use Postgres (yet), so migrating from Firestore to PostgreSQL would have been a time-consuming task. Additionally, Electric SQL is not yet stable and has many limitations at present, while PowerSync isn't open source yet.</p>
<p>Another approach is to cache whatever we can and move all the time-consuming operations to the cloud (backend). We chose this approach. After quite some research, we decided to follow a three-layered caching strategy coupled with feature flags and message queues for our systems.</p>
<h2 id="heading-the-chosen-approach">The Chosen Approach</h2>
<p>If you want to reduce the latency of fetching frequently accessed and infrequently changed data, caching is one of the best solutions with very little overhead. That's exactly what we did by adding caching at various levels. We have a client-side database, an in-memory (non-persistent) database at the server level, and an adjacent Redis instance. A call is made to Firestore only if the data does not exist at any of the three preceding levels.</p>
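<p>As a rough sketch (not our production code), the read path through the layers looks something like this; plain dicts stand in for the real stores, and <code>fetch_from_firestore</code> is a hypothetical loader:</p>

```python
class LayeredCache:
    """Read-through cache: client store -> server in-memory -> Redis -> Firestore.

    The three dicts are stand-ins for the real stores (in practice the
    client database lives on the device and the other two on the server);
    `fetch_from_firestore` is a hypothetical fallback loader.
    """

    def __init__(self, fetch_from_firestore):
        self.client_db = {}   # on-device database
        self.in_memory = {}   # process-local cache on the server
        self.redis = {}       # adjacent Redis instance
        self.fetch_from_firestore = fetch_from_firestore

    def get(self, key):
        # Walk the layers cheapest-first; only a miss at all three
        # levels results in a Firestore call.
        for layer in (self.client_db, self.in_memory, self.redis):
            if key in layer:
                return layer[key]
        value = self.fetch_from_firestore(key)
        # Backfill every layer so subsequent reads are served closer
        # to the client.
        self.client_db[key] = self.in_memory[key] = self.redis[key] = value
        return value
```

<p>Backfilling each layer on a miss means the second read of the same key is served from the client-side database without touching the network at all.</p>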
<p>Our SKUs don't change frequently, and if there is a change, it's mostly in the price. So, it made sense for us to cache the product data on the client side for the long term. Whenever the app opens, it makes one API call to fetch the last cache update time and feature flags update time, and compares them with the values stored locally. To minimize the amount of data sent to the client, we only return the cache keys and feature flag keys that have changed. This might not seem like much, but it helps save a few bytes of data.</p>
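<p>A minimal sketch of the "return only changed keys" idea, assuming the backend keeps a per-key last-updated timestamp (the function and field names here are illustrative, not our actual API):</p>

```python
def changed_keys(server_updated_at, client_synced_at):
    """Return only the keys whose server-side update time is newer
    than the client's last sync time, so the response carries a
    handful of key names instead of the full payload.

    server_updated_at: {key: unix timestamp of last change} on the backend
    client_synced_at:  unix timestamp the app sends from its local store
    """
    return sorted(
        key for key, ts in server_updated_at.items()
        if ts > client_synced_at
    )
```

<p>The app then re-fetches just those keys (and the corresponding feature flags) instead of the whole catalog.</p>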
<p>When it comes to our internal team app, the order data changes every time a new order is received. Sometimes we don't receive orders for hours. Therefore, it doesn't make sense to have a fixed expiry for the cache while updating it at specific intervals, especially since we don't receive orders between 1 AM and 6 AM at all. Instead, we decided to use a cache with no expiry. Rather than updating both the in-memory cache and Redis with every new order, we opted to use Cron Jobs to update them at regular intervals. The Cron Jobs are scheduled to run only during the working hours of our teams. For those curious, we use <a target="_blank" href="https://upstash.com/">Upstash</a> Redis and QStash for this purpose.</p>
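<p>Conceptually, the scheduled refresh reduces to a guard like the one below; the quiet window and the <code>rebuild</code> callable are placeholders for our actual QStash schedule and cache-rebuild logic:</p>

```python
from datetime import datetime, time

# No orders arrive between 1 AM and 6 AM, so skip refreshes in that window.
QUIET_START, QUIET_END = time(1, 0), time(6, 0)

def should_refresh(now: datetime) -> bool:
    """True outside the quiet window, i.e. during working hours."""
    return not (QUIET_START <= now.time() < QUIET_END)

def refresh_order_cache(now: datetime, rebuild) -> bool:
    # `rebuild` repopulates the in-memory cache and Redis from the main
    # database; since the cache has no expiry, it only changes when this
    # job actually runs.
    if should_refresh(now):
        rebuild()
        return True
    return False
```

<p>Because the cache never expires on its own, stale data can only persist until the next scheduled run inside working hours, which matches how often order data can actually change.</p>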
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718363918368/8b258f1c-a1c9-4425-8277-41734720f80d.png" alt class="image--center mx-auto" /></p>
<p>To minimize the amount of data written by every write operation of the team app, we redesigned the feature flows to send as little data as possible while still capturing everything we need. Since we already know who our delivery partners are and which customers placed the orders, we eliminated all unnecessary fields being transferred to the backend, using only one field as a strong identifier of the user.</p>
<p>One very common problem with having caches at multiple levels is keeping all of them in sync with the main database. This is a big issue if the data changes frequently. But since the data cached locally on the client side doesn't change that often, it isn't a big issue for us. We use flags to keep track of what data was changed and when, which helps us keep the client and the backend in sync.</p>
<p>The table below shows how much the response time has improved, thanks to the approaches we implemented.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Metric</td><td>Response Time Now (ms)</td><td>Response Time Before (ms)</td></tr>
</thead>
<tbody>
<tr>
<td>MAX</td><td>5000</td><td>30000</td></tr>
<tr>
<td>99TH PERC</td><td>4975</td><td>29600</td></tr>
<tr>
<td>90TH PERC</td><td>4750</td><td>26000</td></tr>
<tr>
<td>50TH PERC</td><td>3750</td><td>5000</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Setting Up Core Lightning Node on an Ampere VM (OCI): A Comprehensive Guide]]></title><description><![CDATA[Core Lightning (previously c-lightning) is a lightweight, highly customizable, and standard-compliant implementation of the Lightning Network protocol. While the project offers extensive documentation, there is a scarcity of articles guiding beginner...]]></description><link>https://blog.mohsin.xyz/setting-up-core-lightning-node-on-an-ampere-vm</link><guid isPermaLink="true">https://blog.mohsin.xyz/setting-up-core-lightning-node-on-an-ampere-vm</guid><category><![CDATA[Bitcoin]]></category><category><![CDATA[lightning network]]></category><category><![CDATA[Oracle Cloud]]></category><dc:creator><![CDATA[Mohammed Mohsin]]></dc:creator><pubDate>Sat, 10 Feb 2024 16:08:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707501215040/4de46860-ce3a-431f-928a-11ac30d2e9e2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Core Lightning (previously c-lightning) is a lightweight, highly customizable, and standard-compliant implementation of the Lightning Network protocol. While the project offers extensive documentation, there is a scarcity of articles guiding beginners through the setup process. In this tutorial, I will outline the steps to successfully run a Core Lightning Node on an Ampere (Arm-based) VM with 8GB RAM and 4 OCPUs. This guide is tailored for developers aiming to set up a Lightning Node for development purposes.</p>
<p><strong>Note:</strong> You will have to be on the "Pay as you go" plan of Oracle Cloud Infrastructure (OCI) to be able to create an Ampere VM (with up to 24GB RAM and 4 OCPUs for free).</p>
<h3 id="heading-vm-setup"><strong>VM Setup</strong></h3>
<p>Spin up an Ampere VM with at least 4GB of RAM (I opted for 8GB) through OCI's instance creation UI. Make sure to select Ubuntu 20.04 or later as the base image.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3y9foarht2n0r89v2m34.png" alt="OCI Instance Creation" /></p>
<p><strong>Important:</strong> Make sure you download the private as well as the public keys (you won't be able to SSH into the VM later on if you don't have the private key).</p>
<p>The newly created instance should be up and running in a minute or two.</p>
<h3 id="heading-ssh-into-vm"><strong>SSH into VM</strong></h3>
<p>Open a new terminal on your local machine and change the permissions for the private key by running the following command:</p>
<pre><code class="lang-bash">chmod 400 /path/to/key/key_name.key
</code></pre>
<p>After changing the permissions, SSH into the VM by running the following command:</p>
<pre><code class="lang-bash">ssh username@&lt;vm-ip-address&gt; -i /path/to/key/key_name.key
</code></pre>
<p>You can find the username and public IP of your VM in the VM Instance details page on OCI.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cgasea37g82osyk3u09q.png" alt="OCI VM Instance Details" /></p>
<h3 id="heading-installing-core-lightning-approaches"><strong>Installing Core Lightning (Approaches)</strong></h3>
<p>There are three ways to install Core Lightning:</p>
<ol>
<li><p>Installing pre-compiled binaries</p>
</li>
<li><p>Using Docker</p>
</li>
<li><p>Building Binaries from source</p>
</li>
</ol>
<p>Since we are on an Arm-based VM, we won't be able to follow the first approach because Core Lightning only provides pre-compiled binaries for amd64 (x86-64) builds of Fedora and Ubuntu, which can be found <a target="_blank" href="https://github.com/ElementsProject/lightning/releases">here</a>.</p>
<p>We will be following the third approach, which involves compiling the binary from source.</p>
<h3 id="heading-installing-dependencies"><strong>Installing Dependencies</strong></h3>
<p>The following commands will download all the required dependencies:</p>
<pre><code class="lang-bash">sudo apt-get update
sudo apt-get install -y \
  autoconf automake build-essential git libtool libsqlite3-dev \
  python3 python3-pip net-tools zlib1g-dev libsodium-dev gettext
pip3 install --upgrade pip
pip3 install --user poetry
</code></pre>
<h3 id="heading-installing-bitcoin"><strong>Installing Bitcoin</strong></h3>
<p>We will not be running a fully synced Bitcoin Core node due to storage constraints (it requires around 400GB of storage). Instead, we will only install the Bitcoin Core binaries:</p>
<pre><code class="lang-bash">sudo apt-get install snapd
sudo snap install bitcoin-core
# Snap does some weird things with binary names; you'll
# want to add a link to them so everything works as expected
sudo ln -s /snap/bitcoin-core/current/bin/bitcoin{d,-cli} /usr/local/bin/
</code></pre>
<h3 id="heading-setting-up-core-lightning"><strong>Setting up Core Lightning</strong></h3>
<p>The process isn't really different from what's in the documentation, so the required commands are reproduced below.</p>
<pre><code class="lang-bash">git clone https://github.com/ElementsProject/lightning.git
cd lightning
git checkout v23.11.2
sudo apt-get install -y valgrind libpq-dev shellcheck cppcheck \
  libsecp256k1-dev jq lowdown
</code></pre>
<h3 id="heading-building-core-lightning"><strong>Building Core Lightning</strong></h3>
<pre><code class="lang-bash">pip3 install --upgrade pip
pip3 install mako
pip3 install -r plugins/clnrest/requirements.txt
pip3 install grpcio-tools
./configure
make
sudo make install
</code></pre>
<p>By this point, you should have the Lightning and Bitcoin binaries set up correctly. If you attempt to run the command <code>lightningd</code>, you'll probably get an error stating that there's no Bitcoin node running. To fix this, we have to connect to a Bitcoin node. We can either run our own node (which we already have installed) or use a third-party plugin. Since running our own node requires upwards of 400GB of storage, we will use a third-party plugin.</p>
<p>There are various plugins; for this tutorial, we will go with <a target="_blank" href="https://github.com/lightningd/plugins/tree/master/sauron">sauron</a>. You can check the rest of the plugins <a target="_blank" href="https://docs.corelightning.org/docs/bitcoin-core#connecting-to-bitcoin-core-remotely">here</a>.</p>
<h3 id="heading-setting-up-sauron"><strong>Setting Up Sauron</strong></h3>
<p>Core Lightning supports a plugin manager called <code>reckless</code>, which simplifies the installation and uninstallation of plugins with a single command. To install Sauron, execute the following command:</p>
<pre><code class="lang-bash">reckless install sauron
</code></pre>
<p>After running the above command, the sauron plugin will be downloaded to the <code>~/.lightning/reckless/sauron</code> directory.</p>
<blockquote>
<p>Reckless currently supports python plugins only</p>
</blockquote>
<h3 id="heading-running-core-lightning-node"><strong>Running Core Lightning Node</strong></h3>
<p>To run the node on the testnet, execute the following command:</p>
<pre><code class="lang-bash">lightningd --testnet --disable-plugin bcli --plugin ~/.lightning/reckless/sauron/sauron.py --sauron-api-endpoint https://blockstream.info/testnet/api/
</code></pre>
<p>Congratulations! You now have a successfully running Lightning node with a third-party plugin acting as a Bitcoin node. You can interact with your node using <code>lightning-cli</code>.</p>
<p>Open a new terminal, SSH into the VM again, and run the following command to confirm whether you are able to interact with your node or not:</p>
<pre><code class="lang-bash">lightning-cli --testnet getinfo
</code></pre>
<p>You can check the various commands provided by <code>lightning-cli</code> <a target="_blank" href="https://docs.corelightning.org/reference/get_list_methods_resource#:~:text=JSON%2DRPC%20API%20REFERENCE">here</a>.</p>
]]></content:encoded></item></channel></rss>