Fazal Majid's low-intensity blog

Sporadic pontification

Fiber for your home network

TL;DR: Fiber as the backbone of your home network is easier than you’d think

My apartment, like many, is elongated. The living room is at one end, and the bedrooms (one of which is my home office) are at the other. This makes it hard to cover both sides with a single WiFi access point, or to have uniform Internet access speeds on the wired network. I have a semi-pro Ubiquiti UniFi network of WiFi access points and switches, which makes this relatively easy, but only if you have good backhaul connectivity between the APs.

For the longest time I used G.hn powerline networking bridges made by Devolo. Unfortunately, powerline is at least as unreliable as wireless networking, and this made for frequent brown-outs that required unplugging the Devolo Magic 2 boxes to power-cycle them. I know Devolo doesn’t make the actual PLC controllers or their firmware (judging by the MAC addresses, they are probably Broadcom’s), but surely they could implement something as simple as a watchdog timer that reboots the PLC if no heartbeats are seen for a while?
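A watchdog along those lines is a few dozen lines of code. Here is a minimal sketch in Python, assuming (hypothetically, since the real firmware interface is not public) that the PLC exposes a last_heartbeat() timestamp and a reboot() hook:

    import time

    HEARTBEAT_TIMEOUT = 120  # seconds without a heartbeat before we assume the PLC hung
    POLL_INTERVAL = 10

    class PlcWatchdog:
        """Reboot the PLC when no heartbeat is seen within the timeout.

        last_heartbeat() and reboot() are hypothetical stand-ins for
        whatever the real firmware actually provides.
        """

        def __init__(self, plc):
            self.plc = plc

        def run(self):
            while True:
                if time.monotonic() - self.plc.last_heartbeat() > HEARTBEAT_TIMEOUT:
                    # power-cycle instead of waiting for a human to pull the plug
                    self.plc.reboot()
                time.sleep(POLL_INTERVAL)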

At any rate, at some point I flipped the bozo switch on the Devolos and decided it was long past time to install proper Ethernet across the two halves of the apartment, which is easier said than done in a rental. To make things more complex, my Internet connection, which used to be abysmal Vodafone/BT OpenReach VDSL terminating in my office, was upgraded to a fiber ISP, but the new service terminates by the front door, in no-man’s land halfway between the two.

This was around the time I was experimenting with 10G Ethernet in the core of my home network, using Ubiquiti’s relatively inexpensive (for the time) USW-Aggregation switches with 8 SFP+ 10G ports. Speaking of which, while you can buy 10GBase-T SFP+ modules that let you connect copper 10G Ethernet devices like my Mac Studio, their power draw exceeds the SFP+ specification and they are unreliable; stick with fiber, or use a switch with actual 10GBase-T ports (in my case a ZyXEL XGS1250-12, although it has an unfortunate tendency to overheat).

Contrary to what you may think, multimode fiber is much thinner (thus more discreet) and far more flexible than copper Ethernet cable (fiber above in the picture, copper below).

Fiber and copper cable compared

I conceived the idea of running a 30m pre-terminated fiber cable (made by a French company, as it turns out) along the crown molding in the ceiling, held in place with transparent plastic 3M Command hooks originally meant for Christmas lights, which can be removed without damaging the paintwork (this is a rental, remember).

Fiber cable on the ceiling

I had to run it along a snaking route (shown in red on the floor plan below) to stay along the crown molding, but even with my tyro DIY skills it only took a couple of hours to set up, and it is barely visible unless you know to look for it. While I don’t actually have any 10G devices in my living room yet, I do have a WiFi 7 access point there, and it won’t be bottlenecked by the Ethernet network.

Floor plan

I still have a Devolo link between my office and the AP in my bedroom, but that covers a much shorter distance and is much less unreliable.

PSA: LinkedIn single-sign-on dangers

I have a work-issued computer that I keep rigorously separate from my personal stuff. It belongs to my employer and thus I do not keep personal files on it, or access personal email and certainly don’t save personal passwords on it. I even have it on a separate VLAN on my home network.

This is why I was horrified when I went to the LinkedIn website on my work computer (to look at a colleague’s posting) and it automatically started a single sign-on with my company’s GMail (my work address is of course linked to my LinkedIn profile).

This means a company with Google Apps can potentially access your LinkedIn account without your permission. Considering LinkedIn’s past record of egregious security failures[1], it shouldn’t be too surprising, but still…

I couldn’t find any setting to disable SSO, and it seems the only way to prevent this is to turn on two-factor authentication (where the only options are grossly insecure SMS text messages or equally phishable TOTP authenticator app codes, not actually secure WebAuthn/FIDO U2F USB keys).


  [1] A colleague had built a GPU mining rig for fun and profit, and ran the LinkedIn hashed password dump through it using hashcat. He found Donald Trump’s was a variation on “You’re fired!”
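For the curious: the leaked LinkedIn hashes were unsalted SHA-1, which is why cracking them is so cheap. A toy Python equivalent of what hashcat does (at a minute fraction of its GPU speed; the file names are placeholders of my invention):

    import hashlib

    # Dictionary attack against unsalted SHA-1 hashes, i.e. what
    # hashcat mode 100 does vastly faster on a GPU.
    hashes = {line.strip().lower() for line in open("linkedin.hashes")}

    with open("wordlist.txt", encoding="utf-8", errors="ignore") as wordlist:
        for word in wordlist:
            word = word.rstrip("\n")
            if hashlib.sha1(word.encode()).hexdigest() in hashes:
                print(word)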

Funding the vetting of the Software Supply-Chain

TL;DR: A way out of our software supply-chain security mess

As memorably illustrated by XKCD, the way most software is built today is by bolting together reusable software packages (dependencies) with a thin layer of app-specific integration code to glue it all together. Others have described the mess we are in, and the technical issues, more eloquently than I can.

XKCD

Crises like the Log4j fiasco or the SolarWinds debacle are forcing the community to wake up to something security experts have been warning about for decades: this culture of promiscuous and undiscriminating code reuse is unsustainable. On the other hand, for most software developers without the resources of a Google or an Apple behind them, being able to leverage third parties for 80% of their code is too big an advantage to abandon.

This is fundamentally an economic problem:

  • To secure a software project to commercial standards (i.e. not the standards required for software that operates a nuclear power plant or the NSA’s classified systems, or that requires validation by formal methods like TLA+), some form of vetting and code reviews of each software dependency (and its own dependencies, and the transitive closure thereof) needs to happen.
  • Those code reviews are necessary, difficult, boring, labor-intensive, require expertise and somebody needs to pay for that hard work.
  • We cannot rely entirely on charitable contributions like Google’s Project Zero or volunteer efforts.
  • Each version of a dependency needs to be reviewed. Just because version 11 of foo is secure doesn’t mean a bug or backdoor wasn’t introduced in version 12. On the other hand, reviewing changes takes less effort than the initial review.
  • It makes no sense for every project that consumes a dependency to conduct its own duplicative independent code review.
  • Securing software is a public good, but there is a free-rider problem.
  • Because security is involved, there will be bad actors trying to actively subvert the system, and any solution needs to be robust to this.
  • This is too important to allow a private company to monopolize.
  • It is not just the Software Bill of Materials that needs to be vetted, but also the process. SolarWinds was probably breached because state-sponsored hackers compromised its Continuous Integration infrastructure, and there is Ken Thompson’s classic paper “Reflections on Trusting Trust” on the risks of compromised toolchains (original ACM article as a PDF).
  • Trust depends on the consumer and the context. I may trust Google on security, but I certainly don’t on privacy.

I believe the solution will come out of insurance, because that is the way modern societies handle diffuse risks. Cybersecurity insurance suffers from the same adverse-selection risk that health insurance does, which is why premiums are rising and coverage shrinking.

If insurers require companies to provide evidence that their software is reasonably secure, that creates a market-based mechanism to fund the vetting. This is how product safety is handled in the real world, with independent organizations like Underwriters Laboratories or the German TÜVs emerging to provide testing services.

Governments can ditch their current hand-wavy and unfocused efforts and push for the emergence of these solutions, notably through long-overdue legislation on software liability, and at a minimum use their purchasing power to make such vetting table stakes for government contracts (without penalizing open-source solutions, of course).

What we need is, at a minimum:

  • Standards that will allow organizations like UL or individuals like Tavis Ormandy to make attestations about specific versions of dependencies (a sketch of what one might look like follows this list).
  • These attestations need to have licensing terms associated with them, so the hard work is compensated. Possibly something like copyright or Creative Commons so open-source projects can use them for free but commercial enterprises have to pay.
  • Providers of trust metrics to assess review providers. Ideally this would be integrated with SBOM standards like CycloneDX, SPDX or SWID.
  • A marketplace that allows consumers of dependencies to request audits of a version that isn’t already covered.
  • A collusion-resistant way to ensure there are multiple independent reviews for critical components.
  • Automated tools to perform code reviews at lower cost, possibly using machine-learning heuristics, even if the general problem can be proven to be computationally intractable.
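To make the first few bullets concrete, here is a sketch in Python of what a single attestation record might contain. The field names are purely illustrative inventions of mine, not taken from any existing standard, though they would map naturally onto CycloneDX or SPDX package identifiers:

    from dataclasses import dataclass

    @dataclass
    class Attestation:
        """One reviewer's signed claim about one version of one dependency.

        All fields are illustrative; a real standard would pin down
        identifier formats, verdict vocabularies and signature schemes.
        """
        package: str        # e.g. a CycloneDX/SPDX-style identifier for log4j-core
        version: str        # the exact version reviewed; a new version needs a new review
        reviewer: str       # UL, a TÜV, an individual researcher...
        scope: str          # full review, or a diff review against a prior vetted version
        verdict: str        # "no issues found", "issues reported upstream", etc.
        license_terms: str  # e.g. free for open-source use, paid for commercial use
        signature: bytes    # detached signature so consumers can verify provenance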

The fetish for uptime

At one of my previous jobs, the engineers on my team had an informal competition as to who could rack up the longest uptime on their workstation (they all had Sun Solaris or Linux, of course). When the company moved to a new office, one crafty engineer managed to beat all the others by putting his Sun into the seldom-used hibernation mode to preserve his uptime when everyone else was forced to reboot.

I posit that uptime is actually a bad thing. All software has bugs, and a regular maintenance schedule to apply patches, at the very least once a month, should be part of the plan and designed into the architecture. By the same token, an uptime greater than 31 days is a “code smell” for infrastructure.
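Checking for this smell is trivial to automate. A sketch for Linux hosts, reading /proc/uptime:

    # Flag a host whose uptime exceeds the monthly patch cycle.
    MAX_UPTIME_DAYS = 31

    with open("/proc/uptime") as f:
        uptime_days = float(f.read().split()[0]) / 86400  # first field is seconds since boot

    if uptime_days > MAX_UPTIME_DAYS:
        print(f"WARNING: up {uptime_days:.0f} days, overdue for patching and a reboot")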

PSA: iCloud Private Relay can make Safari on your iPad unusable

After upgrading my iPad to iPadOS 15.5, Safari became unusable. It would take forever to load the Reddit login page, and many others like Dilbert.com. Opening the same in Firefox Focus had no issues.

Going into Settings / Safari / Privacy & Security / Hide IP Address and disabling it fixed this for me. Alternatively, you can disable it only for specific networks (Settings / Wi-Fi / ⓘ / Limit IP Address Tracking / Off).

It seems Apple turned iCloud Private Relay on by default for Safari in iPadOS 15.5, and presumably iOS 15.5 as well. Macs are probably next.

I can only speculate why turning it off fixes the breakage, but:

  • The feature routes your connections through Akamai, then CloudFlare, and for whatever reason CloudFlare doesn’t seem to like my ISP; I often encounter their “prove you are human” challenges.
  • It may also be because Apple overrides your DNS settings for this feature to work, and if your network is locked down with something like Pi-Hole to block trackers, those DNS requests may not be getting through. I don’t want IoT devices or the like to bypass my DNS server, which forwards over WireGuard to my cloud VPN server so that neither my ISP, nor CloudFlare, nor the UK Police State can snoop on my DNS requests (a setup I believe is more secure and private than Apple’s). I haven’t blocked DNS-over-HTTPS servers yet as this guy does, but it’s on my list. All of this might be interfering with iCloud Private Relay.
  • It may also be sabotage, as Rui Carmo points out, or as John Oliver memorably calls it, “Cable Company F∗∗∗ery”.
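If you want to test the DNS theory on your own network: Apple documents that Private Relay depends on the hostnames mask.icloud.com and mask-h2.icloud.com, and that networks which refuse to resolve them disable the feature. A quick check from a machine behind your resolver:

    import socket

    # Apple's documented iCloud Private Relay hostnames. If your resolver
    # (a locked-down Pi-Hole, say) blocks them or returns a sinkhole
    # address, devices on the network cannot establish the relay.
    for host in ("mask.icloud.com", "mask-h2.icloud.com"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as exc:
            print(host, "blocked or unresolvable:", exc)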