
Comcast 2gig/200M Service Upgrade Issues

I recently upgraded my home Comcast internet service from 1gig/40meg to their newest 2gig/200meg service. This is a significant upgrade on Comcast's end, as it requires upgrades to field-installed upstream amps to allow a greater amount of upstream bandwidth. I was surprised to see that the hardware work was already completed in my area, but happy to give it a try.

One oddity of the service is that Comcast requires you to run their cable modem if you want the 2gig/200meg service combined with unlimited traffic. They specifically mention it during sign-up, and specifically mention that if you switch to your own modem the service will downgrade back to the 1gig tier with a 1.2TB data cap.

A day after ordering the service a new XB8 modem arrived. I am running a Ubiquiti UDM Pro firewall, which has 10G SFP+ interfaces for both LAN and WAN. In my case the LAN side is connected to a Ubiquiti 10G core switch that has fiber drops to other wiring closets in my house, as well as 10G connections to my office machines and 40G connections to the server racks.

The XB8 modem has a single copper 2.5G-capable port (marked with the red bar), and most new 10GBase-T SFP+ modules will down-convert to 2.5G wire speeds. I connected the 2.5G port from the cable modem to my UDMP via an SFP+ module, and the Comcast side indicated a 2500 Mbit connection. Interestingly, on the UDMP software side the interface appears as a 10G interface because the SFI interface from the device to the SFP+ module is still running at 10G. I'd be curious to dig into that a bit more to see what that module does for speed conversion and throttling. With the connection up and running, and with the cable modem in the default NON-BRIDGE mode, my firewall got a 10.0.0.4 address as I would expect. Some quick speed tests from my office machine gave me acceptable >2 gig download speeds and 100-200 Mbit upload speeds. Upload speeds were more volatile, in part because the speed test endpoints were saturated, but still overall acceptable results.

Given my use case, I then switched the XB8 over to BRIDGE mode. It is a simple toggle in the Comcast interface (address 10.0.0.1), after which the modem restarts. After the restart the UDMP got an external Comcast address (24.x.x.x) over DHCP, and internet connectivity seemed great.

A few hours later I noticed the UDMP was saying the ‘internet’ connection was not working, and sure enough if I did a ping of 1.1.1.1 it would drop 90% of the packets. You could still open some web pages due to the amazing resilience of TCP but there was clearly something causing lots of packet drops.

My first thought was that the 2.5G interface might be having problems. The 10GBase-T SFP+ module I was using was quite old, so perhaps its 2.5G support was not perfect. As soon as I unplugged the connection from the SFP+ and put it into the UDM Pro's 1G-only copper port, internet connectivity was restored and looked perfect. Ok – time to order up a new 10GBase-T SFP+.

Five minutes later, the same internet drop happened again. I unplugged the drop and plugged it back in, and that instantly fixed it. Hmm. That is odd… Is it exactly 5 minutes? Stopwatch out – yep, exactly 5 minutes from plugging back in, the problem happens. Resetting the interface fixes it for 5 more minutes, so my first thought was something with the UDMP interface. I rerouted the UDMP interface through the core switch on a dedicated VLAN so the newer core switch would be the connection point for the Comcast drop, but it made no difference. If I soft down the interface and bring it back up on the Comcast side it doesn't fix anything, but if I do it on the UDMP side it does. Interesting!
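For reference, the "soft down / up" on the UDMP side was roughly the following, sketched as commands from an SSH session on the UDM Pro (eth9 is the SFP+ WAN interface as noted later in the post, so treat the interface name as an assumption):

# Bounce the WAN interface from the UDM Pro shell; this buys another ~5 minutes
# of working connectivity before the drops return.
ip link set eth9 down
sleep 2
ip link set eth9 up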

I did notice the problem didn't start until I switched to BRIDGED mode, so I tried switching back out of BRIDGED mode. That fixed it: back to 2.5G and everything working great, with no 5 minute dropout. Could BRIDGE mode be the problem?

At this point, the network engineer in me said 'get a pcap going'. Since I'm running through the core switch, I configured another SFP+ port as a mirror port and connected it to an interface on a laptop running Wireshark.
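The capture itself is nothing fancy; something along these lines on the laptop hanging off the mirror port works (the interface name and capture file name here are placeholders, not necessarily the exact ones I used):

# Capture everything arriving on the mirror-fed NIC and save it for Wireshark.
# -s 0 keeps full packets; substitute the real interface name for eth0.
sudo tcpdump -i eth0 -s 0 -w comcast-bridged.pcap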

I switched the cable modem back to BRIDGED mode and started capturing from power on.

My first question was: when does the UDMP send ARP requests for the default gateway? Filtering on just DHCP I can see the UDMP getting an IP address at timestamp 64, and just after that the UDMP sends an ARP request for the gateway 24.20.70.1. This looks just as you would expect. One thing that does stick out is the gateway's MAC address: IETF-VRRP-VRID_32. That is an indication that the other side is using VRRP (Virtual Router Redundancy Protocol). Not surprising, but a hint at the unusual behavior.
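For anyone following along, the same filtering can be done from the command line with tshark (part of Wireshark); the gateway address is the one from this capture, the file name matches the placeholder above, and recent versions use dhcp as the filter where older ones used bootp:

# The DHCP exchange where the UDMP gets its address:
tshark -r comcast-bridged.pcap -Y "dhcp"
# ARP requests for the default gateway:
tshark -r comcast-bridged.pcap -Y "arp.dst.proto_ipv4 == 24.20.70.1"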

The ARP at 64 seconds is the last before the manual one at 528.

With the ARP at 64 seconds in hand, I start some ICMP traffic. Almost exactly 300 seconds later (timestamp 364s), the ICMPs start to fail. Some other traffic is still working, but this specific ICMP path is failing. Clearly something has timed out, and it seems like perhaps my MAC address has timed out somewhere in the path from one of the VRRP member routers.
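The ICMP traffic can be as simple as a long-running ping; with timestamps enabled (a sketch, assuming iputils ping on Linux) it is easy to line the failures up against the capture:

# -D prefixes each reply with a Unix timestamp, which makes the ~300 second
# window easy to spot once replies stop coming back.
ping -D 1.1.1.1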

ICMP working as expected.


ICMP Fails starting at timestamp 364.

I SSH into the UDMP and check the ARP table, which has the entry for 24.20.70.1. I use the ‘arp -d’ command to remove that entry which causes the UDMP to send a new ARP request for 24.20.70.1. I see that traffic, and suddenly the ICMPs start working again!
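On the UDM Pro that sequence looks roughly like this (the gateway address is from this capture, and eth9 is the SFP+ WAN interface referenced in the fix below):

# List the current ARP entries on the WAN interface, then delete the gateway
# entry; the next outbound packet forces a fresh ARP request.
arp -i eth9
arp -d 24.20.70.1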

The ARP at timestamp 528 fixes the ICMP return path.

I took a look at the statistics from this capture session, and you can see the outbound traffic going from the UDMP to the VRRP MAC address, but the return traffic is split between two Juniper routers (MACs ending in 0xef and 0x6c). These two return paths are not equal in usage, which is not a surprise since the hashing used to pick the return path isn't going to guarantee an even split across such a small set of connections.

I dug a bit more, and sure enough the ICMPs I was sending were all coming back from one of the two Junipers (the 0x6c one). It seems that most of the traffic from that particular Juniper gets dropped if the ARP request does not happen (and presumably reset a MAC table somewhere along the way) every 5 minutes.

The statistics from this small capture session. You can see both outbound and inbound paths.

Looking at the traffic from the two source MACs on the return path confirms the traffic drops from that second Juniper after the suspected MAC timeout.

This return path has normal packet distribution the entire time.
You can see the significant falloff in packets from this return path during the interval from 364-528.

The FIX:

ARP timeouts are a bit complex. While there are OS-level settings (in this case you can see them in /proc/sys/net/ for the interface of interest), things like base_reachable_time_ms don't provide everything you need. An entry in the ARP cache might not be refreshed via ARP if an upper-level protocol updates the status of that entry, which can happen when a packet is received successfully based on the use of that entry. As a result, setting those OS timers to something like 30s might not actually result in a new ARP every 30s, especially on an active link. On the UDM I was able to see >20 minute intervals between ARPs while traffic was flowing.
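To see those knobs (and why they aren't sufficient on their own), you can poke at the per-interface neighbor settings and watch the entry's state; a quick sketch, again assuming eth9 as the WAN interface:

# Kernel neighbor (ARP) timers for the WAN interface (ms and seconds respectively):
cat /proc/sys/net/ipv4/neigh/eth9/base_reachable_time_ms
cat /proc/sys/net/ipv4/neigh/eth9/gc_stale_time
# Watch the gateway entry cycle through REACHABLE/STALE/DELAY without a new ARP
# request ever going out, as long as traffic keeps confirming the entry:
ip -4 neigh show dev eth9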

Since I need to guarantee a new ARP every 5 minutes, I created a CRON job that runs every 4 minutes and deletes the ARP entry for the gateway on the internet interface. In my case (the UDM Pro) the SFP+ WAN interface is eth9. I added the following command to the crontab:

sudo arp -d `arp -i eth9 | awk 'BEGIN { FS="[ ]" } ; NR==1 {print $1 }'`

That command will delete the first ARP entry in the WAN interface's table, which in this case is the default gateway (because the IP is .1). This is a serious hack, and not something you would want to rely on long term.
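For completeness, the crontab entry itself would look something like the line below (a sketch; assuming it lives in root's crontab via crontab -e, in which case the sudo prefix can be dropped):

# Run every 4 minutes: delete the first ARP entry on the WAN interface (the
# gateway), forcing a fresh ARP well inside the 5 minute failure window.
*/4 * * * * arp -d `arp -i eth9 | awk 'BEGIN { FS="[ ]" } ; NR==1 {print $1 }'`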

It does work, and a week later everything is still working great with the ARP refresh. It is a surprising issue, perhaps obscured by the fact that not many people run in bridged mode, and because Windows tends to ARP much more often. It is possible that other firewalls have shorter ARP timeouts that mitigate this problem. There isn't a magic ARP timeout that is 'correct', and actual implementations vary a lot.

I talked about this issue at length with my friend Eric Rosenberry, who happens to be Director of Network Architecture at Ziply. He suggested it could be some layer 2 technology like EVPN sitting between the VRRP routers and my device. Fortunately he has a few contacts who know a few other contacts that may be able to pass along this interesting problem to some of the client network engineers at Comcast. I'll update if I hear back.

I did find after a bit of searching that other people have seen this issue with the UDM Pro and Comcast:

https://community.ui.com/questions/UDM-SE-and-Xfinity-XB7-not-working-well-together/a26248d8-aa65-4a0a-9315-3a9ee2f3f751?page=1

https://community.ui.com/questions/ARP-Timeout-on-UDM-Pro-Potential-fix-for-packet-loss-using-Xfinity-Gageway/1744ceb1-279e-4c88-86a9-133f9c6e792c

An old calculator

When I was in middle school my parents got me a Sharp PC-1401 (a slightly earlier version of this 1403) for Christmas. I had a Sharp EL-506 calculator, and the PC-1401/3 was an extension of that style, but included a BASIC language interpreter and a full QWERTY keyboard. It was the precursor of the laptop, and I carried it around every day. I spent quite a bit of time writing BASIC routines to calculate astronomical ephemerides of various kinds. As you can imagine, this behavior was quite the magnet for attention from the ladies.

It has a surprisingly easy-to-use interface, and it has terrific battery life. I used it for a long time before eventually moving on to the classic HP48. Of course the watch I wear now has 10,000 times the processing power, but for those times, it was something.

Nerd on.

Upgrading to 2G/200M home internet.

Interesting results with the newest Comcast offering at home: 2 gig download and 200 meg upload. The upload seems a little jumpy… of course this is during peak times.

The DOCSIS 3.1 modem has a 2.5G ethernet port, but fortunately one of my 10GBase-T SFP+ modules supports 2.5G, which works in the UDM Pro. From there it's 10G to everything else.

Power outage and battery system check

We had a power outage last week that gave me a good chance to test out my battery system. Like any successful test, it uncovered a few bugs. The battery system is based around the Enphase Smartswitch combined with a bank of 4x 10.2KWh battery packs that use LiFePO4 cells, along with 53 380W LG solar panels, all using IQ7 microinverters.

The concept is that the batteries and the solar microinverters can create a standalone power grid when main power is unavailable. The house can then run on battery power, with the solar providing additional power and charging.

On the positive side, when the power dropped out, I didn’t notice. On the negative side, I didn’t notice! Normally I would have seen an alert on my phone, but I added a small distribution switch for some experiments in the upper garage a while back and didn’t notice that I put it on the non-gen-bat circuit. When power dropped off that switch went down, and the Enphase system couldn’t send out a note to say we were off grid power.

It is an easy fix to change that to a PoE-powered switch so it is not only on the gen power circuits, but also behind the APC Symmetra UPS. I have also added some detection circuitry that will do some home automation tasks in that case, including an announcement over the PA.

The biggest downside of not knowing the power was out was that I continued to use power with abandon! I had a few extra larger servers powered up, plus laundry, dishwasher, and a bunch of other things kicking. I was drawing about 10KW at the time, which is a pretty typical draw for me during the day. 40KWh won’t last long at 10KW.

Once I figured it out, I shut down some servers I didn't need to reduce usage. The batteries ended up lasting through to the next morning before I fired up the generator.

I am going to automate some system shutdown for things that are non-essential, especially my 40+ drive backup SAN and the math array. I also have a cell modem connection for sending notifications in case both the power and the internet are offline.

Fun stuff for sure!

Magnetic Fields and the variation

Here is a helpful map if you find yourself walking with a compass and needing to find true north. The lines represent the degree difference between true north and magnetic north.

Here in the PNW the difference is about 15 degrees, while in the middle of the US the difference goes to 0.

Perhaps the most amazing observation is the latitude of the southern magnetic pole, which is close to 65 degrees. An eventual geomagnetic reversal will be something to see, though it might take a thousand years to complete.

As long as the poles stay out of the right half plane we will be ok! Happy New Year!