Using AT&T GigaPower PACE 5268AC With Your Own Gateway

Here is my experience setting up our UniFi Security Gateway to work in bridge mode with the PACE 5268AC for use with AT&T’s GigaPower fiber service.

What, No Bridge Mode?

The first thing to know is that there is no such thing as bridge mode with these routers. The problem with a true bridge is that even if you put a gateway behind the PACE, you still need the ability to plug DVRs (or the wireless bridges used by wireless DVRs) into the modem and communicate with AT&T’s network to retrieve video, guide data, etc. They can’t just pass all traffic through to another device.

In a traditional setup where you just use AT&T’s router as the gateway for everything, it creates a simple NAT network (on 192.168.1.x) that your wired devices and DVRs share. But if you want to manage your own network behind the router — or in my case, disable the crappy PACE WiFi and use my own access points — their solution is to provide a pseudo-bridge mode called “DMZplus” which gives you something reasonably close, while still allowing the other ports on your router to continue to NAT out to the internet like normal. It works by leaving all of the existing stuff in place (the 192.168.1.x network, the NAT, etc.), but instead of firewalling unknown incoming connections, it passes any traffic that is not already associated with an existing session straight to the DMZplus host. This includes letting DHCP through, giving the public IP directly to the DMZplus host rather than forcing you to double-NAT.

Setting It Up

1. Change the PACE Network Range

To avoid conflicts or weird things leaking through, I went ahead and changed the network on the PACE router, since both it and the USG use the 192.168.1.x network by default. Your mileage may vary, but if nothing else it makes it easier to diagnose issues when the networks aren’t similarly numbered.

Navigate to Settings -> LAN -> DHCP on the PACE router and change the DHCP range radio button from the default 192.168.1.x range to one of the other private ranges.

If the PACE router doesn’t restart itself after you change this setting, restart it manually just to make sure it will hand out the new range when you hook things up.

2. Connect Your Gateway

Next, connect the WAN port on your gateway to an open port on the PACE router. This will cause it to get an IP address over DHCP and show up on the PACE side.

Once you do so, it should be visible in Settings -> LAN -> Status in the “Devices” section:


(The name will probably match whatever your router advertises itself as in its DHCP request.)

3. Make Your Gateway The DMZplus Host

Now, navigate to Settings -> Firewall -> Applications, Pinholes and DMZ. Look for your gateway in the “Select a computer” section and click on it. Once you do, it should say “You have chosen <gateway name>”.


Now that your gateway is selected, scroll down to the “Edit firewall settings for this computer” section and click the “Allow all applications (DMZplus mode)” radio button. Then click the “Save” button at the bottom.

4. A Warning About Advanced Configuration

Originally I had unchecked everything under Settings -> Firewall -> Advanced Configuration assuming I would leave it up to the PACE router to handle security.

Because of this, I spent a number of days attempting to diagnose a weird bug where certain hosts would have massive amounts of packet loss and the internet was nearly unusable. It turns out that if you uncheck “Miscellaneous” under “Attack Detection”, any device that attempts to map a port using UPnP causes the PACE router to create a faulty mapping that passes un-NATted traffic directly through. This wreaks havoc with some IoT devices, consoles, and other things that still use UPnP for port mapping.

In hindsight, it’s probably good to leave most of this stuff on anyway as an extra layer of protection, especially if you have other devices like DVRs or wireless DVR bridges plugged directly into the router.

5. Configure Your Gateway

I’ve been going through my settings on my USG to see if there’s anything in particular I have to configure to make it work well with the PACE router, but I’m not finding anything beyond my own personal preferences as far as firewall, network, etc.

At one point I know I had configured it to always allow DHCP ports 67 and 68 through because I was seeing an issue with holding onto the DHCP lease, but it appears that’s not actually enabled and I’m not seeing any ill effects. ¯\_(ツ)_/¯

That’s It!

There really isn’t too much to it, just a few pitfalls. Seriously, though, don’t uncheck “Miscellaneous.” Don’t do it!


Monkeying Around

Cruise Monkey Want Mobile

It’s almost time for the 4th annual JoCo Cruise Crazy cruise, and once again, I’ve foolishly decided to spend WAY too much of my free time on putting out an app to be used on the ship.

What’s Different This Year?

Almost everything. I started out refactoring last year’s CruiseMonkey codebase, but it was a bit creaky. It’s definitely interesting to see how far HTML5 “native” app development and PhoneGap/Cordova development have come in just a year.

After playing a little bit with AngularJS for a work project I was really impressed and wanted to refactor to Angular for this year’s CruiseMonkey. In the process of doing so, I ran into Ionic, an HTML5 framework built specifically for making mobile UIs, and reworked the frontend using that.

While there was a direct line from there to here, in the end the codebase looks nothing like CruiseMonkey 3.

Reimagining the Backend

One of the biggest problems with last year’s CruiseMonkey was the spotty wireless on the ship. Since CruiseMonkey 3 was built as a client/server app, it basically became a read-only app whenever the network went wonky. It could cache some data when the network died, but it really didn’t handle changing data in any way. After doing some research into options, I came across CouchDB, a JavaScript-friendly NoSQL database, and its cousin, PouchDB. PouchDB is an implementation of CouchDB that runs in the browser, and is replication-compatible with it.

That means that I can just treat PouchDB as a local database as if my app was a standalone mobile app, and all I need to synchronize events with other CruiseMonkey users is to replicate back to the database whenever the network is working. The proof will be in whether it works once we’re on the ship (natch) but hopefully it will be stable.
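In CruiseMonkey itself this sync happens client-side through PouchDB’s JavaScript API, but the same replicate-when-connected idea can be sketched in a language-neutral way: CouchDB exposes replication over plain HTTP via its `/_replicate` endpoint. This is a minimal sketch only; the server URL and database names are hypothetical placeholders, and the request is built but not sent.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ReplicateSketch {
    // Build (but don't send) a CouchDB /_replicate request. CouchDB accepts a
    // JSON body naming a source and target database to replicate between.
    static HttpRequest buildReplicate(String server, String source, String target) {
        String body = String.format(
            "{\"source\":\"%s\",\"target\":\"%s\"}", source, target);
        return HttpRequest.newBuilder()
            .uri(URI.create(server + "/_replicate"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    public static void main(String[] args) {
        // Push local changes up whenever the network is actually working:
        HttpRequest push = buildReplicate(
            "http://localhost:5984", "cruisemonkey", "https://example.com/cruisemonkey");
        System.out.println(push.method() + " " + push.uri());
    }
}
```

The nice property of this model is that replication is idempotent and resumable, so it doesn’t matter how often the ship’s network drops mid-sync.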

Twit-Arr Integration

Of course, I had a grand vision of writing a complete twit-arr client this year as a part of CruiseMonkey, but time got the best of me. Kvort the Duck already undertook writing an entirely new twit-arr server and web client and it’s turned out awesome in just a short time. Hopefully this means next year we have a good base to build on and integrate more closely. This year I was able to at least integrate giving you a notice if you have new Seamail (private messages on the Twit-Arr server), as well as a fun browser for viewing all of the pictures people post to twit-arr. Next year I want to be able to read and post messages, pictures, and Seamail right from the app.

One Last Look

4.0.0 Waiting For Review

Anyway, I’ve submitted version 4.0.0 of CruiseMonkey to the iTunes App Store. Hopefully things go smoothly. It looks like the average review time is about 6 days at the moment, which hopefully gives a chance for one more update before the cruise for any bugs people might find. I’ll keep doing beta testing all the way up to the cruise, most likely, but I’m looking forward to a couple weeks of not coding for 4 hours every evening after coming home from my day job coding. 😀


When Idealism Meets The Real World: Google Reader Was The Last Straw

There was a time when Google was the shining beacon of geekdom; when tales of their crazy interview process, fancy chefs, and 20% time were spoken of in reverent whispers.

I’m realizing now that I held onto that fantasy for a lot longer than was realistic.

While I love my freakishly good job at OpenNMS (work from home lots, open-source software, good people), Google is the one place I’d always thought I’d at least entertain if the right thing came along.

Last week, I got an email from a Google recruiter (I get one every year or so, just checking in). I told him the usual, that I wasn’t looking to move, but am always interested to hear about opportunities from Google. He responded back a few days ago, asking when we could talk.

Then they announced Google Reader was going away.

When I realized I was losing something that I spend at least 60% of my web browsing time in, I finally consciously reevaluated my feelings on Google. And then, I responded to the recruiter:

Hey, sorry it’s taken a bit long to get back to you, been a busy week.

I have to say, this week’s news about Google Reader getting killed has solidified a growing wariness that’s been building up in me for the last few years.

In the past, Google was the one company I’d consider dropping everything for if the right opportunity came along. Now it seems like all the things you did that were great — that pulled in the alpha geeks that everyone followed — are going by the wayside. Reader, Wave, Code Search, these are all things that I used regularly which went away.

They weren’t all instant successes (I’m looking at you, Wave), but Google had great technology that they have often failed to capitalize on, instead moving on to the next thing.

I truly am happy where I’m at, and I honestly don’t know that Google is the kind of company I would want to work for anymore.

– Benjamin Reed


Schrödinger’s Bugs

Working on an open-source project teaches you a few things about dealing with software developers and reporting bugs. I’ve been in the open-source world for a long time, and I remember that when I first started out as a user of software, I was glad to even have access to these tools at all, and I was reluctant to “bother” the developers with issues unless I was sure it wasn’t just me.

The problem is, issues are a bit like Schrödinger’s cat: they don’t exist until the developer knows about them.

Since I’ve become a developer of open-source software and seen things from the other side, I have one request: err on the side of opening an issue. There’s nothing I love more than having an issue opened, and being able to fix it, and tell the user their problem is solved. It’s that kind of feedback loop that is one of the best parts of developing software without a marketing and sales department sitting between you and your users.

So without further ado, I’d like to offer a few tips in this vein. Note that when I say “issue,” it could be anything: a showstopper bug, an annoyance, or just a new feature you wish the software had.

Always Open an Issue
Sure, sometimes it’s a pain to figure out where the issue reporter link is, and create an account, and validate your email, and figure out what component it goes into… but don’t worry about it. If you get it in the wrong place, they’ll know where it belongs and (hopefully) triage it. But if you don’t open that issue, they may never know it’s a problem.
Don’t Worry If It’s a Duplicate
Of course, you should always try searching for your issue first; maybe someone else has already reported it. If so, add a comment. But if you can’t find it, don’t worry that it might be a duplicate; go ahead and open that issue. As a developer, I’d rather close a million duplicates than never know about the issue in the first place.
Don’t Just Describe the Issue, Describe What You’re Trying to Do
It may be that the issue you’re trying to solve is meant to work a different way, or is part of another feature you haven’t used yet, or has a workaround. Make sure when you describe the problem you’re having, also describe what you want to accomplish.
A Closed Issue is Not an Ultimatum
This is a corollary to “describe what you’re trying to do.” Just because an issue is closed does not mean it is closed for discussion. Sometimes the developer doesn’t realize what you’re trying to actually do, or the original issue was described in a way that doesn’t make it clear that the real issue is elsewhere.

For example, OpenNMS supports creating a “path outage,” which describes how particular nodes are related. There was an issue opened that said if you created a path outage, it would be wiped out when using Provisiond. It was closed, saying that you create path outage relationships with the “parent-id” tag in the provisioning group file. What the issue did not say is that these manually-created path outages were created through the UI. So the real issue is that the web UI path outage editor is not Provisiond-aware, and the issue should be reopened.

It’s Better to Be Too Verbose than Not Enough
Configuration files, logs, output from `dmesg` or similar: anything you can add makes it easier to diagnose the problem. It’s a lot harder to fix a problem with a one-line error message than with 200 lines of context telling you what the software was doing just before the error. The more information you give, the more likely it is the developer will be able to figure out what was going on when the issue happened.

Does this mean your issue will be resolved quickly? Not necessarily. Everyone has their own set of priorities, and their own time set aside for working on issues. I can say that a good issue report, with a lot of detail and a good description of what you’re trying to accomplish, will get a lot more traction than a 1-line report saying “it doesn’t work,” and it will get a heck of a lot more traction than no report at all. To paraphrase Wayne Gretzky, you miss fixing 100% of the issues you never report. 😉


Be Careful What You Match For, You Might Not Get It

So I ran into a really interesting quirk of Java regular expression matching while working on an issue for a customer.

OpenNMS has the ability to listen for syslog messages, and turn them into OpenNMS events. To configure it, you specify a mapping of substring or regular expressions to UEIs (OpenNMS’s internal event identifiers).

The customer saw a huge drop in performance from 1.8.0 to 1.8.1. Basically the only change to the syslog daemon was a change to use Matcher.find() instead of Matcher.matches(). The problem was that they were making regular expressions like this:

foo0: .*load test (\\S+) on ((pts\\/\\d+)|(tty\\d+))

…which weren’t matching. So they changed it to put .* at the front, so matches() would get it:

.*foo0: .*load test (\\S+) on ((pts\\/\\d+)|(tty\\d+))

Upon upgrading to 1.8.1, they saw orders of magnitude slowdown. The reason is that when you haven’t specified an anchor, find has to figure out the “right” starting point for the match. In doing so, it spins a LOT, compared to matches() and its implicit anchors. It’s very expensive to scan all the way through the string, attempting to re-apply the regex, if it turns out there is no match. We figured this out this morning after I put together some benchmarks to show the differences:

regex = \s(19|20)\d\d([-/.])(0[1-9]|1[012])\2(0[1-9]|[12][0-9]|3[01])(\s+)(\S+)(\s)(\S.+)
input = <6>main: 2010-08-19 localhost foo23: load test 23 on tty1

matches = false: total time: 167, number per second: 5988023.9521
find = true: total time: 1264, number per second: 791139.2405
matches (.* at beginning and end) = true: total time: 2598, number per second: 384911.4704
find (.* at beginning and end) = true: total time: 2572, number per second: 388802.4883
matches (^.* at beginning, .*$ at end) = true: total time: 2918, number per second: 342700.4798
find (^.* at beginning, .*$ at end) = true: total time: 2648, number per second: 377643.5045

regex = \s(19|20)\d\d([-/.])(0[1-9]|1[012])\2(0[1-9]|[12][0-9]|3[01])(\s+)(\S+)(\s)(\S.+)
input = <6>main: 2010-08-01 localhost foo23: load test 23 on tty1

matches = false: total time: 128, number per second: 7812500.0000
find = true: total time: 1199, number per second: 834028.3570
matches (.* at beginning and end) = true: total time: 2570, number per second: 389105.0584
find (.* at beginning and end) = true: total time: 2554, number per second: 391542.6782
matches (^.* at beginning, .*$ at end) = true: total time: 2630, number per second: 380228.1369
find (^.* at beginning, .*$ at end) = true: total time: 2595, number per second: 385356.4547

regex = foo0: .*load test (\S+) on ((pts\/\d+)|(tty\d+))
input = <6>main: 2010-08-19 localhost foo23: load test 23 on tty1

matches = false: total time: 87, number per second: 11494252.8736
find = false: total time: 193, number per second: 5181347.1503
matches (.* at beginning and end) = false: total time: 1242, number per second: 805152.9791
find (.* at beginning and end) = false: total time: 28631, number per second: 34927.1768
matches (^.* at beginning, .*$ at end) = false: total time: 1241, number per second: 805801.7728
find (^.* at beginning, .*$ at end) = false: total time: 1242, number per second: 805152.9791

regex = foo23: .*load test (\S+) on ((pts\/\d+)|(tty\d+))
input = <6>main: 2010-08-19 localhost foo23: load test 23 on tty1

matches = false: total time: 85, number per second: 11764705.8824
find = true: total time: 873, number per second: 1145475.3723
matches (.* at beginning and end) = true: total time: 1812, number per second: 551876.3797
find (.* at beginning and end) = true: total time: 1879, number per second: 532197.9776
matches (^.* at beginning, .*$ at end) = true: total time: 1874, number per second: 533617.9296
find (^.* at beginning, .*$ at end) = true: total time: 1865, number per second: 536193.0295

regex = 1997
input = <6>main: 2010-08-19 localhost foo23: load test 23 on tty1

matches = false: total time: 80, number per second: 12500000.0000
find = false: total time: 215, number per second: 4651162.7907
matches (.* at beginning and end) = false: total time: 1339, number per second: 746825.9895
find (.* at beginning and end) = false: total time: 37722, number per second: 26509.7291
matches (^.* at beginning, .*$ at end) = false: total time: 1350, number per second: 740740.7407
find (^.* at beginning, .*$ at end) = false: total time: 1351, number per second: 740192.4500

The moral of the story is: if you’re using Matcher.find(), use no anchors and no leading .* — but in all cases, you’ll get the most deterministic behavior by always anchoring your regular expressions properly.
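The matches()/find() distinction is easy to demonstrate with a minimal sketch, using the same syslog line and pattern as the benchmarks above (single-escaped here, since this is Java source rather than an escaped config file):

```java
import java.util.regex.Pattern;

public class AnchorDemo {
    public static void main(String[] args) {
        String input = "<6>main: 2010-08-19 localhost foo23: load test 23 on tty1";
        Pattern p = Pattern.compile("foo23: .*load test (\\S+) on ((pts/\\d+)|(tty\\d+))");

        // matches() implicitly anchors at both ends of the input, so the
        // leading "<6>main: ..." prefix makes it fail:
        System.out.println(p.matcher(input).matches()); // false

        // find() scans forward for the first position where the pattern can
        // match, so it succeeds -- but that scanning is exactly what gets
        // expensive when the pattern starts with .* and nothing matches:
        System.out.println(p.matcher(input).find()); // true
    }
}
```

This is why prepending .* to “help” matches() backfires with find(): the engine can attempt the .* from every starting offset before concluding there is no match.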


New Blog

As if I don’t have enough blogs….  😉

But I wanted to write about non-techie things, and I kept putting it off, because it felt kind of weird posting them to a blog that is obviously mostly about my tech adventures.  So, I’ve set up a new blog…

If you feel like following it, go for it, if not, don’t. 🙂

Also, I’ve gone ahead and completely reworked my blog, and *cough* replaced it with WordPress, something I thought I’d never do. While WP has a somewhat sordid history and does require the upgrade train more often, it is easier to keep up-to-date, and appears to have a better track record more recently. I’d let the old blog software stagnate and found myself resisting messing with it more and more.

Let me know if you run into any issues. I think some old links will be busted, but the Google sitemap should pick up the new stuff pretty quickly, I hope.


Creating an iMix with Music from the iTunes Store

Sorry I’ve been a bit quiet lately, things have been crazy with work and I’ve only sporadically had time to update Fink stuff (incidentally, if you’re using any of my Perl module packages, I updated about 100 of them this week). I’ll be at WWDC next week if anyone wants to get together.

Anyways, as I’ve blogged about before, one of my hobbies is writing music, and I’ve been using TuneCore for all of my digital distribution to the iTunes Music Store, Amazon, etc. TuneCore has an awesome discussion list for artists using their service called the “TuneCouncil” that ranges from hobbyists like me up to producers and folks representing large and numerous big-name acts. It’s an amazing chance to level the playing field and have a real conversation between artists and others trying to find their way through the new music economy.

Recently, the subject of iMixes came up. An iMix is essentially a playlist or mix tape that you can upload to iTunes. The iMix will show up in the iTunes Store when you view the songs associated with that iMix, and people can rate them, etc. It’s a good way to find new music, based on things you already know you like. For an artist, it’s a great marketing tool, you can make playlists of music that complements your own, and get the word out. TuneCore has a tutorial on creating an iMix on their Marketing & Promotion page, but one thing it doesn’t mention is that as of iTunes 9.0, Apple has changed the interface and you can no longer put tracks you don’t currently have in your iTunes library into an iMix. Previously, you could drag songs directly from the iTunes store listing into a playlist, whether they were a part of your collection or not.

Thankfully, this is still possible if you downgrade iTunes to 8.2.1.

Removing iTunes

First, you’ll have to remove your existing copy of iTunes. Be careful deleting files. There is no warranty for my blog! If it breaks in half, you get to keep both halves! Also note that if you downgrade iTunes, you will have to delete or rename your existing iTunes directory (Home -> Music -> iTunes on Mac).

On Windows, you should be able to uninstall iTunes through the control panel.

On Mac OS X, drag iTunes from your Applications folder to the trash, and then drag the “iTunes” and/or “iTunesX” packages from the Library -> Receipts folder to the trash:

Install iTunes 8.2.1

8.2.1 was the last version that had an interface which allowed dragging tracks from the iTunes Music Store interface. You can download them here:

Create a Playlist

Now that you’ve got an old version of iTunes installed, you should be able to create a playlist (File -> New Playlist) and then go to the iTunes Store and search for your songs to add. You should be able to drag from the list on the right into your playlist:

Select your playlist on the left side, and you should see the little circle with an arrow appear next to the name. Click that, and you should have the option of creating an iMix:

That’s It!

For details and other useful marketing ideas, check the TuneCore marketing and promotion page, and the TuneCore blog, they’ve got lots of great pointers to other resources.


For A Good Cause, Shave Here

For the first time, I am participating in a St. Baldrick’s event for cancer research. It’s a great cause; I have family members and friends who are either battling cancer or are survivors themselves.

My goal is to reach $1000 in donations towards cancer research through the St. Baldrick’s Foundation. If there’s anything you can do to help, I would very much appreciate it, and there are many others out there who can benefit from your help.

Donate Here!


KDE4 Progress

I’ve been making good progress on getting KDE 4.4 (release candidates) working. It’s been quite an interesting ride, in both a good and bad way. =)

First, there’s the fun of 10.6 making it even harder to have code that forks without it accidentally exploding on the CoreFoundation fork-without-exec prohibition. I was able to solve this with a combination of fixes from macports’ kdelibs4, and some of my own code which changes things to use low-level POSIX APIs instead of Qt APIs for some bounds-checking before execution.

Next, there’s the fun of Phonon. KDE 4.4 requires a newer version of Phonon than what ships with Qt (even Qt 4.6). On OSX it gets even hinkier, since the QuickTime plugin for Phonon requires private Qt headers, so the only sane way to build it is to build the Phonon included with Qt, rather than building it as a separate project.

I ended up adapting a patch the Kubuntu folks use to inject a modern Phonon into Qt 4.6. In the process, I finally got around to learning my way around Git (and gitorious), and have set up my own Qt branch which includes my (binary incompatible outside of Fink) patch to Qt to fix plugin-building, Phonon from kdesupport, the kde-qt (formerly qt-copy) changes, and my patches to Qt that splits OSX into two platforms, Q_OS_DARWIN (i.e. use raw UNIX APIs, no Core*), and Q_OS_MAC (standard Qt/Mac).

Long story short, I’m getting there. I’ve gotten about half of KDE 4.4 RC1 built and apparently running reasonably. RC2 was just released to packagers, and I’m testing out my move to Qt 4.6.1 from 4.6.0, but once I get everything test-built on 10.6, I’ll go validate everything on 10.4 and 10.5 (including making some DBus fixes for 10.4).

After that, the next thing to tackle is Mono, and then eventually I’ll see if I can get KDE3 building/working on 10.6.


Fink and 10.6

It’s been a crazy couple of weeks. With Snow Leopard out, people are scrambling to fix packages that haven’t been fixed already. I was a slacker in running the seeds this time around, and haven’t really had much chance to give my packages a serious look until recently, but FYI, I am working on getting everything building everywhere I can.

Some notes on popular stuff:

  • KDE3: There were a number of annoying things blocking KDE3, but with the approval of some of the other maintainers, I’ve got a lot of the deps that were failing fixed up, and I’m working my way through a full KDE build and hope to have everything hunky-dory in unstable in the next few days.
  • KDE4: First of all: there will not be KDE4 on x86_64 in the near future. Qt4/Mac 64-bit does not have the Qt3Support framework, which plenty of KDE4 bits still depend on. I’ll definitely be making sure that KDE4 builds fine in 32-bit mode, and in 64-bit X11 though, and after that, well, we’ll see how much work it is to excise Qt3Support from at least the base libraries. In the process, I’m going to try to update it to KDE 4.3.1.
  • Java packages: When I packaged a lot of Java stuff for 10.4 and 10.5, I tried to build them targeting the 1.4 JDK, so it was more likely that built jars would work for most people. Unfortunately, Snow Leopard removes the 1.4 JDK, so I’m updating everything to build with the 1.5 JDK. Most stuff is handled, I’ll be fixing up other stuff as I run into them.

If you have packages that you use day-to-day, let me know, and I’ll try to get to them first. I’ve been fixing things up on a first-come, first-served basis based on reports to my maintainer email address(es).

I’ll post here on my blog if I hit any other major milestones. In the meantime, happy Finking. 🙂
