Posted by: peter

My Canary Smart Home Security device arrived yesterday and now that I’ve had 24 hours to play with it I thought I’d write up my impressions.

It arrived in a large box which contained the smaller Canary box as well as a really cheap North American to Australian power adapter rattling around loose. Given that I’d pre-paid 200 bucks and waited just over 18 months for this device to arrive, seeing a 50 cent adapter (which I’d never actually use due to the fire risk) as the first thing on opening the packaging was a bit jarring.

Having said that, the actual Canary box has a premium feel and was pleasing to open, with the Canary itself sitting nicely visible once you remove the lid, along with a couple of high quality cables. As expected the power adaptor was an American blade style, but as the Canary itself uses a micro USB socket for power I just plugged it into a smartphone charger that I already owned.

Device setup is done via the Canary smartphone app, which transfers the configuration over a standard audio cable plugged between the smartphone and the Canary, which is a nice touch.

This was where I hit my first snag. The Canary simply refused to connect to my Wireless LAN, so I had to arrange an ethernet cable for it before it would connect up to the internet. At this point it automatically downloaded an update (all the while keeping me updated via the smartphone app).

After the device rebooted it came up straight away and started working. As part of the setup process you are asked to pick your location from a Google map so that Canary can set up a geofence around your home, which it uses to sense when you leave home in order to automatically arm the security features of the device. Unfortunately this feature works particularly poorly and decides that I’m entering and leaving my home multiple times per hour even though I’ve been sitting on the sofa the entire time. It continued to do this several times throughout the night, with status changes between 4 and 5 am when I was most definitely asleep and my phone wasn’t moving anywhere.

The app also let me invite my wife to access the Canary device, which works as expected, giving her full access once she created an account for herself. Unfortunately she is experiencing the same geofencing issue that I am, which means that the device constantly thinks that one of us is away.

The environmental sensors are a particularly nice touch (and one of the main reasons why I backed this project rather than buying a readily available D-Link device 18 months earlier), however despite asking me my location in order to set up the geofence, the software wasn’t smart enough to figure out that, like most of the world, I have no idea what a Fahrenheit is. After poking around in the application I found the switch to convert to the Celsius temperature scale and the graph started to make sense. (The smart thing to do would be to have the app default to Celsius and select Fahrenheit automatically if you geofence a location in North America)

So far in the 24 hours that I’ve been using the device it’s spent about 6 hours “offline”, which I assume means that something is wrong with the Canary server infrastructure. If it continues like this it won’t be very useful as a security device.

Posted by: peter

While I have been using Wireshark for many years (since back when it was still called Ethereal) I only just discovered tshark, its command line version. It’s a more modern replacement for tcpdump with some very nice capabilities that make it worth learning for terminal based packet analysis, and it comes with a minimalist companion called dumpcap which can be used for packet capture only.

In my case I want to capture traffic on a fairly busy gigabit interface in order to inspect a brief event that only happens randomly once every couple of days. I do have a separate SMS alarm that triggers when the event happens thanks to OpenNMS, but by the time I have received the alarm, logged into my packet capture machine and kicked off tcpdump it’s usually too late to capture anything useful. This is where the following usage of dumpcap’s inbuilt ring buffer mode comes in handy:

    dumpcap -n -b filesize:102400 -b files:4500 -w /tmp/capture/problem.pcap

This command, when run in the background using “screen”, will continuously capture data from the network, storing it in 4500 automatically rotated, time-stamped files of 100MB each. This means that I always have the last 450GB of network traffic available to analyse without ever filling up the 500GB disk in my capture machine, which should allow me to solve the problem next time it occurs!
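One thing worth double-checking in the command above is the unit: dumpcap’s filesize option is specified in kilobytes, so 102400 really does mean 100MB per file. A quick back-of-the-envelope check of the ring buffer’s maximum footprint:

```shell
# dumpcap's -b filesize unit is kilobytes: 102400 KB = 100 MB per file
total_kb=$((102400 * 4500))
echo "$((total_kb / 1024 / 1024)) GiB"   # just under the quoted 450 GB
```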

Posted by: peter

I recently had to set up some openSUSE Linux boxes which will be used to capture ad-hoc network traffic for debugging purposes. As there will be multiple users needing to do this, I wanted to allow the use of tcpdump by non-root users. This is fairly straightforward to accomplish using file system capabilities, but as it’s not clearly documented anywhere else, here is what I came up with:

  1. First install tcpdump and libcap-progs:

    zypper install tcpdump libcap-progs
    
  2. Then create a dedicated group called pcap for users who should be able to run tcpdump and add your user to it:

    groupadd pcap
    usermod -a -G pcap peter
    
  3. Modify the group ownership and permissions of the tcpdump binary so that only users in the pcap group can run it:

    chgrp pcap /usr/sbin/tcpdump
    chmod 750 /usr/sbin/tcpdump
    
  4. Set the CAP_NET_RAW and CAP_NET_ADMIN capabilities on the tcpdump binary to allow it to run without root access (These options allow raw packet captures and network interface manipulation):

    setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump
    
  5. Optionally, check that the permissions are correct:

    # ls -l /usr/sbin/tcpdump
    -rwxr-x--- 1 root pcap 770776 Feb 19  2011 /usr/sbin/tcpdump
    
    # getcap /usr/sbin/tcpdump
    /usr/sbin/tcpdump = cap_net_admin,cap_net_raw+eip
    
  6. Optionally, symlink the tcpdump binary to a directory that is in the path for a normal user:

    ln -s /usr/sbin/tcpdump /usr/local/bin/tcpdump
    
  7. Optionally, configure the SuSEconfig permissions module so that it won’t reset the file permissions next time you run it by adding the following to the bottom of /etc/permissions.local:

    /usr/sbin/tcpdump             root:pcap       0750
     +capabilities cap_net_admin,cap_net_raw+eip
    
  8. Inform the Linux kernel that it should enable file system capabilities at boot by adding the following option to the kernel line in /boot/grub/menu.lst:

    file_caps=1
    
  9. Reboot to enable file system capabilities
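After the reboot, a quick sanity check is to attempt a minimal capture as an unprivileged member of the pcap group (the interface name here is just an example, adjust it to your system):

```
# Log out and back in first so the new pcap group membership takes effect
id | grep pcap              # confirm you are in the pcap group
tcpdump -i eth0 -c 1 -n     # should capture a single packet without root
```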

Posted by: peter

Following on from APNIC’s (Asia Pacific NIC) earlier assessment that they would need to request the last available /8 blocks, they have now been allocated 39/8 and 106/8, triggering IANA’s (Internet Assigned Numbers Authority) final distribution of blocks to the RIRs (Regional Internet Registries).

APNIC, which serves the fastest growing Internet region, is expected to be the first regional NIC to run out of IP address space, within 3 to 6 months.
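For a sense of scale, each /8 contains 2^24 addresses, so the two blocks just handed to APNIC amount to roughly 33.5 million addresses between them:

```shell
# Addresses in one /8: 2^(32-8) = 2^24
echo $((1 << 24))           # 16777216
# The 39/8 and 106/8 allocations combined
echo $(( (1 << 24) * 2 ))   # 33554432
```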

Posted by: peter

Over the last few months I have been trying to improve the management of our collection of Polycom SoundPoint IP telephones. They were initially configured by some friendly consultants with static IPs and no registration to our SIP proxy, which meant that the media servers were configured to route to the IPs of the phones instead of the extensions. (I KNOW, what on earth were they thinking?) The stupidity of this configuration became even more indefensible when it became clear that the same company had sold us both the SIP proxy, and a TFTP based telephone provisioning server!

There seems to be a fair bit of confusion and misinformation on the net about how to:

  • Tell a Polycom phone to use a dedicated Voice VLAN that is separate from an untagged PC VLAN completely automatically
  • Tell a Polycom phone to provision its (SIP) configuration data from TFTP (or HTTP) completely automatically

The good news is that both of these are trivially done with simple modifications to your DHCP server’s configuration.

I am going to assume that you have an ethernet network with VLAN ID 2 used for PC data and VLAN ID 3 used for VoIP (Quite likely you are reserving VLAN ID 1 for management, but it’s not important here)

To get a phone to pass the “PC” VLAN through to its second ethernet port while the phone itself uses a separate “Voice” VLAN, you need to do the following:

  • Configure the ethernet switch port connected to the Polycom phone as a “hybrid” trunk with VLAN ID 2 untagged and VLAN ID 3 tagged
  • Configure the DHCP server running on VLAN 2 to serve DHCP option 128 as a “String” with the contents “VLAN-A=3;”
  • (Re)boot the phone… (If you have already been manually configuring things you may want to do a factory reset of the phone to make sure you haven’t broken something)
  • At this point the phone should boot, receive an IP on VLAN 2, see that DHCP option 128 tells it to use VLAN 3, switch automatically to VLAN 3, and send out another DHCP request on that VLAN (You of course need to have a DHCP server set up on VLAN 3 as well or the phone will fail here…)
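As a sketch of the DHCP side, here is what the option 128 part might look like in ISC dhcpd syntax (the subnet, range and option name are illustrative, and option 128 sits in a site-specific range so it needs an explicit declaration):

```
# /etc/dhcpd.conf fragment for the untagged PC VLAN (VLAN 2)
option polycom-vlan code 128 = string;

subnet 192.168.2.0 netmask 255.255.255.0 {
    range 192.168.2.100 192.168.2.200;
    option polycom-vlan "VLAN-A=3;";
}
```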

This solves our VLAN configuration problem, but what about the rest of the SIP config? For that we need to tell the phone where to find our TFTP provisioning server:

  • Configure the DHCP server on VLAN 3 (the Voice VLAN) to serve option 66 as a “String” with the contents “tftp://my.tftp.server.address/” (Set this to the IP or DNS name of your TFTP server)
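Again in ISC dhcpd syntax this could look something like the following (the addresses are just examples; tftp-server-name is dhcpd’s built-in name for option 66):

```
# /etc/dhcpd.conf fragment for the Voice VLAN (VLAN 3)
subnet 192.168.3.0 netmask 255.255.255.0 {
    range 192.168.3.100 192.168.3.200;
    option tftp-server-name "tftp://192.168.3.1/";
}
```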

The setup, placement and contents of the Polycom configuration files on the TFTP server are left as an exercise for the reader as there are plenty of examples of how to do that.

Posted by: peter

Kurt Grandis carried out an awesome Django vs .NET experiment at his company:

“Almost two years ago I was in a rather unlikely situation in that I was running a software engineering department containing both a C# team and a Python team…It slowly dawned on me that I had a perfect test bed. Here we had two teams using different technology stacks within the same department…they shared the same development processes, project management tools, quality control measures, defect management processes. Everything was the same between these groups except for the technologies. Perfect! So like any good manager I turned my teams into unwitting guinea pigs.”

With the result:

“We found the average productivity of a single Django developer to be equivalent to the output generated by two C# ASP.NET developers. Given equal-sized teams, Django allowed our developers to be twice as productive as our ASP.NET team.”