Cisco ASA and Squid WCCP on Ubuntu

In order to use WCCP with Squid, it must be built with WCCP support. Unfortunately the default apt-get install squid (or squid3) package doesn't support WCCP out of the box, so it has to be built from source.

Assuming you've built Squid with WCCP support (using my guide or not), the following is how to get WCCP working between a Cisco ASA and Squid on Ubuntu.

There is a huge gotcha with the Cisco ASA: it only supports GRE, and the clients and Squid have to be in the same subnet. You can get around this by using multiple dynamic instances, but for most of my audience I don't think this is a problem. If I get requests for instructions on that, perhaps I'll look into it.

Here are the variables I’m working with:
LAN: 192.168.10.0/24
ASA LAN IP Address: 192.168.10.1
SQUID eth0 IP Address: 192.168.10.80

#1. Configure the ASA:
CLI:

access-list WCCP_SERVERS extended permit ip host 192.168.10.80 any
access-list LAN_WCCP_REDIRECT extended permit tcp 192.168.10.0 255.255.255.0 any eq www
wccp web-cache redirect-list LAN_WCCP_REDIRECT group-list WCCP_SERVERS password *****
wccp interface LAN web-cache redirect in

Most guides will tell you that you need to deny the Squid LAN IP, but that's not true; the ASA will do it automagically. Note that ACL names on the ASA are case-sensitive, so keep them consistent as above.

#2. Configure Squid:
Add the following to /etc/squid/squid.conf:

# Listen for intercepted (redirected) traffic
http_port 3129 intercept
# The ASA's LAN-facing IP address
wccp2_router 192.168.10.1
wccp2_forwarding_method gre
wccp2_return_method gre
# "standard 0" is the web-cache service group; the password must match the ASA's
wccp2_service standard 0 password=*****

Reconfigure squid to use the new config:

squid -k reconfigure
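
If the reconfigure throws errors, you can validate the config file syntax directly with:

squid -k parse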

Now here's an important part that almost all guides fail to mention. The ASA will pick a Router Identifier, the address of its highest-addressed interface, once Squid tries to register with it for WCCP. You need to get that Router Identifier from the ASA:

show wccp

For our purposes let’s say that Router Identifier is 192.168.254.1

We need to create a script that runs when eth0 comes up, creates the GRE interface, and permits the WCCP traffic. So let's create the following file:
vi /etc/network/if-up.d/wccp.sh

#!/bin/bash
if [ "$IFACE" == "eth0" ]; then
    # Load the GRE module and build the tunnel to the ASA's Router Identifier
    modprobe ip_gre
    ip tunnel add wccp0 mode gre remote 192.168.254.1 local 192.168.10.80 dev eth0
    ifconfig wccp0 192.168.10.80 netmask 255.255.255.255 up
    # Enable forwarding and relax reverse-path filtering so redirected
    # packets arriving via wccp0 aren't dropped
    echo 1 > /proc/sys/net/ipv4/ip_forward
    echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
    echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter
    # Hand the redirected port 80 traffic to Squid's intercept port
    iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT --to-port 3129
fi
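
Also make the script executable; otherwise the post-up hook we add next won't be able to run it:

chmod +x /etc/network/if-up.d/wccp.sh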

Notice that I use the Router ID and not the LAN IP for the remote tunnel IP.

Finally, we need to tell the system to run that script when eth0 comes up. So edit the interfaces file (/etc/network/interfaces) and include the following line under the iface eth0 inet stanza:

post-up /etc/network/if-up.d/wccp.sh

You should now be able to restart networking (or just reboot the system) and it should be working.
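
If it isn't, a couple of quick checks on the Squid box will confirm the pieces are in place (these are standard iproute2/iptables commands, nothing WCCP-specific), and show wccp on the ASA should report a cache engine registered:

# The GRE tunnel should exist
ip tunnel show wccp0
# The REDIRECT rule should be present and its packet counters climbing
iptables -t nat -L PREROUTING -n -v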

Install Squid 3.5 from Source on Ubuntu 16.04

The following assumes a fresh install of Ubuntu 16.04.1 LTS.

First I'm going to modify apt's sources.list to allow downloading of package sources, so that Squid's build dependencies can be resolved automatically.

sed -i 's/# deb-src/deb-src/g' /etc/apt/sources.list

Now I’m going to fully update the OS:

apt-get update
apt-get -y dist-upgrade
reboot

Next I would normally install the Hyper-V tools because my Squid box is going to be a Hyper-V VM, but if you're not using Hyper-V you can skip this:

apt-get -y install --install-recommends linux-virtual-lts-xenial linux-tools-virtual-lts-xenial linux-cloud-tools-virtual-lts-xenial

Now build the squid dependencies:

apt-get build-dep squid

At the time of this writing the latest stable version is 3.5.24 so download it:

wget http://www.squid-cache.org/Versions/v3/3.5/squid-3.5.24.tar.gz

extract it:

tar -xzf squid-3.5.24.tar.gz
cd squid-3.5.24

Now we need to configure and build it. In my case I wanted to add support for WCCP, which is why I needed to build from source. Below are the default configuration options Ubuntu uses if you just apt-get install squid; modify them if you need/want:

./configure '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' 'BUILDCXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--libexecdir=/usr/lib/squid' '--mandir=/usr/share/man' '--enable-inline' '--disable-arch-native' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB' '--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper' '--enable-auth-ntlm=fake,smb_lm' '--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group' '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi' '--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation' '--with-swapdir=/var/spool/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-build-info=Ubuntu linux' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security'

In my case I was perfectly happy with the Ubuntu defaults, so I just added '--enable-wccpv2' to that list of configuration options in order to support WCCP.

Now you can make and install it:

make
make install
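
Once installed, you can confirm WCCP support actually made it into the binary; squid -v prints the version along with the full list of configure options:

/usr/sbin/squid -v | grep wccpv2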

You can also optionally install the pinger:

make install-pinger

Now that Squid is installed you need to set some permissions:

chown -R proxy:proxy /etc/squid /var/log/squid

Those are the minimum permissions to set; however, if you plan on using caching, also include your cache folder location in that chown.

Now in order to get Squid to start at boot do this:

cp ./tools/systemd/squid.service /etc/systemd/system/
systemctl enable squid

I suggest restarting the system at this point and verify everything comes up nicely.
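
If you'd rather verify before rebooting, you can bring it up right away:

systemctl daemon-reload    # in case systemd hasn't picked up the new unit yet
systemctl start squid
systemctl status squid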

Automating Virtual Machine File Server Updates and Reboots

This post is mainly to share a few scripts I have written which automate the Windows Update of my Server 2016 Core File Server, which hosts all of the VHDX files for my Hyper-V cluster. As you might be aware, restarting the server that hosts the hard drives of running VMs can be pretty painful and usually not all that easy (you have to pause or shut down all the VMs and make sure you don't break the Hyper-V cluster).

So the basic setup is:
File Server: FS01
Hyper-V Cluster: HVC01
Workstation: Windows 10 Pro x64

The first thing we need to do is install the remote WSUS tools onto the File Server and on the Workstation. Those can be found here:
Windows Update PowerShell Module

The easiest way to install these is to simply open up PowerShell as an Administrator and type the following:
Install-Module PSWindowsUpdate

Use remote PowerShell to install it onto the File Server by invoking the same command there:
Invoke-Command -ComputerName FS01 -ScriptBlock { Install-Module PSWindowsUpdate }
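
To confirm the module is usable on both ends, listing its commands is a quick sanity check; you should see Get-WUInstall and Invoke-WUInstall among them:

Get-Command -Module PSWindowsUpdate
Invoke-Command -ComputerName FS01 -ScriptBlock { Get-Command -Module PSWindowsUpdate }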

Now we need a script to do the following:
#1. Tell FS01 to check for updates, but do not restart
#2. Wait for FS01 to finish installing updates
#3. Check if a restart is required

So let’s get to it:

Function Update-FS01
{   
    param([switch]$OptimizeOnReboot)

    Write-Output "Starting FS01 update process."
    $Script = {Import-Module PSWindowsUpdate; Get-WUInstall -AcceptAll -IgnoreReboot -IgnoreUserInput | Out-File C:\Temp\PSWindowsUpdate.log -Force;}
    Invoke-WUInstall -ComputerName FS01 -Script $Script -Confirm:$false
    
    Write-Output "Waiting for update task to complete."
    Start-Sleep -Seconds 30

    While ((Invoke-Command -ComputerName FS01 -ScriptBlock {Get-ScheduledTask | Where-Object { $_.TaskName -eq "PSWindowsUpdate" }}).State -match "Running|4")
    {
        Write-Output "Task still running. Waiting 30 seconds..."
        Start-Sleep -Seconds 30
    }

    Write-Output "FS01 Update task completed."
    if(Get-WURebootStatus -ComputerName FS01 -Silent)
    {
        Write-Output "Reboot Required"
        if ($OptimizeOnReboot)
        {
            Reboot-FS01 -Optimize
        }
        else
        {
            Reboot-FS01
        }
    }

    & "C:Program FilesNotepad++notepad++.exe" "\FS01C`$TempPSWindowsUpdate.log"
}

For now let's ignore the OptimizeOnReboot parameter; I'll come back to it. We start by creating a script block that we will send to FS01. That script block starts a scheduled task on FS01 which runs immediately. The options I am using here are:
AcceptAll: Do not ask for confirmation; install all available updates.
IgnoreReboot: Do not ask for a reboot if one is needed, and do not reboot automatically.
IgnoreUserInput: Skip updates whose installation would prompt for user input.

We're then using Invoke-WUInstall against FS01 with our script, telling it not to ask for confirmation. This, again, creates a scheduled task on FS01 called "PSWindowsUpdate" that runs immediately.

Now we're going to wait for it to finish with the while loop, which checks the status of the scheduled task every 30 seconds until it is complete.

Once complete we use “Get-WURebootStatus” to determine if a restart is required from the update. If it is we’ll launch another script that reboots the server, and optionally performs an optimization of the VHDX files prior to restarting FS01.

Finally, when the reboot is complete we'll launch Notepad++ to load the results of the Windows Update process. Note that C:\Temp should exist on the File Server; if it doesn't, you should create it or choose a different location in the script block to save to. Also, if you don't have Notepad++, just change the whole "C:\Program Files\Notepad++\notepad++.exe" to simply "notepad".

But wait, where’s the reboot and optimization script? Here:

Function Reboot-FS01
{
    Param
    (
        [Switch]$Optimize
    )
    
    # Additional time to wait after FS01 reboots for stability and cluster health
    $FSWWait = 30
    $VMStartWait = 30
    
    Workflow Stop-RunningVirtualMachines
    {
        param($VirtualMachines)
        ForEach -Parallel($VM in $VirtualMachines)
        {
            InlineScript
            {
                Invoke-Command -ComputerName $Using:VM[1] -ScriptBlock {
                    param($VMName)
                    Stop-VM -Name $VMName | Out-Null
                } -ArgumentList $Using:VM[0]
            }
        }
    }
    
    Workflow Start-RunningVirtualMachines
    {
        param($VirtualMachines)
        ForEach -Parallel($VM in $VirtualMachines)
        {
            InlineScript
            {
                Invoke-Command -ComputerName $Using:VM[1] -ScriptBlock {
                    param($VMName)
                    Start-VM -Name $VMName | Out-Null
                } -ArgumentList $Using:VM[0]
            }
        }
    }
    
    WorkFlow Optimize-VHDs
    {
        param($VirtualMachines)
        ForEach -Parallel($VM in $VirtualMachines)
        {
            InlineScript
            {
                Invoke-Command -ComputerName $Using:VM[1] -ScriptBlock {
                    param($VMname)
                    ForEach($VHD in ((Get-VMHardDiskDrive -VMName $VMname).Path)){
                        Mount-VHD -Path $VHD -NoDriveLetter -ReadOnly
                        Optimize-VHD -Path $VHD -Mode Full
                        Dismount-VHD -Path $VHD
                    }
                } -ArgumentList $Using:VM[0]
            }
        }
    }
    
    # Getting All Virtual Machines
    $AllVirtualMachines = New-Object System.Collections.ArrayList
    Get-ClusterResource -Cluster HVC01 | Where-Object {$_.ResourceType -eq "Virtual Machine"} | ForEach-Object { $AllVirtualMachines.Add(@($_.OwnerGroup.Name,$_.OwnerNode.Name,$_.State)) | Out-Null }
    
    # Selecting Running Virtual Machines
    $RunningVirtualMachines = New-Object System.Collections.ArrayList
    $AllVirtualMachines | Where-Object { $_[2] -eq "Online" } | ForEach-Object { $RunningVirtualMachines.Add(@($_[0],$_[1])) | Out-Null }
    
    Write-Output "Stopping Running VMs"
    Stop-RunningVirtualMachines $RunningVirtualMachines
    
    if ($Optimize)
    {
        Write-Output "Optimizing VHDs of all Virtual Machines"
        Optimize-VHDs $AllVirtualMachines
        Write-Output "Finished with Optimizations"
    }
    
    Write-Output "Stopping File Share Witness"
    $FSW = Get-ClusterResource -Cluster HVC01 -Name "File Share Witness"
    $FSW | Stop-ClusterResource | Out-Null
    
    Write-Output "`nRebooting FS01`n"
    Restart-Computer -ComputerName FS01 -Force -Wait
    
    Write-Output "FS01 Reboot Complete. Waiting $FSWWait seconds to bring File Share Witness Online"
    Start-Sleep -Seconds $FSWWait
    
    Write-Output "Bringing File Share Witness Online"
    $FSW | Start-ClusterResource | Out-Null
    
    Write-Output "Waiting an additional $VMStartWait seconds to start previously running Virtual Machines"
    Start-Sleep -Seconds $VMStartWait
    
    Write-Output "Starting Previously Running VMs"
    Start-RunningVirtualMachines $RunningVirtualMachines
    
    Write-Output "`nDone"
}

Now this script is a bit more complicated because it uses workflows to make the starting, stopping, and optimization tasks run in parallel (waiting for these one at a time sucks if you have more than a couple of VMs).

So the workflows should be pretty self-explanatory (more on the array shape after this list):
Stop-RunningVirtualMachines: Takes an array of Virtual Machines ( “VM Name”, “VM Host” ) and issues the Stop-VM command on that VM's current host.
Start-RunningVirtualMachines: Takes an array of Virtual Machines ( “VM Name”, “VM Host” ) and issues the Start-VM command on that VM's current host.
Optimize-VHDs: Takes an array of Virtual Machines ( “VM Name”, “VM Host” ), mounts each of the hard disks for that VM on its current host, and runs a VHDX optimization pass.
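
Each element in those arrays is itself a two-element array, which is why the workflows index into $VM[0] (the VM name) and $VM[1] (the host). A minimal sketch of the shape, using made-up VM and host names:

# Each entry is @( VM name, current owner node ) -- the names here are hypothetical
$RunningVirtualMachines = New-Object System.Collections.ArrayList
$RunningVirtualMachines.Add(@("DC01","HV-NODE1")) | Out-Null
$RunningVirtualMachines.Add(@("WEB01","HV-NODE2")) | Out-Null
# Inside a workflow iteration: $VM[0] -> "DC01" and $VM[1] -> "HV-NODE1"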

If you're asking why not just issue the commands against the cluster: I mostly did it this way to spread the load. When you issue commands to the cluster (HVC01), they all go through the cluster master node.

Now to the code:

First we get all of the Virtual Machines in the cluster, then we build a list of just the running ones. We pass that running list into the Stop-RunningVirtualMachines workflow.

If the -Optimize switch has been used, we then optimize all of the Virtual Machine hard drives. This can take a while depending on how many VHDX files there are and how big they are, but it gets done in parallel, so expect a lot of disk IO.

Next we’ll stop the File Share Witness (I use the File Server as a File Share Witness for the cluster quorum).

Now that all VMs are off, the VHDX files have (or haven't) been optimized, and the File Share Witness is offline, we simply reboot FS01 and wait for it to come back.

Once the File Server has rebooted we start the File Share Witness, pause, and then start all of the previously running VMs with Start-RunningVirtualMachines.

At this point the script returns to Update-FS01, which opens the update log.
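
To put it all together, dot-source a file containing both functions and kick off a run (the file name below is just an example, not something from this post):

# Load Update-FS01 and Reboot-FS01 into the current session
. .\UpdateFS01.ps1

# Patch FS01 and, if a reboot is required, optimize every VHDX before restarting
Update-FS01 -OptimizeOnReboot

# Or patch and reboot without the (slow) optimization pass
Update-FS01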