Sunday, April 24, 2022

Dynamic SSH jump hosts


Introduction

As road warriors, we have all likely created a jump host configuration that automatically connects us to a machine behind a bastion.  In the past I used a convention like the following to connect to a host on my home network while I was away:

ssh hostA.mynet.red-tux

This would SSH to hostA on my home network via the externally accessible jump box.  However, this approach begins to cause trouble if you have a git server on your internal network.  I did some digging and found people talking about the ability to create a dynamic jump configuration, but not much detail.  This is what I came up with.
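For reference, the static version of that convention is a ProxyJump entry in the client configuration; a sketch of what that looks like (ProxyJump requires OpenSSH 7.3 or later; the host names match the ones used later in this post):

```
Host *.mynet.red-tux.net
    ProxyJump bungee.red-tux.net
```

The limitation is that this always routes through the jump box, even when you are already sitting on the internal network, which is exactly what the dynamic approach below avoids.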

SSH Configuration file

First we need to edit our local client-side SSH configuration file.  Below is an example of what I use.  This configuration applies to all hosts under the "mynet.red-tux.net" domain, and that entry is the one which does the magic: it calls a script which determines whether ssh should go directly to the host or via a jump host.

~/.ssh/config

compression yes
tcpkeepalive yes
serveraliveinterval 15
serveralivecountmax 6
ForwardAgent yes

host bungee.red-tux.net
 ForwardAgent yes
 ControlPath ~/.ssh/control-%r@%h:%p
 ControlMaster auto
 ControlPersist 1

host *.lab.mynet.red-tux.net
 StrictHostKeyChecking no
 UserKnownHostsFile /dev/null

host *.mynet.red-tux.net
 ProxyCommand ~/bin/proxy_ssh.sh %h %p   

Proxy Script 

In my case this script sits in my home directory under bin, with the name "proxy_ssh.sh".  The script receives two parameters from ssh: the host you are attempting to connect to and the port you are attempting to connect to on that host.  First the whole file, then I'll break down what it does.  Essentially, the script looks up the jump host and checks whether it resolves to its internal IP address, because I run an internal DNS server for my hosts (on Red Hat IdM/FreeIPA).  If it resolves to the internal address, the script connects directly and sets up the netcat pipe that ProxyCommand needs to work correctly; otherwise it connects to the jump host first and then sets up the netcat pipe.

~/bin/proxy_ssh.sh

#!/bin/bash

#Time to cache lookup in seconds
CACHE_LIMIT=900

#Cache file to use
CACHE_FILE=/home/nelsonab/.ssh/network_lookup

#Jump host
JUMP_HOST=jump.example.com
JUMP_IP=192.168.55.5
JUMP_LOOKUP="$(host $JUMP_HOST | awk '{print $4}')"


if [[ "$1" =~ .*red-tux\.net$ ]]; then
# red-tux.net found in hostname  
 CURRENT_TIME=$(date +%s)

 CACHE_UPDATE_NEEDED=/bin/false

 if [[ -f $CACHE_FILE ]]; then
   echo "Cache found" >&2
   # # Read Cache
   # . $CACHE_FILE
   #Get age of cache
   CACHE_AGE=$(stat -c %Y $CACHE_FILE)

   if (( $(( $CURRENT_TIME - $CACHE_AGE )) > $CACHE_LIMIT )); then
   #|| "$JUMP_LOOKUP" != "$SSH_LOCATION_IP" ]]; then
     #The age of the cache is greater than the limit or , update needed
     CACHE_UPDATE_NEEDED=/bin/true
   fi
 else
   #Cache could not be found, update needed
   CACHE_UPDATE_NEEDED=/bin/true
 fi

 if $CACHE_UPDATE_NEEDED; then
   JUMP_HOST_IP=$(host $JUMP_HOST | awk '{print $4}')
   if [[ "$JUMP_HOST_IP" == "$JUMP_IP" ]]; then
     echo "SSH_LOCATION=internal" > $CACHE_FILE
   else
     echo "SSH_LOCATION=external" > $CACHE_FILE
   fi
   echo "SSH_LOCATION_IP='$JUMP_LOOKUP'" >> $CACHE_FILE
   logger "PROXY_SSH Cache file updated"
 fi
 . $CACHE_FILE

 if [[ "$SSH_LOCATION" == "internal" ]]; then
   exec nc $1 $2
 else
   exec ssh bungee.red-tux.net nc $1 $2
 fi
else
# red-tux.net not found in hostname
 exec nc $1 $2
fi

Breakdown of proxy_ssh script

First up we set a variable for how long a lookup cache is good for.  This script performs a DNS lookup, hence the desire to cache the result.  The chances of me roaming between the inside and outside of my network within the timeout period are low.

#Time to cache lookup in seconds

CACHE_LIMIT=900

#Cache file to use
CACHE_FILE=/home/nelsonab/.ssh/network_lookup

Next we set up some information about the jump host and how to look it up.  You can likely change the lookup used for the "JUMP_LOOKUP" variable without needing to change the rest of the script, but I have not tested this.
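The lookup leans on the output format of the `host` command, where the address is the fourth whitespace-separated field.  A quick way to see what awk is grabbing (a sample line piped in here, rather than a live DNS query):

```shell
# "host" prints lines like "name has address A.B.C.D"; awk grabs field 4
echo "jump.example.com has address 192.168.55.5" | awk '{print $4}'
# prints: 192.168.55.5
```

Note that if the host has multiple A records or an AAAA record, `host` prints multiple lines and this simple extraction would need adjusting.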

#Jump host
JUMP_HOST=jump.example.com
JUMP_IP=192.168.55.5
JUMP_LOOKUP="$(host $JUMP_HOST | awk '{print $4}')"

Then we run a regex on the host name passed in; if it matches our dynamic domain, we proceed.  This is more of a fallback to ensure we're only jumping for hosts in the given domain, and it is likely unnecessary, since the host configuration already performs this selection.
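Bash's `=~` operator makes this check cheap; the same pattern in isolation, with a host name from the post:

```shell
# The pattern anchors on the domain suffix; anything else falls through
if [[ "hostA.mynet.red-tux.net" =~ .*red-tux\.net$ ]]; then
  echo "matched"
else
  echo "no match"
fi
# prints: matched
```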

if [[ "$1" =~ .*red-tux\.net$ ]]; then
# red-tux.net found in hostname  
 CURRENT_TIME=$(date +%s)

We set a variable assuming that the cache does not need to be updated, and then act on this variable later, after numerous checks have been performed.  This could likely be greatly simplified, but I wanted a fallback to invalidate the cache if the IP address of the host changed during the timeout period, and never finished that part.

 CACHE_UPDATE_NEEDED=/bin/false

 if [[ -f $CACHE_FILE ]]; then
   echo "Cache found" >&2
   # # Read Cache
   # . $CACHE_FILE
   #Get age of cache
   CACHE_AGE=$(stat -c %Y $CACHE_FILE)

   if (( $(( $CURRENT_TIME - $CACHE_AGE )) > $CACHE_LIMIT )); then
   #|| "$JUMP_LOOKUP" != "$SSH_LOCATION_IP" ]]; then
     #The age of the cache is greater than the limit or , update needed
     CACHE_UPDATE_NEEDED=/bin/true
   fi
 else
   #Cache could not be found, update needed
   CACHE_UPDATE_NEEDED=/bin/true
 fi
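The freshness test compares the cache file's modification time (from `stat -c %Y`) against the current epoch time from `date +%s`.  The same arithmetic on a throwaway file (the path here is just for demonstration):

```shell
CACHE_LIMIT=900
demo=/tmp/cache_age_demo
touch "$demo"                      # freshly created, so its age is ~0 seconds
CACHE_AGE=$(stat -c %Y "$demo")    # mtime as seconds since the epoch
CURRENT_TIME=$(date +%s)
if (( CURRENT_TIME - CACHE_AGE > CACHE_LIMIT )); then
  echo "stale"
else
  echo "fresh"
fi
# prints: fresh
```

Worth noting: `stat -c %Y` is the GNU coreutils form; on BSD or macOS the equivalent is `stat -f %m`.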

Next we perform the lookup and write the result to the cache file.  I haven't fully integrated the JUMP_LOOKUP logic; I guess that's a #TODO.

 if $CACHE_UPDATE_NEEDED; then
   JUMP_HOST_IP=$(host $JUMP_HOST | awk '{print $4}')
   if [[ "$JUMP_HOST_IP" == "$JUMP_IP" ]]; then
     echo "SSH_LOCATION=internal" > $CACHE_FILE
   else
     echo "SSH_LOCATION=external" > $CACHE_FILE
   fi
   echo "SSH_LOCATION_IP='$JUMP_LOOKUP'" >> $CACHE_FILE
   logger "PROXY_SSH Cache file updated"
 fi

Next we source the cache file, since it sets the variable "SSH_LOCATION".  If the cache did not need an update, all the steps to this point execute quickly.

 . $CACHE_FILE
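Sourcing the file is what turns its KEY=value lines into shell variables.  The same mechanism in miniature (file path hypothetical):

```shell
# Write a cache file in the same KEY=value form the script uses...
echo "SSH_LOCATION=internal" > /tmp/source_demo
# ...then source it; the variable becomes visible in the current shell
. /tmp/source_demo
echo "$SSH_LOCATION"
# prints: internal
```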

If we're internal, we go directly to the host:

 if [[ "$SSH_LOCATION" == "internal" ]]; then
   exec nc $1 $2

If we're external, we go via the jump host first:

 else
   exec ssh bungee.red-tux.net nc $1 $2
 fi

Finally, if the host name didn't match the domain at all, we fall back to connecting directly:

else
# red-tux.net not found in hostname
 exec nc $1 $2
fi

Conclusion

As you can see, the overall process is straightforward.  Hopefully this helps you create a dynamic SSH jump configuration of your own.

Thursday, April 11, 2019

Leveraging the Embedded ACPI windows product key for Windows VMs using KVM

I recently purchased a Lenovo P52 and of course I deleted Windows and installed Fedora 29.  After a few initial bumps, things have been working well!

However I decided it would be beneficial to have a Windows VM and wanted to leverage the product key which came with my laptop.  This is how I was able to accomplish this:

  • Make a copy of your MSDM and SLIC ACPI tables.
The MSDM and SLIC tables contain software license keys, which are leveraged by Windows.  I placed these files in my /var/lib/libvirt directory and set the ownership of the MSDM file to qemu.

cd /var/lib/libvirt
cp /sys/firmware/acpi/tables/SLIC .
cp /sys/firmware/acpi/tables/MSDM .
chown qemu MSDM
  • Edit your VMs XML configuration file.
Next, open the XML configuration file associated with your VM.  If you have not created your VM yet, do that first.  If you're using virt-manager, just click "Begin Installation" and then power off the VM at the first screen.  In my case the VM is called "Windows10OEM."

virsh edit Windows10OEM

At the top of the file you will need to change the namespace used, as some of the options are not in the default namespace.
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

Next, in the "os" section, add the SLIC table:
<os>
  <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
  <acpi>
    <table type='slic'>/var/lib/libvirt/SLIC</table>
  </acpi>
  <bootmenu enable='no'/>
</os>

Then add a section for qemu command-line options.  This can go anywhere in the file, as long as it sits directly under the main "domain" element.

<qemu:commandline>
   <qemu:arg value='-acpitable'/>
   <qemu:arg value='file=/var/lib/libvirt/MSDM'/>
</qemu:commandline>

Close the file.

  • Start and enjoy!
I recommend starting the VM from the command line.  This way, if there is a permissions issue with one of the ACPI tables, you will be able to see the error message.
virsh start Windows10OEM

If you started an install earlier and stopped it, you may need to go to the boot settings of the VM and enable the CDROM device as a bootable device.  In addition you may need to verify that the CDROM device points to the proper location for your boot ISO.

At this point you can install Windows using the ISO installer, and if all works correctly you should never be prompted for a product key.
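One sanity check worth doing on the host is confirming the copied files really are the tables you expect: every ACPI table begins with its 4-byte signature, so `head -c 4` on each copy should print SLIC and MSDM respectively.  Demonstrated here on a stand-in file, since reading the real copies under /var/lib/libvirt typically needs root:

```shell
# Real check would be: head -c 4 /var/lib/libvirt/MSDM
# Stand-in file so the demo runs unprivileged:
printf 'MSDM-demo-table-body' > /tmp/msdm_demo
head -c 4 /tmp/msdm_demo
# prints: MSDM
```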

Wednesday, March 28, 2018

Zabbix and Performance Co-Pilot (zbxpcp)

I've been wanting to test out the integration of Zabbix and Performance Co-Pilot (PCP).  So I decided to give it a whirl, and I have to say the process of configuring the Zabbix agent to talk to PCP via the zbxpcp library is very straightforward.

The system I used is RHEL 7 with the Zabbix RPMS installed from the Zabbix SIA repository.

First install the PCP software and enable it:

# yum install pcp
# systemctl enable pmcd
# systemctl start pmcd
Next, install the interface library for the Zabbix agent; this RPM is found in the RHEL 7 Server Optional channel:

# yum install pcp-export-zabbix-agent

By default the Zabbix agent looks in a different directory for loadable modules, so let's create a symlink to fix that:

# ln -s /usr/lib64/zabbix/agent/zbxpcp.so /usr/lib64/zabbix/modules/zbxpcp.so
Next, create a configuration file for the module.  The following is the contents of the file /etc/zabbix/zabbix_agentd.d/zbxpcp.conf:

LoadModule=zbxpcp.so
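As an alternative to the symlink (untested here, but LoadModulePath is a standard zabbix_agentd.conf parameter), you could instead point the agent at the library's actual directory:

```
LoadModulePath=/usr/lib64/zabbix/agent
LoadModule=zbxpcp.so
```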
Next, test the agent and, if successful, restart it:

# zabbix_agentd -t pcp.kernel.all.sysfork
pcp.kernel.all.sysfork                        [u|17068591]

Overall this was one of the more straightforward setups I've done in a while, and it all worked out of the box on RHEL!

Wednesday, March 2, 2016

Pipe Viewer, a very useful tool

I recently came across the program pv, aka Pipe Viewer, which I found very useful for figuring out how much of a file had been read during a large database insert.  Here is the use case:
[root@zabbix dumps]# pv zabbix_data-history.sql | psql -U zabbix -W zabbix -c "COPY history (itemid, clock, value, ns) FROM stdin;"
Password for user zabbix:
98.6MiB 0:05:52 [ 292KiB/s] [==========>             ] 57% ETA 0:04:23
As you can see, the import was going to take a while; before this I had no idea how far into the import Postgres actually was.

More information can be found here: http://www.ivarch.com/programs/pv.shtml

Tuesday, April 7, 2015

Monitoring Red Hat IdM's LDAP server with SNMP

Oh SNMP, how crazy art thou!

Overall this one isn't well documented, or at least not in one place; the bits and pieces are scattered all over.  Because IdM (or IPA) is built on top of 389 Directory Server, the SNMP monitoring capability is there by default, and once you've hit your head on the bumps it's pretty straightforward.

Install the packages

At a minimum you'll only need to install the "net-snmp" package (let yum handle any dependencies); however, if you want to be able to read the SNMP variables you'll also need the "net-snmp-utils" package.

Configure net-snmp

I'm going to assume you're starting with a fresh install of net-snmp from RPM; if you have a pre-existing installation, adapt the following as needed.  At the bottom of the /etc/snmp/snmpd.conf file I added the following:

rocommunity public
view systemview included .1.3.6.1.4.1.2312
master agentx
The first line sets up the read-only community, the second allows access to the Red Hat OIDs, and the third enables the AgentX protocol for the LDAP agent.

Originally I didn't have the Red Hat OID enabled and grew a little frustrated at snmpwalk saying it had reached the end.

Configure the ldap-agent

The configuration file for the ldap-agent is found here: /etc/dirsrv/config/ldap-agent.conf

Open the file and you'll find that most of the options are configured as they need to be, however you will need to add a server entry.  In my case the following was used:
server slapd-EXAMPLE-COM
To find your server name, list the directories under /etc/dirsrv, take the one which starts with "slapd-", and use it as shown above.
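The instance name is just that slapd- directory name with the leading path stripped; a one-liner for it, demonstrated against a stand-in directory (since /etc/dirsrv only exists on a directory server):

```shell
# Simulate the /etc/dirsrv layout, then strip the leading path components
mkdir -p /tmp/dirsrv_demo/slapd-EXAMPLE-COM
ls -d /tmp/dirsrv_demo/slapd-* | sed 's|.*/||'
# prints: slapd-EXAMPLE-COM
```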

Starting it all up

I haven't dug around too deeply at this point, but it appears that the ldap-agent must be started manually on RHEL 7 with IdM.  However, that's a simple matter of the following:

/usr/sbin/ldap-agent /etc/dirsrv/config/ldap-agent.conf

Once the agent has started, the command will return to the prompt.

Testing it

From another server I pointed snmpwalk at the server and voilà, data!
[root@zabbix ~]# snmpwalk -v2c -Cp -On -c public ipa.example.com .1.3.6.1.4.1.2312
.1.3.6.1.4.1.2312.6.1.1.1.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.2.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.3.389 = Counter64: 38
.1.3.6.1.4.1.2312.6.1.1.4.389 = Counter64: 2237
.1.3.6.1.4.1.2312.6.1.1.5.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.6.389 = Counter64: 28111
.1.3.6.1.4.1.2312.6.1.1.7.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.8.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.9.389 = Counter64: 2
.1.3.6.1.4.1.2312.6.1.1.10.389 = Counter64: 2
.1.3.6.1.4.1.2312.6.1.1.11.389 = Counter64: 997
.1.3.6.1.4.1.2312.6.1.1.12.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.13.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.14.389 = Counter64: 18364
.1.3.6.1.4.1.2312.6.1.1.15.389 = Counter64: 92
.1.3.6.1.4.1.2312.6.1.1.16.389 = Counter64: 9543
.1.3.6.1.4.1.2312.6.1.1.17.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.18.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.19.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.20.389 = Counter64: 5981
.1.3.6.1.4.1.2312.6.1.1.21.389 = Counter64: 21
.1.3.6.1.4.1.2312.6.1.1.22.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.1.1.23.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.2.1.1.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.2.1.2.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.2.1.3.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.2.1.4.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.2.1.5.389 = Counter64: 0
.1.3.6.1.4.1.2312.6.5.1.1.389 = ""
.1.3.6.1.4.1.2312.6.5.1.2.389 = STRING: "389-Directory/1.3.3.1"
.1.3.6.1.4.1.2312.6.5.1.3.389 = ""
.1.3.6.1.4.1.2312.6.5.1.4.389 = ""
.1.3.6.1.4.1.2312.6.5.1.5.389 = ""
.1.3.6.1.4.1.2312.6.5.1.6.389 = ""
.1.3.6.1.4.1.2312.6.5.1.6.389 = No more variables left in this MIB View (It is past the end of the MIB tree)
It is worth noting that at this time I couldn't get the Red Hat MIB to load properly, so I had to use the OID numbers; but with a little sleuthing through the Red Hat MIB file (default location is /usr/share/dirsrv/mibs) you can figure out which OIDs to monitor.

Now I need to play with it some more to see if there is much value in the data.

Wednesday, February 18, 2015

Lenovo W540 and Fedora 21

It's been a quick succession of new portable systems.  For the last few years I've been rolling with a Mac Air; while I love the form factor and weight, I'm not a fan of the dwindling battery and the 4GB memory limit.  To that end I decided to get a beefier laptop that would let me run VMs and give me a bigger screen to work with, thus the W540.

Since my day job is working for the Shadow Man as a consultant, I'm going to run Fedora or RHEL.

Overall I've found the process of getting Fedora 21 on the W540 more challenging than getting it on the Microsoft Surface Pro 3, go figure.  That's especially perplexing since Lenovo is the laptop vendor of choice for Red Hat.

Before upgrading to F21, I recommend installing any hardware updates available via the vendor's pre-installed Windows OS.  After that I used a USB flash drive with a live ISO and installed the OS by the normal means.  Do not be surprised if the computer freezes at random moments; for me it took a few attempts before I was able to get F21 installed.  As best I can tell there is a bug with the open-source Nvidia driver, nouveau, so I would suggest focusing on getting X working before updating the OS.  For me this involved removing the nouveau driver and trying to get the proprietary drivers working from RPM before I found the eventual solution below.  I don't have the time to go back and redo this from scratch, so hopefully my notes here can help someone else.

I tried installing the nvidia driver from RPMforge, but that didn't work; Gnome kept showing a sad face and a message that "an error has occurred."  Looking at the logs (journalctl), it appeared Gnome was failing to detect 3D acceleration and thus failing.  WTF, I was running the proprietary nvidia drivers from RPMforge, so more digging was needed.

In many of my searches I kept seeing references to Bumblebee, most often on the Arch Linux wiki.  A little digging suggested this was the way forward.  Bumblebee is a project for systems with two graphics controllers and one display, such as laptops where the processor has a built-in graphics controller and there is a discrete graphics controller as well.

Fortunately there's a very useful wiki page for this: http://fedoraproject.org/wiki/Bumblebee. After following the instructions listed, everything came up without issue.  It is worth noting that in order to get Bumblebee to install I did need to remove the nvidia drivers installed from RPMforge; I had also removed the nouveau driver at an earlier point.  I was not able to test this, but I think it would be best to try installing Bumblebee before configuring RPMforge at all.

Now that X is working I can work on getting the rest of the laptop configured and copy over my old home directory.

Monday, February 9, 2015

Fedora 21 and the Microsoft Surface Pro 3

Recently I decided I'd try to upgrade from my MacBook Air running Fedora, and thus I decided to try out the Surface Pro 3.  Overall I have to say I'm rather impressed; it's not perfect and there are still some warts, but it's a functional option.  In fact, I'm writing this post on the Surface now.

The basic steps are as follows:
  1. Shrink the Windows volumes to make space to install Fedora.
  2. Put Fedora onto a USB flash drive and boot the Surface Pro from it.
  3. Install Fedora
  4. Tweak and enjoy!
For the prep steps I found the following blog useful: http://winaero.com/blog/how-to-install-linux-on-surface-pro-3/. You can safely ignore most of the distro-specific steps, with the exception of the firmware step.  Fortunately the cover keyboard, touchscreen, and pen work out of the box with Fedora 21; the touchpad mouse does not, although its buttons do.

After installing the OS, download the Marvell firmware git repo and copy the contents into the appropriate directory:
$ git clone git://git.marvell.com/mwifiex-firmware.git
# mkdir -p /lib/firmware/mrvl/
# cp mwifiex-firmware/mrvl/* /lib/firmware/mrvl/

The Wifi device may be unstable before the firmware is updated, and bluetooth will not be available.

Next you will need to add a configuration file for the touchpad mouse to work.  It is worth noting that most instructions include a matching statement for the product; I found that by removing that line, the touchpad worked after updating the kernel.

/etc/X11/xorg.conf.d/10-touchpad.conf
Section "InputClass"
    Identifier "Surface Pro 3 Cover"
    MatchIsPointer "on"
    MatchDevicePath "/dev/input/event*"
    Driver "evdev"
    Option "vendor" "045e"
#    Option "product" "07dc"
    Option "IgnoreAbsoluteAxes" "True"
EndSection
If you have not already, update your OS ("yum -y update") and reboot.  After rebooting you should have a working touchpad and bluetooth adapter, along with a much more stable wifi connection.