
Snappy security December 10, 2014

Posted by jdstrand in canonical, security, ubuntu, ubuntu-server.

Ubuntu Core with Snappy was recently announced, and a key ingredient for snappy is security. Snappy applications are confined by AppArmor, and the confinement story for snappy is an evolution of the security model for Ubuntu Touch. The basic concepts for confined applications and the AppStore model pertain to snappy applications as well. In short, snappy applications are confined using AppArmor by default, and this is achieved through a system that is easy to understand, easy to use and developer-friendly. Read the snappy security specification for all the nitty-gritty details.

A developer doc will be published soon.

Application isolation with AppArmor – part IV June 6, 2014

Posted by jdstrand in canonical, security, ubuntu, ubuntu-server.

Last time I discussed AppArmor, I talked about new features in Ubuntu 13.10 and a bit about ApplicationConfinement for Ubuntu Touch. With the release of Ubuntu 14.04 LTS, several improvements were made:

  • Mediation of signals
  • Mediation of ptrace
  • Various policy updates for 14.04, including new tunables, better support for XDG user directories, and Unity7 abstractions
  • Parser policy compilation performance improvements
  • Google Summer of Code (SUSE-sponsored) Python rewrite of the userspace tools

Signal and ptrace mediation

Prior to Ubuntu 14.04 LTS, a confined process could send signals to other processes (subject to DAC) and ptrace other processes (subject to DAC and YAMA). AppArmor on 14.04 LTS adds mediation of both signals and ptrace which brings important security improvements for all AppArmor confined applications, such as those in the Ubuntu AppStore and qemu/kvm machines as managed by libvirt and OpenStack.

When developing policy for signal and ptrace rules, it is important to remember that AppArmor performs a cross check, verifying that:

  • the process sending the signal/performing the ptrace is allowed to send the signal to/ptrace the target process
  • the target process receiving the signal/being ptraced is allowed to receive the signal from/be ptraced by the sender process

Signal(7) mediation uses the 'signal' rule, with the 'receive' and 'send' permissions governing signals. Ptrace mediation uses the 'ptrace' rule, with the 'trace' and 'tracedby' permissions governing ptrace(2) and the 'read' and 'readby' permissions governing certain proc(5) filesystem accesses, kcmp(2), futexes (get_robust_list(2)) and perf trace events.

Consider the following denial:

Jun 6 21:39:09 localhost kernel: [221158.831933] type=1400 audit(1402083549.185:782): apparmor="DENIED" operation="ptrace" profile="foo" pid=29142 comm="cat" requested_mask="read" denied_mask="read" peer="unconfined"

This demonstrates that the ‘cat’ binary running under the ‘foo’ profile was unable to read the contents of a /proc entry (in my test, /proc/11300/environ). To allow this process to read /proc entries for unconfined processes, the following rule can be used:

ptrace (read) peer=unconfined,

If the receiving process was confined, the log entry would say 'peer="<profile name>"' and you would adjust the 'peer=unconfined' in the rule to match that in the log denial. In this case, because an implicit rule allows unconfined processes to be read by all other processes, we don't need to specify the cross check rule. If the target process was confined, the profile for the target process would need a rule like this:

ptrace (readby) peer=foo,

Likewise for signal rules, consider this denial:

Jun 6 21:53:15 localhost kernel: [222005.216619] type=1400 audit(1402084395.937:897): apparmor="DENIED" operation="signal" profile="foo" pid=29069 comm="bash" requested_mask="send" denied_mask="send" signal=term peer="unconfined"

This shows that 'bash' running under the 'foo' profile tried to send the 'term' signal to an unconfined process (in my test, I used 'kill 11300') and was blocked. Signal rules use the 'receive' and 'send' permissions to determine access, so we can add a rule like so to allow sending of the signal:

signal (send) set=("term") peer=unconfined,

Like with ptrace, a cross-check is performed with signal rules but implicit rules allow unconfined processes to send and receive signals. If pid 11300 were confined, you would adjust the ‘peer=’ in the rule of the foo profile to match the denial in the log, and then adjust the target profile to have something like:

signal (receive) set=("term") peer=foo,
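Putting the two halves of the cross check together, a minimal sketch with hypothetical profile names 'foo' and 'bar' might look like this (the ellipses stand for the rest of each profile):

# profile for the sending application
profile foo {
  ...
  # allow foo to send SIGTERM to processes confined by 'bar'
  signal (send) set=("term") peer=bar,
}

# profile for the receiving application
profile bar {
  ...
  # allow bar to receive SIGTERM from processes confined by 'foo'
  signal (receive) set=("term") peer=foo,
}

If either rule is missing, the cross check fails and the signal is denied.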

Signal and ptrace rules are very flexible and the AppArmor base abstraction in Ubuntu 14.04 LTS has several rules to help make profiling and transitioning to the new mediation easier:

# Allow other processes to read our /proc entries, futexes, perf tracing and
# kcmp for now
ptrace (readby),
 
# Allow other processes to trace us by default (they will need
# 'trace' in the first place). Administrators can override
# with:
# deny ptrace (tracedby) ...
ptrace (tracedby),
 
# Allow unconfined processes to send us signals by default
signal (receive) peer=unconfined,
 
# Allow us to signal ourselves
signal peer=@{profile_name},
 
# Checking for PID existence is quite common so add it by default for now
signal (receive, send) set=("exists"),

Note the above uses the new ‘@{profile_name}’ AppArmor variable, which is particularly handy with ptrace and signal rules. See man 5 apparmor.d for more details and examples.
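As the comments in the abstraction suggest, these defaults can be tightened for a particular application by adding explicit deny rules to its profile. A minimal sketch (deny rules take precedence over the allow rules pulled in from the base abstraction):

# in the application's profile, alongside '#include <abstractions/base>'
deny ptrace (readby),
deny ptrace (tracedby),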

14.10

Work still remains and some of the things we’d like to do for 14.10 include:

  • Finishing mediation for non-networking forms of IPC (eg, abstract sockets). This will be done in time for the phone release.
  • Have services integrate with AppArmor and the upcoming trust-store to become trusted helpers (also for phone release)
  • Continue work on networking IPC (for 15.04)
  • Continue to work with the upstream kernel on kdbus
  • Continue work on LXC stacking; we hope to have stacked profiles within the current namespace for 14.10. Full support for stacked profiles, where different host and container policy can apply to the same binary at the same time, should be ready by 15.04
  • Various fixes to the python userspace tools for remaining bugs. These will also be backported to 14.04 LTS

Until next time, enjoy!

fsck, mountall, /var and Lucid August 4, 2010

Posted by jdstrand in ubuntu, ubuntu-server.

So yesterday I rebooted a Lucid server I administer, and fsck ran. Ok, that's cool. Granted, it takes about 45 minutes on my RAID1 terabyte filesystem, but so be it. As in the past, I was slightly annoyed again that upstart/plymouth did not tell me that it was fscking my drive like my desktop does (maybe it's because this was an upgrade from Hardy and not a fresh install? It would be nice if I looked into why), but I knew that was what was happening, so I went about my business.

Until… there was a failure that had to be manually resolved by fsck. Looking at the path, it was no big deal (easily restorable), so I just needed to run 'fsck /dev/md2'. Hmmm, /dev/md2 is /var on this system, and mountall got stuck because /var couldn't be mounted. Getting slightly more annoyed, I tried to reboot with 'Ctrl+Alt+Del', but that didn't work, so I had to SysRq my way out (using Alt+SysRq+R, Alt+SysRq+S, Alt+SysRq+U, and finally Alt+SysRq+B) and boot into single user mode. Surely I could reboot to runlevel 1 and get a prompt…. 45 minutes later (ie, after another failed fsck on /var) I was shown to be wrong. Thankfully I had an amd64 10.04 Server CD handy and booted into rescue mode, which in the end worked fine for fscking my drive manually.

Why was running fsck manually so hard? Why is plymouth/upstart so quiet on my server?

It turns out that because I removed 'splash quiet' from my kernel boot options, plymouth wouldn't show the message offering to 'S' (skip) or 'M' (manually recover) /var. It was still running, though, so I could press 'm' to get to a maintenance shell (you might need to change your tty for this).

For the plymouth/upstart silence, I came across a workaround.

In short, it has you add the following to /etc/modprobe.d/blacklist-custom.conf:
blacklist vga16fb

Then adjust your kernel boot line to have:
ro nosplash INIT_VERBOSE=yes init=/sbin/init noplymouth -v
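To make those options persistent across kernel updates, a sketch assuming the machine uses GRUB 2 (an upgrade from an older release may still be on GRUB legacy, in which case edit /boot/grub/menu.lst instead):

$ sudo <your favorite editor> /etc/default/grub
# set: GRUB_CMDLINE_LINUX_DEFAULT="ro nosplash INIT_VERBOSE=yes init=/sbin/init noplymouth -v"
$ sudo update-grub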

This is not as pretty as SysV bootup, but it is verbose enough to let you know what is happening. The problem is that because there is no plymouth, there is no way to enter a maintenance shell when you need to, so the above is hard to recommend. :(

To me, it boils down to the following two choices:

  • Boot with ‘splash’ but without ‘quiet’ and lose boot messages but gain fsck feedback
  • Boot without ‘splash quiet’ and lose fsck feedback and remember you can press ‘m’ to enter a maintenance shell when there is a problem

It would really be nice to have both fsck feedback and no splash, but this doesn’t seem possible at this time. If someone knows a way to do this on Lucid, please let me know. In the meantime, I have filed bug #613562.

ClamAV update May 7, 2010

Posted by jdstrand in security, ubuntu, ubuntu-server.

Upstream ClamAV pushed out an update via freshclam that crashed versions 0.95 and earlier on 32-bit systems (Ubuntu 9.10 and earlier are affected). Upstream issued a corrected update via freshclam within 15 minutes, but an affected user's clamd daemon will not restart automatically. People running ClamAV should check that it is still running. For details see:

http://lurker.clamav.net/message/20100507.110656.573e90d7.en.html

AppArmor sVirt security driver for libvirt November 3, 2009

Posted by jdstrand in security, ubuntu, ubuntu-server.

Background
Ubuntu has been using libvirt as part of its recommended virtualization management toolkit since Ubuntu 8.04 LTS. In short, libvirt provides a set of tools and APIs for managing virtual machines (VMs) such as creating a VM, modifying its hardware configuration, starting and stopping VMs, and a whole lot more. For more information on using libvirt in Ubuntu, see http://doc.ubuntu.com/ubuntu/serverguide/C/libvirt.html.

Libvirt greatly eases the deployment and management of VMs, but because it has traditionally been limited to using POSIX ACLs and sometimes needs to perform privileged actions, using libvirt (or any virtualization technology for that matter) can create a security risk, especially when the guest VM isn't trusted. It is easy to imagine a bug in the hypervisor which would allow a compromised VM to modify other guests or files on the host. Considering that when using qemu:///system the guest VM process runs as root (this is configurable in 0.7.0 and later, but still the default in Fedora and Ubuntu 9.10), it is even more important to contain what a guest can do. To address these issues, SELinux developers created a fork of libvirt, called sVirt, which when using kvm/qemu, allows libvirt to add SELinux labels to files required for the VM to run. This work was merged back into upstream libvirt in version 0.6.1, and its implementation, features and limitations can be seen in a blog post by Dan Walsh and an article on LWN.net. This is inspired work, and the sVirt folks did a great job implementing it by using a plugin framework so that others could create different security drivers for libvirt.

AppArmor Security Driver for Libvirt
While Ubuntu has SELinux support, by default it uses AppArmor. AppArmor is different from SELinux and some other MAC systems available on Linux in that it is path-based, allows for mixing of enforcement and complain mode profiles, uses include files to ease development, and typically has a far lower barrier to entry than other popular MAC systems. It has been an important security feature of Ubuntu since 7.10, where CUPS was confined by AppArmor in the default installation (more profiles have been added with each new release).

Since virtualization is becoming more and more prevalent, improving the security stance for libvirt users is of primary concern. It was very natural to look at adding an AppArmor security driver to libvirt, and as of libvirt 0.7.2 and Ubuntu 9.10, users have just that. In terms of supported features, the AppArmor driver should be on par with the SELinux driver, where the vast majority of libvirt functionality is supported by both drivers out of the box.

Implementation
First, the libvirtd process is confined with a lenient profile that allows the libvirt daemon to launch VMs, change into another AppArmor profile and use virt-aa-helper to manipulate AppArmor profiles. virt-aa-helper is a helper application that can add, remove, modify, load and unload AppArmor profiles in a limited and restricted way. Specifically, libvirtd is not allowed to adjust anything in /sys/kernel/security directly, or modify the profiles for the virtual machines directly. Instead, libvirtd must use virt-aa-helper, which is itself run under a very restrictive AppArmor profile. Using this architecture helps prevent any opportunities for a subverted libvirtd to change its own profile (especially useful if the libvirtd profile is adjusted to be restrictive) or modify other AppArmor profiles on the system.

Next, there are several profiles that comprise the system:

  • /etc/apparmor.d/usr.sbin.libvirtd
  • /etc/apparmor.d/usr.bin.virt-aa-helper
  • /etc/apparmor.d/abstractions/libvirt-qemu
  • /etc/apparmor.d/libvirt/TEMPLATE
  • /etc/apparmor.d/libvirt/libvirt-<uuid>
  • /etc/apparmor.d/libvirt/libvirt-<uuid>.files

/etc/apparmor.d/usr.sbin.libvirtd and /etc/apparmor.d/usr.bin.virt-aa-helper define the profiles for libvirtd and virt-aa-helper (note that in libvirt 0.7.2, virt-aa-helper is located in /usr/lib/libvirt/virt-aa-helper). /etc/apparmor.d/libvirt/TEMPLATE is consulted when creating a new profile when one does not already exist. /etc/apparmor.d/abstractions/libvirt-qemu is the abstraction shared by all running VMs. /etc/apparmor.d/libvirt/libvirt-<uuid> is the unique base profile for an individual VM, and /etc/apparmor.d/libvirt/libvirt-<uuid>.files contains rules for the guest-specific files required to run this individual VM.

The confinement process is as follows (assume the VM has a libvirt UUID of ‘a22e3930-d87a-584e-22b2-1d8950212bac’):

  1. When libvirtd is started, it determines if it should use a security driver. If so, it checks which driver to use (eg SELinux or AppArmor). If libvirtd is confined by AppArmor, it will use the AppArmor security driver
  2. When a VM is started, libvirtd decides whether to ask virt-aa-helper to create a new profile or modify an existing one. If no profile exists, libvirtd asks virt-aa-helper to generate the new base profile, in this case /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac, which it does based on /etc/apparmor.d/libvirt/TEMPLATE. Notice that the new profile has a profile name based on the guest's UUID. Once the base profile is created, virt-aa-helper works the same for create and modify: it determines what files are required for the guest to run (eg kernel, initrd, disk, serial, etc), updates /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files, then loads the profile into the kernel.
  3. libvirtd proceeds as normal at this point until just before it forks a qemu/kvm process, when it calls aa_change_profile() to transition into the profile 'libvirt-a22e3930-d87a-584e-22b2-1d8950212bac' (the one virt-aa-helper loaded into the kernel in the previous step)
  4. When the VM is shutdown, libvirtd asks virt-aa-helper to remove the profile, and virt-aa-helper unloads the profile from the kernel

It should be noted that due to current limitations of AppArmor, only qemu:///system is confined by AppArmor. In practice, this is fine because qemu:///session is run as a normal user and does not have privileged access to the system like qemu:///system does.

Basic Usage
By default in Ubuntu 9.10, both AppArmor and the AppArmor security driver for libvirt are enabled, so users benefit from the AppArmor protection right away. To see if libvirtd is using the AppArmor security driver, do:

$ virsh capabilities
Connecting to uri: qemu:///system
<capabilities>
 <host>
  ...
  <secmodel>
    <model>apparmor</model>
    <doi>0</doi>
  </secmodel>
 </host>
 ...
</capabilities>

Next, start a VM and see if it is confined:

$ virsh start testqemu
Connecting to uri: qemu:///system
Domain testqemu started

$ virsh domuuid testqemu
Connecting to uri: qemu:///system
a22e3930-d87a-584e-22b2-1d8950212bac

$ sudo aa-status
apparmor module is loaded.
16 profiles are loaded.
16 profiles are in enforce mode.
...
  /usr/bin/virt-aa-helper
  /usr/sbin/libvirtd
  libvirt-a22e3930-d87a-584e-22b2-1d8950212bac
...
0 profiles are in complain mode.
6 processes have profiles defined.
6 processes are in enforce mode :
...
  libvirt-a22e3930-d87a-584e-22b2-1d8950212bac (6089)
...
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

$ ps ww 6089
PID TTY STAT TIME COMMAND
6089 ? R 0:00 /usr/bin/qemu-system-x86_64 -S -M pc-0.11 -no-kvm -m 64 -smp 1 -name testqemu -uuid a22e3930-d87a-584e-22b2-1d8950212bac -monitor unix:/var/run/libvirt/qemu/testqemu.monitor,server,nowait -boot c -drive file=/var/lib/libvirt/images/testqemu.img,if=ide,index=0,boot=on -drive file=,if=ide,media=cdrom,index=2 -net nic,macaddr=52:54:00:86:5b:6e,vlan=0,model=virtio,name=virtio.0 -net tap,fd=17,vlan=0,name=tap.0 -serial none -parallel none -usb -vnc 127.0.0.1:1 -k en-us -vga cirrus

Here is the unique, restrictive profile for this VM:

$ cat /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac
#
# This profile is for the domain whose UUID
# matches this file.
#
 
#include <tunables/global>
 
profile libvirt-a22e3930-d87a-584e-22b2-1d8950212bac {
   #include <abstractions/libvirt-qemu>
   #include <libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files>
}

$ cat /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files
# DO NOT EDIT THIS FILE DIRECTLY. IT IS MANAGED BY LIBVIRT.
  "/var/log/libvirt/**/testqemu.log" w,
  "/var/run/libvirt/**/testqemu.monitor" rw,
  "/var/run/libvirt/**/testqemu.pid" rwk,
  "/var/lib/libvirt/images/testqemu.img" rw,

Now shut it down:

$ virsh shutdown testqemu
Connecting to uri: qemu:///system
Domain testqemu is being shutdown

$ virsh domstate testqemu
Connecting to uri: qemu:///system
shut off

$ sudo aa-status | grep 'a22e3930-d87a-584e-22b2-1d8950212bac'
(no output; the profile has been unloaded from the kernel)

Advanced Usage
In general, you can forget about AppArmor confinement and just use libvirt like normal. The guests will be isolated from each other and user-space protection for the host is provided. However, the design allows for a lot of flexibility in the system. For example:

  • If you want to adjust the profile for all future, newly created VMs, adjust /etc/apparmor.d/libvirt/TEMPLATE
  • If you need to adjust access controls for all VMs, new or existing, adjust /etc/apparmor.d/abstractions/libvirt-qemu
  • If you need to adjust access controls for a single guest, adjust /etc/apparmor.d/libvirt/libvirt-<uuid>, where <uuid> is the UUID of the guest
  • To disable the driver, either adjust /etc/libvirt/qemu.conf to have 'security_driver = "none"' or remove the AppArmor profile for libvirtd from the kernel and restart libvirtd

Of course, you can also adjust the profiles for libvirtd and virt-aa-helper if desired. All the files are simple text files. See AppArmor for more information on using AppArmor in general.
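For example, to give a single guest access to a dedicated raw disk, you might edit its base profile and reload it. A sketch only; /dev/sdb and the UUID below are placeholders for your own:

$ sudo <your favorite editor> /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac
# add inside the profile block:
#   "/dev/sdb" rw,
$ sudo apparmor_parser -r /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac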

Limitations and the Future
While the sVirt framework provides good guest isolation and user-space host protection, the framework does not provide protection against in-kernel attacks (eg, where a guest process is able to access the host kernel memory). The AppArmor security driver as included in Ubuntu 9.10 also does not handle access to host devices as well as it could. Allowing a guest to access a local PCI device or USB disk is a potentially dangerous operation anyway, and the driver will block this access by default. Users can work around this by adjusting the base profile for the guest.

There are a few missing features in the sVirt model, such as labeling state files. The AppArmor driver also needs to better support host devices. Once AppArmor provides the ability for regular users to define profiles, qemu:///session can be properly supported. Finally, it will be great when distributions take advantage of libvirt's recently added ability to run guests as non-root when using qemu:///system (while the sVirt framework largely mitigates this risk, it is good to have security in depth).

Summary
With cloud computing talked about seemingly everywhere and virtualization becoming even more important in the data center, leveraging technologies like libvirt and AppArmor is a must. Virtualization removes the traditional barriers afforded to stand-alone computers, thus increasing the attack surface for hostile users and compromised guests. By using the sVirt framework in libvirt, and in particular AppArmor on Ubuntu 9.10, administrators can better defend themselves against virtualization-specific attacks. Have fun and be safe!

More Information
http://libvirt.org/
https://wiki.ubuntu.com/AppArmor
https://wiki.ubuntu.com/SecurityTeam/Specifications/AppArmorLibvirtProfile

Serving up Sftp and AppArmor September 9, 2009

Posted by jdstrand in security, ubuntu, ubuntu-server.

Recently I decided to replace NFS on a small network with something that was more secure and more resistant to network failures (as in, Nautilus wouldn't hang because of a symlink to a directory in an autofs mounted NFS share). Most importantly, I wanted something that was secure, simple and robust. I naturally thought of SFTP, but there were at least two problems with a naive SFTP implementation, both of which I decided I must solve to meet the 'secure' criterion:

  • shell access to the file server
  • SFTP can’t restrict users to a particular directory

Of course, there are other alternatives that I could have pursued: sshfs, shfs, SFTP with scponly, patching OpenSSH to support chrooting (see ‘Update’ below), NFSv4, IPsec, etc. Adding all the infrastructure to properly support NFSv4 and IPsec did not meet the simplicity requirement, and neither did running a patched OpenSSH server. sshfs, shfs, and SFTP with scponly did not really fit the bill either.

What did I come up with? A combination of SFTP, a hardened sshd configuration and AppArmor on Ubuntu.

SFTP setup
This was easy because sftp is enabled by default on Ubuntu and Debian systems. This should be enough:

$ sudo apt-get install openssh-server
Just make sure you have the following set in /etc/ssh/sshd_config (see man sshd_config for details).

Subsystem sftp /usr/lib/openssh/sftp-server

With that in place, all I needed to do was add users with strong passwords:

$ sudo adduser sftpuser1
$ sudo adduser sftpuser2

Then you can test if it worked with:

$ sftp sftpuser1@server
sftp> ls /
/bin ...

Hardened sshd
Now that SFTP is working, we need to limit access. One way to do this is via a Match rule that uses a ForceCommand. Combined with AllowUsers, adding something like this to /etc/ssh/sshd_config is pretty powerful:

AllowUsers adminuser sftpuser1@192.168.0.10 sftpuser2
Match User sftpuser1,sftpuser2
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand /usr/lib/openssh/sftp-server -l INFO

Remember to restart ssh with '/etc/init.d/ssh restart'. The above allows normal shell access for the adminuser, SFTP-only access for sftpuser1 from 192.168.0.10, and SFTP-only access for sftpuser2 from anywhere. One can imagine combining this with 'PasswordAuthentication no' or GSSAPI to enforce more stringent authentication so access is even more tightly controlled.
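Since a typo in sshd_config can lock you out of the server, it is worth validating the configuration before restarting; sshd's test mode (-t) only checks the configuration and exits:

$ sudo /usr/sbin/sshd -t && sudo /etc/init.d/ssh restart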

AppArmor
The above does a lot to increase security over the standard NFS shares that I had before. Good encryption, strong authentication and reliable UID mappings for DAC (POSIX discretionary access controls) are all in place. However, it doesn’t have the ability to confine access to a certain directory like NFS does. A simple AppArmor profile can achieve this and give even more granularity than just DAC. Imagine the following directory structure:

  • /var/exports (top-level ‘exported’ filesystem (where all the files you want to share are))
  • /var/exports/users (user-specific files, only to be accessed by the users themselves)
  • /var/exports/shared (a free-for-all shared directory where any user can put stuff. The ‘shared’ directory has ‘2775’ permissions with group ‘shared’)
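Setting up that layout might look something like the following sketch (it assumes a 'shared' group that the SFTP users are members of):

$ sudo addgroup shared
$ sudo adduser sftpuser1 shared
$ sudo adduser sftpuser2 shared
$ sudo mkdir -p /var/exports/users/sftpuser1 /var/exports/users/sftpuser2 /var/exports/shared
$ sudo chown sftpuser1: /var/exports/users/sftpuser1
$ sudo chown sftpuser2: /var/exports/users/sftpuser2
$ sudo chgrp shared /var/exports/shared
$ sudo chmod 2775 /var/exports/shared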

Now add to /etc/apparmor.d/usr.lib.openssh.sftp-server (and enable with ‘sudo apparmor_parser -r /etc/apparmor.d/usr.lib.openssh.sftp-server’):

#include <tunables/global>
/usr/lib/openssh/sftp-server {
  #include <abstractions/base>
  #include <abstractions/nameservice>
 
  # Served files
  # Need read access for every parent directory
  / r,
  /var/ r,
  /var/exports/ r,
 
  /var/exports/**/ r,
  owner /var/exports/** rwkl,
 
  # don't require ownership to read shared files
  /var/exports/shared/** rwkl,
}

This is a very simple profile for sftp-server itself, with access to files in /var/exports. Notice that by default the owner must match for any files in /var/exports, but in /var/exports/shared it does not. AppArmor works in conjunction with DAC, so that if DAC denies access, AppArmor is not consulted. If DAC permits access, then AppArmor is consulted and may deny access.
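To confirm the profile is loaded and enforcing, and to see the confinement in action, something like this works:

$ sudo aa-status | grep sftp-server
$ sftp sftpuser1@server
sftp> get /etc/fstab

The 'get' should fail even though DAC would allow it, because the profile only grants read access under /var/exports; the corresponding DENIED entry shows up in the kernel log.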

Summary
This is but one man’s implementation for a simple, secure and robust file service. There are limitations with the method as described, notably managing the sshd_config file and not supporting some traditional setups such as $HOME on NFS. That said, with a little creativity, a lot of possibilities exist for file serving with this technique. For my needs, the combination of standard OpenSSH and AppArmor on Ubuntu was very compelling. Enjoy!

Update
OpenSSH 4.8 and higher (available in Ubuntu 8.10 and later) contains the ChrootDirectory option, which may be enough for certain environments. It is simpler to set up (ie AppArmor is not required), but doesn't have the same granularity and sftp-server protection that the AppArmor method provides. See comment 32 and comment 34 for details. Combining ChrootDirectory and AppArmor would provide even more defense in depth. It's great to have so many options for secure file sharing! :)

AppArmor now available in Karmic — testing needed July 15, 2009

Posted by jdstrand in security, ubuntu, ubuntu-server.

After a lot of hard work by John Johansen and the Ubuntu kernel team, bug #375422 is well on its way to being fixed. More than just forward-ported for Ubuntu, AppArmor has been reworked to use the updated kernel infrastructure for LSMs. As seen in #apparmor on Freenode a couple of days ago:

11:24 < jjohansen> I am working to a point where I can try upstreaming again, base off of the security_path_XXX patches instead of the vfs patches
11:24 < jjohansen> so the module is mostly self contained again

These patches are in the latest 9.10 kernel, and help is needed testing AppArmor in Karmic. To get started, verify you have at least 2.6.31-3.19-generic:

$ cat /proc/version_signature
Ubuntu 2.6.31-3.19-generic

AppArmor will be enabled by default for Karmic just like in previous Ubuntu releases, but it is off for now until a few kinks are worked out. To test it right away, you’ll need to reboot, adding ‘security=apparmor’ to the kernel command line. Then fire up ‘aa-status’ to see if it is enabled. A fresh install of 9.10 as of today should look something like:

$ sudo aa-status
apparmor module is loaded.
8 profiles are loaded.
8 profiles are in enforce mode.
/usr/lib/connman/scripts/dhclient-script
/usr/share/gdm/guest-session/Xsession
/usr/sbin/tcpdump
/usr/lib/cups/backend/cups-pdf
/sbin/dhclient3
/usr/sbin/cupsd
/sbin/dhclient-script
/usr/lib/NetworkManager/nm-dhcp-client.action
0 profiles are in complain mode.
2 processes have profiles defined.
2 processes are in enforce mode :
/sbin/dhclient3 (3271)
/usr/sbin/cupsd (2645)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Please throw all your crazy profiles at it, as well as test the packages with existing profiles, then file bugs:

  • For the kernel, add your comments (positive and negative) to bug #375422
  • AppArmor tools bugs should be filed with ‘ubuntu-bug apparmor’
  • Profile bugs should be filed against the individual source package with ‘ubuntu-bug <source package name>’. See DebuggingApparmor for details.

Thank you Ubuntu Kernel team and especially John for all the hard work on getting the “MAC system for human beings” (as I like to call it) not only working again, but upstreamable — this is really great stuff! :)

Should I disable AppArmor? July 7, 2009

Posted by jdstrand in security, ubuntu, ubuntu-server.

Short answer: Of course not!

Longer answer:
Ubuntu has had AppArmor1 on by default for a while now, and with each new release more and more profiles are added. The Ubuntu community has worked hard to make the installed profiles work well, and by and large, most people happily use their Ubuntu systems without noticing AppArmor is even there.

Of course, like with any software, there are bugs. I know these AppArmor profile bugs can be frustrating, but because AppArmor is a path-based system, diagnosing, fixing and even working around profile bugs is actually quite easy. AppArmor has the ability to disable specific profiles rather than simply turning it on or off, yet I’ve seen people in IRC and forums advise others to disable AppArmor completely. This is totally misguided and YOU SHOULD NEVER DISABLE APPARMOR ENTIRELY to work around a profiling problem. That is like trying to open your front door with dynamite: it will work, but it’ll leave a big hole and you’ll likely hurt yourself. Think about it, on my regular ol’ Jaunty laptop system, I have 4 profiles in place installed via Ubuntu packages (to see the profiles on your system, look in /etc/apparmor.d). Why would I want to disable all of AppArmor (and therefore all of those profiles) instead of dealing with just the one that is causing me problems? Obviously, the more software you install with AppArmor protection, the more you have to lose by disabling AppArmor completely.

So, when dealing with a profile bug, there are only a few things you need to know:

  1. AppArmor messages show up in /var/log/kern.log (by default)
  2. AppArmor profiles are located in /etc/apparmor.d
  3. The profile name is, by convention, <absolute path with '/' replaced by '.'>. E.g. '/etc/apparmor.d/sbin.dhclient3' is the profile for '/sbin/dhclient3'.
  4. Profiles are simple text files

With this in mind, let’s say tcpdump is misbehaving. You can check /var/log/kern.log for entries like:

Jul 7 12:21:15 laptop kernel: [272480.175323] type=1503 audit(1246987275.634:324): operation="inode_create" requested_mask="a::" denied_mask="a::" fsuid=0 name="/opt/foo.out" pid=24113 profile="/usr/sbin/tcpdump"

That looks complicated, but it isn't really, and it tells you everything you need to know to file a bug and fix the problem yourself. Specifically, "/usr/sbin/tcpdump" was denied "a" (append) access to "/opt/foo.out".

So now what?

If you are using the program with a default configuration or non-default but common configuration, then by all means, file a bug. If unsure, ask on IRC, on a mailing list or just file it anyway.

If you are a non-technical user or just need to put debugging this issue on hold, then you can disable this specific profile (there are other ways of doing this, but this method works best):

$ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.tcpdump
$ sudo ln -s /etc/apparmor.d/usr.sbin.tcpdump /etc/apparmor.d/disable/usr.sbin.tcpdump

What that does is remove the profile for tcpdump from the kernel, then disables the profile so that AppArmor won't load it when it is started (eg, on reboot). Now you can use the application without AppArmor protection, while leaving all the other applications with profiles protected.

If you are technically minded, dive into /etc/apparmor.d/usr.sbin.tcpdump and adjust the profile, then reload it:

$ sudo <your favorite editor> /etc/apparmor.d/usr.sbin.tcpdump
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.tcpdump
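For the tcpdump denial shown earlier, the fix could be as small as a single file rule (a sketch; adjust the path and permissions to match your own log entry):

  # allow tcpdump to write its capture file
  /opt/foo.out a,

A broader rule such as '/opt/** rw,' would also work, at the cost of granting more access than the denial strictly requires.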

This will likely be an iterative process, but you can base your new or updated rules on what is already in the profile; it is pretty straightforward. After a couple of times, it will be second nature and you might want to start contributing to developing new profiles. Once the profile is working for you, please add your proposed fix to the bug report you filed earlier.

The DebuggingApparmor page has information on how to triage, fix and work-around AppArmor profile bugs. To learn more about AppArmor and the most frequently used access rules, install the apparmor-docs package, and read /usr/share/doc/apparmor-docs/techdoc.pdf.gz.

1. For those of you who don’t know, AppArmor is a path-based (as opposed to SELinux, which is inode-based) mandatory access control (MAC) system that limits access a process has to a predefined set of files and operations. These access controls are known as ‘profiles’ in AppArmor parlance.

ext4 on Ubuntu 9.04 July 4, 2009

Posted by jdstrand in ubuntu, ubuntu-server.

It all started back in the good ol’ days of the Jaunty development cycle when I heard this new fangled filesystem thingy called ext4 was going to be an option in Jaunty. It claimed to be faster with much shorter fsck times. So, like any good Ubuntu developer, I tried it. It was indeed noticeably faster and fsck times were much improved.

The honeymoon was over when I ended up hitting bug #317781. That was no fun, as it ate several virtual machines and quite a few other things (I had backups for all but the VMs). This machine is on a UPS, uses RAID1, and runs on modern hardware (a dual-core Intel system with 4GB of RAM). In other words, this is not some flaky system, but one that normally is only rebooted when there is a kernel upgrade (well, that is a white lie, but you get the point: it is a stable machine). According to Ted Ts'o, I shouldn't be seeing this. Frustrated, I spent the better part of a weekend shuffling disks around to try to move my data off of ext4 and reformat the drive back to ext3. I was, how shall I put it, disappointed.

Some welcome patches were applied to Jaunty’s kernel soon after that to make ext4 behave more like ext3. By all accounts this stops ext4 from eating files under adverse conditions. So, now not only does the filesystem perform well, it doesn’t eat files. Life was good…

… until I noticed that under certain conditions I would get a total system freeze. Naturally, there was nothing in the logs (something I always appreciate ;). I thought it might have been several things, but I couldn’t prove any of them. Yesterday, however, I was able to reliably freeze my system. Basically, I was compiling a Jaunty kernel (2.6.28) in a schroot and using this command:

$ CONCURRENCY_LEVEL=2 AUTOBUILD=1 NOEXTRAS=1 skipabi=true fakeroot debian/rules binary-generic
Things were going along fine until I tried to delete a ton of files in a deep directory during the compile, then it would freeze. I was able to reproduce this 3 times in a row. Finally, I shuffled some things around and put my /home partition back on ext3 and I could not reproduce the freeze. There are several bugs in Launchpad talking about ext4 and system freezes, and after a bit more research I will add my comments, but for now, I am simply hopeful I will not see the freezes any more.

To be fair, ext4 is not the default filesystem on 9.04, and while it is supposed to be in 9.10, people I know running ext4 on 9.10 aren’t seeing these problems (yes, I’ve asked around). I do continue to use ext4 for ‘/’ on Jaunty systems with a separate /home partition as ext3, because it really does perform better, and this seems to be a good compromise. Having been burned by ext4 a couple of times now, I think it’ll probably be awhile before I trust ext4 for my important data though. Time will tell. :)

ufw status June 26, 2009

Posted by jdstrand in security, ubuntu, ubuntu-server.

While not exactly news, as it happened sometime last month, ufw is now in Debian and is even available in Squeeze. What is new is that the fine folks in Debian have started to translate the debconf strings in ufw, and in the process the strings themselves have gotten much better. Thanks Debian!

In other news, ufw trunk now has support for filtering by interface. To use it, do something like:

$ sudo ufw allow in on eth0 to any port 80
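A few more interface examples (a sketch; eth0 and eth1 are placeholders for your own interfaces, and a rule is removed by repeating it with 'delete'):

$ sudo ufw deny in on eth1 to any port 22
$ sudo ufw delete allow in on eth0 to any port 80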

See the man page for more information. This feature will be in ufw 0.28, which is targeted for Ubuntu Karmic, and I also hope to add egress filtering this cycle. I haven't started on egress filtering yet, but I have a good idea on how to proceed. Stay tuned!
