Running owncloud on Gentoo stable

As I migrated to a clean data layout (see previous post), I decided to be a cool&trendy guy and fire up my own lovely cloudy service.

At first my thinking was a bit off the regular setup, because even though we have an in-tree ebuild of owncloud, it hard-requires apache, which I find overkill here.

So let me introduce you to a secret approach to make it work with nginx and sqlite3. Before you say that I should use *insertothercooldbname*, keep in mind that my deployment is only for a handful of users; I tested it with 5 users connected at once, each of them having access to a 1 TB shared datastore, and it proved fast enough.

Preparing keywords/useflags/etc

Well, owncloud is still in testing, so keyword it:

scarabeus@htpc: /etc/portage $ cat package.keywords/own-cloud

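A typical entry for this file looks like the following (the exact atom and keyword may differ on your arch):

```
www-apps/owncloud ~amd64
```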
We need dav for direct access, plus some php stuff for the setup (some of the USE flags might be useless or redundant):

scarabeus@htpc: /etc/portage $ cat package.use/own-cloud
dev-lang/php pdo sqlite3 curl xmlwriter gd truetype cgi force-cgi-redirect fpm
www-servers/nginx nginx_modules_http_dav

Now silently punt the apache away as we love nginx:

scarabeus@htpc: /etc/portage $ cat make.profile/package.provided

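package.provided wants a full versioned atom telling portage to pretend apache is installed; something along these lines (the version is illustrative):

```
www-servers/apache-2.2.22
```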
And put all this to good use by emerging required stuff:

emerge -v www-servers/nginx www-apps/owncloud

Setting up the stuff

As nginx does not ship any fcgi implementation of its own, we will use the FPM from php directly. For that we need to add it to the default runlevel (rc-update add php-fpm default) and tune the default number of spawned servers a bit (the config is in /etc/php/fpm-php5.4/php-fpm.conf). Also remember to set a proper user/group there, or you won't be able to store content in your cloud, just read from it.
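The relevant knobs in php-fpm.conf look something like this (the values are just a sane starting point for a handful of users; the user/group must match the owner of your owncloud data directory, and the listen address must match what you point nginx at):

```
user = nginx
group = nginx
listen = 127.0.0.1:9000
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```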

Then we set up nginx (/etc/nginx/nginx.conf and /etc/nginx/fastcgi_params). To keep this short and easy I will just post the config I used and let you google for other nginx variables.
First the conf file:

        server {
                listen 80;
                server_name hostname;
                rewrite ^ https://$server_name$request_uri? permanent;  # enforce https
        }

        server {
                listen 443;
                server_name hostname;

                ssl on;
                ssl_certificate /etc/ssl/nginx/nginx.crt;
                ssl_certificate_key /etc/ssl/nginx/nginx.key;

                access_log /var/log/nginx/htpc.access_log main;
                error_log /var/log/nginx/htpc.error_log info;

                root /var/www/htpc/htdocs/owncloud/;

                client_max_body_size 8M;
                create_full_put_path on;
                dav_access user:rw group:rw all:r;

                index index.php;

                location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
                        deny all;
                }

                location / {
                        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
                        rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
                        rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
                        rewrite ^/apps/calendar/caldav.php /remote.php/caldav/ last;
                        rewrite ^/apps/contacts/carddav.php /remote.php/carddav/ last;
                        rewrite ^/apps/([^/]*)/(.*\.(css|php))$ /index.php?app=$1&getfile=$2 last;
                        rewrite ^/remote/(.*) /remote.php/$1 last;

                        try_files $uri $uri/ @webdav;
                }

                location @webdav {
                        fastcgi_split_path_info ^(.+\.php)(/.*)$;
                        include fastcgi_params;
                        fastcgi_param HTTPS on;
                        fastcgi_pass 127.0.0.1:9000;  # match your php-fpm listen address
                }

                location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
                        expires 30d;
                        access_log off;
                }

                location ~ \.php$ {
                        try_files $uri =404;
                        fastcgi_split_path_info ^(.+\.php)(/.*)$;
                        include fastcgi_params;
                        fastcgi_index index.php;
                        fastcgi_intercept_errors on;
                        fastcgi_pass 127.0.0.1:9000;  # match your php-fpm listen address
                }
        }
For the fcgi we also need some params to make the webdav work:

fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param   SCRIPT_NAME     $fastcgi_script_name;
fastcgi_param   PATH_INFO       $fastcgi_path_info;

That should be it, now we just deploy the owncloud to our webserver by webapp-config:

/usr/sbin/webapp-config -I -h htpc -u root -d /owncloud owncloud 4.0.7

After we start up the webserver and the fcgi provider, we should be up and running and able to open the whole thing in a web browser.

A few issues I didn't manage to sort out in owncloud

  • The external module for loading all system users into owncloud does not pass auth
  • Google sync just times out every time I try it (maybe I just have damn huge content here)
  • External storage support from within owncloud didn't work for me; I just symlinked the data folder to the proper places under each user, logged into them in a browser, waited for 3 hours (1 TB of data to index), and then they were able to access everything.

Migrating disk layout from mess to raid1

Imagine you are a dumb guy like me: the first thing I did was to set up three 1 TB disks as one huge LVM, copy my data onto it, and only then find out that grub2 needs more free space before the first partition to be able to load the LVM module and boot. For a while I solved this with an external USB token plugged into the motherboard. But I said: no more!

I bought two 3 TB disks to deal with the situation, and this time I decided to do everything right and use UEFI boot instead of good old normal booting.

Disk layout

Model: ATA ST3000VX000-9YW1 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  512MB   512MB   fat32        primary
 2      512MB   20.0GB  19.5GB               primary
 3      20.0GB  30.0GB  9999MB  xfs          primary
 4      30.0GB  3001GB  2971GB  xfs          primary

So as you can see, I created 4 partitions. The first one is a special case: it must always be created for EFI boot. Create it larger than 200 MB, up to 500 MB, which should be enough for everyone.

The disk layout must be set up in parted, as we want a GPT layout (just google how to do it, it is damn easy to use). It accepts both absolute values like 1M or 1T and percentages like 4% to specify the resulting partition size.
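For reference, a parted session producing roughly the table above could look like this (the device name and boundaries are illustrative; the boot flag on GPT marks the EFI system partition):

```
parted -a optimal /dev/sda
(parted) mklabel gpt
(parted) mkpart primary fat32 0% 512M
(parted) mkpart primary ext4 512M 20G
(parted) mkpart primary xfs 20G 30G
(parted) mkpart primary xfs 30G 100%
(parted) set 1 boot on
(parted) quit
```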

Setting up the RAID

We just create simple nodes and plug /dev/sda2-4 and /dev/sdb2-4 into them. Prior to creating the RAID, make sure you have RAID support in your kernel.

for i in {2..4}; do mknod /dev/md${i} b 9 ${i}; mdadm --create /dev/md${i} --level=1 --raid-devices=2 /dev/sda${i} /dev/sdb${i}; done

After these commands are executed we have to watch mdstat until the sync is done (note that you can work with the md devices in the meantime; the initial sync will just take longer, as you will be writing to the disks at the same time).

Once we check mdstat and see that all the arrays are ready to play:

root@htpc: ~ # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty] 
md4 : active raid1 sda4[0] sdb4[1]
      2900968312 blocks super 1.2 [2/2] [UU]
md3 : active raid1 sda3[0] sdb3[1]
      9763768 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sda2[0] sdb2[1]
      19030679 blocks super 1.2 [2/2] [UU]

we can proceed with data copying.
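While waiting for the sync it is also worth recording the array definitions, so that they assemble with stable names on every boot (standard mdadm usage):

```
mdadm --detail --scan >> /etc/mdadm.conf
```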

Transferring the data and setting up the system

mkfs.ext4 /dev/md2 ; mkfs.xfs /dev/md3 ; mkfs.xfs /dev/md4 # create filesystems
mkdir -p /mnt/newroot/{home,var} # create the folder struct (home and var actually live on md4 and md3, so prepare mountpoints for them)
mount /dev/md2 /mnt/newroot
mount /dev/md3 /mnt/newroot/var
mount /dev/md4 /mnt/newroot/home

Now that we are ready, we will use rsync to transfer the live system and data (WARNING: shut down everything that tampers with data, like ftp/svn/git services). The only thing we are going to lose is a few lines of syslog and other log services.

rsync -av /home/ /mnt/newroot/home/ # trailing slashes copy the contents, not the directory itself; no -z as we don't need to compress
rsync -av /var/ /mnt/newroot/var/
rsync -av / --exclude '/home' --exclude '/dev' --exclude '/lost+found' --exclude '/proc' --exclude '/sys' --exclude '/var' --exclude '/mnt' --exclude '/media' --exclude '/tmp' /mnt/newroot/ # copy all relevant stuff to newroot
mkdir -p /mnt/newroot/{dev,proc,sys,mnt,media,tmp}

After the transfer you need to edit /etc/fstab to reflect the new disk layout. Update the kernel (if needed, to support the new RAID layout) and, if you did RAID like me, update /etc/default/grub to add domdadm to the default kernel command line.
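The resulting fstab would look roughly like this (filesystems as created above; the mount options are just my guesses, tune to taste):

```
/dev/md2   /          ext4   noatime          0 1
/dev/md3   /var       xfs    noatime          0 2
/dev/md4   /home      xfs    noatime          0 2
/dev/sda1  /boot/efi  vfat   noauto,noatime   0 2
```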

Preparing new boot over UEFI

On your machine you need to create a USB dongle which supports UEFI boot (you need to be UEFI-booted to set up UEFI boot [fcking hilarious]).

We need to download the latest 64bit archboot ISO (the Gentoo minimal image didn't contain this lovely feature).
Grab some USB disk and plug it into the machine. We will format it to FAT32: mkfs.vfat -F32 /dev/[myusb], mount it somewhere and copy the ISO image content onto the USB disk (you can enter the ISO in mc and just F5 it if you are lazy like me, but it also works with tar, p7zip or whatever else). Shut down the computer, unplug the old disks and, with manic laughter, turn the machine on again.

To boot over UEFI, just open the boot list menu and select the disk which has UEFI in its name. It will open a grub2 menu where you just select the first option. We should then be welcomed by the lovely Arch installer. Not caring about it, switch to another console and open a terminal. Set up the arrays again using mdadm --assemble.

for i in {2..4}; do mknod /dev/md${i} b 9 ${i}; mdadm --assemble /dev/md$i /dev/sda${i} /dev/sdb${i}; done

Then just proceed with mounting them under /mnt and chroot like you would for a fresh Gentoo install. Exact steps:

modprobe efivars # load the efi tool variables
mkdir -p /mnt/newroot/{home,var} # create the folder struct (home and var actually live on md4 and md3, so prepare mountpoints for them)
mount /dev/md2 /mnt/newroot
mount /dev/md3 /mnt/newroot/var
mount /dev/md4 /mnt/newroot/home
mount -o rbind /dev /mnt/newroot/dev
mount -o rbind /sys /mnt/newroot/sys
mount -t proc none /mnt/newroot/proc
chroot /mnt/newroot /bin/bash
. /etc/profile

Now that we are in the chroot we just install grub2 with GRUB_PLATFORMS="efi-64". After that we proceed easily by following the wiki article.
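The grub2 part inside the chroot boils down to something like this (a sketch; the EFI directory and exact command names depend on your grub2 version and layout):

```
echo 'GRUB_PLATFORMS="efi-64"' >> /etc/portage/make.conf
emerge -v sys-boot/grub:2
mount /dev/sda1 /boot/efi            # the fat32 EFI system partition
grub2-install --efi-directory=/boot/efi
grub2-mkconfig -o /boot/grub2/grub.cfg
```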

Unmount the disks, reboot the system, unplug the flash drive, …, profit?

AOO and Libreoffice standing next to each other

Well, not in terms of intentions and goals, as cooperation between these nice projects is still not perfect, but as applications on our beloved Gentoo.

Today I wasted a bit of my time to write wrappers for the openoffice-bin package, so it can be installed next to libreoffice or libreoffice-bin.

Insane how a few bash lines can solve stuff :-)

	# remove soffice bin
	rm -rf "${ED}${EPREFIX}/usr/bin/soffice"

	# replace all symlinks by bash shell code in order to nicely cope with
	# libreoffice
	cd "${ED}${EPREFIX}/usr/bin/"
	for i in oo*; do
		[[ ${i} == ooffice ]] && continue

		rm ${i}
		# unquoted heredoc on purpose: ${EPREFIX} and ${i} expand now,
		# \$@ stays for the generated wrapper; the wrapper calls the
		# real binary (oocalc -> scalc and friends)
		cat >> ${i} << EOF
#!/usr/bin/env bash
pushd "${EPREFIX}/usr/lib64/openoffice/program" > /dev/null
./s${i#oo} "\$@"
popd > /dev/null
EOF
		chmod +x ${i}
	done

Portage can't handle the blockers without revbumps/rebuilds, so I updated it in the live/branch ebuild, and with the next releases (3.5 next week, 3.6 in 2 weeks) there won't be any collisions and you can enjoy comparing these two suites against each other. For the binary package I was just too lazy, so just re-emerge it if you want to enjoy this.

Note: plugin install and handling is still not fully tested in situations where you have both implementations around, but the eclass was written with that in mind, so just try it and report bugs if it does not work. Although there is one case I didn't test at all -> what happens when one removes one of the implementations and tries to reinstall the extension. It should properly register itself under the only remaining one, but the files will still be kept in /usr/lib64/IMPLEMENTATION/…/extensions/install/ and registered in the user config dir. Maybe we could run this deregistration on package uninstall (portage can detect those)…

A picture instead of a final paragraph, to show how nicely it works:
lo and aoo together