In this little article I'd like to describe how to do a completely new installation of a recent Linux distro on a remote root server to which only SSH access is possible.

The most important requirement is something like a rescue system which can be switched to via a web interface and boots into memory, so that we have full access to the hard drives.

Author/Date: Marc Schiffbauer / 2007/03/01


The plan:

  • partition hard disks as needed
  • Software-RAID-1 (we have two SATA disks)
  • LVM2
  • use filesystems as needed
  • clean and fresh installation of Debian (Etch will be released soon, hopefully)


start rescue system

First, boot into the x86_64 rescue system and log in as root. You should now have full access to the two SATA hard drives (/dev/sda and /dev/sdb).

Partition disks

Now use fdisk to partition the first disk so that it contains only one big primary partition of type fd (Linux raid autodetect):

rescue:~# fdisk /dev/sda
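
Inside fdisk the dialogue looks roughly like this (a sketch; the exact prompts differ between fdisk versions):

 Command (m for help): n                 <- new partition
 Command action: p                       <- primary
 Partition number (1-4): 1
 First cylinder: <Enter>                 <- accept the default
 Last cylinder: <Enter>                  <- accept the default, use the whole disk
 Command (m for help): t                 <- change the partition type
 Hex code (type L to list codes): fd     <- Linux raid autodetect
 Command (m for help): w                 <- write the table and exit

Afterwards, verify the result: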
rescue:~# fdisk -l /dev/sda

Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1               1       38913   312568641   fd  Linux raid autodetect
rescue:~#

Now clone the partition table to the second disk:

 rescue:~# sfdisk -d /dev/sda | sfdisk /dev/sdb
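
To verify that both disks now carry an identical partition table:

 rescue:~# fdisk -l /dev/sdb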

RAID setup

Create the disk mirror (RAID-1):

rescue:~# mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sda1 /dev/sdb1
mdadm: array /dev/md0 started.

Now the array should be in (re-)build process:

rescue:~# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      312568576 blocks [2/2] [UU]
      [>....................]  resync =  1.0% (3150784/312568576) finish=63.0min speed=81756K/sec

unused devices: <none>
rescue:~#
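
mdadm itself can give a more detailed view of the array if you want to check its state:

 rescue:~# mdadm --detail /dev/md0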

LVM setup

We can already work with the fresh disk array, so we now put LVM on top of it:

Create physical volume:

 rescue:~# pvcreate /dev/md0
   Physical volume "/dev/md0" successfully created

Create volume group:

 rescue:~# vgcreate -s 64M vg00 /dev/md0
   /etc/lvm/backup: fsync failed: Invalid argument
   Volume group "vg00" successfully created

Create logical volumes (virtual partitions):

(Output is “beautified”: each lvcreate additionally complained twice with “/etc/lvm/backup: fsync failed: Invalid argument”.)

rescue:~# lvcreate -n boot -L 100M vg00
  Rounding up size to full physical extent 128.00 MB
  Logical volume "boot" created

rescue:~# lvcreate -n root -L 3G vg00
  Logical volume "root" created

rescue:~# lvcreate -n usr -L 3G vg00
  Logical volume "usr" created

rescue:~# lvcreate -n var -L 5G vg00
  Logical volume "var" created

rescue:~# lvcreate -n home -L 10G vg00
  Logical volume "home" created

rescue:~# lvcreate -n srv -L 200G vg00
  Logical volume "srv" created

rescue:~# lvcreate -n swap -L 2G vg00
  Logical volume "swap" created

rescue:~# lvcreate -n tmp -L 6G vg00
  Logical volume "tmp" created

rescue:~#
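
To double-check the new layout, the usual LVM reporting tools can be used (assuming the rescue system's lvm2 ships them):

 rescue:~# pvs
 rescue:~# vgs
 rescue:~# lvs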

creating Filesystems and Swap

Format swap space:

 rescue:~# mkswap /dev/vg00/swap
 Setting up swapspace version 1, size = 2147479 kB
 no label, UUID=26ea9057-c060-4a6f-b8e3-ee5231359326

I use ext3 for /boot and XFS for the rest:

rescue:~# mke2fs -j /dev/vg00/boot
rescue:~# mkfs.xfs /dev/vg00/root
rescue:~# mkfs.xfs /dev/vg00/usr
rescue:~# mkfs.xfs /dev/vg00/var
rescue:~# mkfs.xfs /dev/vg00/srv
rescue:~# mkfs.xfs /dev/vg00/home
rescue:~# mkfs.xfs /dev/vg00/tmp

install new System

mount filesystems

create a root mountpoint and mount the new root to it:

 rescue:~# mkdir /newsys
 rescue:~# mount /dev/vg00/root /newsys/

create all other mountpoints under the new root and mount all remaining new filesystems:

 rescue:~# for d in boot usr var srv home tmp; do
 > mkdir /newsys/$d
 > mount /dev/vg00/$d /newsys/$d
 > done
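
A quick check that everything ended up mounted where expected:

 rescue:~# df -h | grep /newsys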

install base system

we use the 'debootstrap' tool to install the base system:

 rescue:~# debootstrap --arch amd64 etch /newsys

This did not work out of the box because the rescue system was sarge-based, so this command was missing an important file:

 E: No such script: /usr/lib/debootstrap/scripts/etch

so we need a newer debootstrap package:

 rescue:~# wget <url of a newer debootstrap package>

install it:

 rescue:~# dpkg -i debootstrap_0.3.3.2_all.deb

and again:

rescue:~# debootstrap --arch amd64 etch /newsys http://<your mirror>
I: Retrieving Release
I: Retrieving Packages
I: Configuring apt-utils...
I: Configuring klogd...
I: Configuring tasksel-data...
I: Configuring sysklogd...
I: Configuring tasksel...
I: Base system installed successfully.

change root

go into the new root:

 rescue:~# chroot /newsys

create fstab

 rescue:~# vi /etc/fstab


# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/vg00/root  /               xfs     defaults        0       1
/dev/vg00/swap  none            swap    defaults        0       0
/dev/vg00/boot  /boot           ext3    defaults        0       2
/dev/vg00/usr   /usr            xfs     defaults        0       2
/dev/vg00/var   /var            xfs     defaults        0       2
/dev/vg00/srv   /srv            xfs     defaults        0       2
/dev/vg00/home  /home           xfs     defaults        0       2
/dev/vg00/tmp   /tmp            xfs     defaults        0       2
proc            /proc           proc    defaults,noauto 0       0

update package lists

This is needed before installing the first packages so that aptitude will not complain about “untrusted packages”.

 rescue:~# aptitude update

install kernel

 rescue:~# aptitude install linux-image-2.6-amd64

Answer “yes”, then “no”.

create raid config

 rescue:~# aptitude install mdadm

(There will be some warnings and an error here; ignore them this time.)

 rescue:~# vi /etc/mdadm/mdadm.conf

add an ARRAY line for our raid:

 ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1

do not touch any of the lines already in that file.
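
Alternatively, mdadm can generate that line itself; appending its scan output (and reviewing the result) works just as well:

 rescue:~# mdadm --examine --scan >> /etc/mdadm/mdadm.conf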

install bootloader

lilo can handle root raid!

exit the chroot:

 rescue:~# exit

bind-mount /dev so that we have the LVM device nodes inside the chroot:

 rescue:~# mount /dev /newsys/dev -o bind

go into the chroot again:

 rescue:~# chroot /newsys/

mount /proc and create a proper /etc/mtab:

rescue:~# mount /proc
rescue:~# rm /etc/mtab
rescue:~# ln -s /proc/mounts /etc/mtab

install lilo and lvm2

 rescue:~# aptitude install lilo lvm2

configure lilo

 rescue:~# vi /etc/lilo.conf

vga = normal    # force sane state

# End LILO global Section

image = /vmlinuz
  label = linux
  initrd = /initrd.img
  root = /dev/vg00/root

modify initrd for first boot

We need a little hack to make the first boot from disk succeed.

(Note: this may be due to the rescue system using different major numbers for LVM than our final system. I read something about such a case, but did not check whether it applies here, too.)

 rescue:~# vi /etc/initramfs-tools/scripts/local-premount/tmp_hack

#!/bin/sh

PREREQ=""

prereqs()
{
        echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
        prereqs
        exit 0
        ;;
esac

modprobe -q xfs
mount /dev/vg00/root /root

exit 0

 rescue:~# chmod 0755 /etc/initramfs-tools/scripts/local-premount/tmp_hack
 rescue:~# update-initramfs -u

configure network

add network configuration

 rescue:~# vi /etc/network/interfaces

# Loopback device:
auto lo
iface lo inet loopback

# device: eth0
auto eth0
iface eth0 inet static
        address <your ip>
        netmask <your netmask>
        # default route to access subnet:
        up route add -net <your subnet> netmask <your netmask> gw <your gateway> eth0

set root password

rescue:~# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

install ssh

install the SSH daemon so that we are able to log in:

 rescue:~# aptitude install openssh-server

configure keyboard layout

 rescue:~# aptitude install console-data console-common console-tools

(Accept defaults everywhere)

To choose the German keyboard layout:

 rescue:~# dpkg-reconfigure console-data
  • Select keymap from arch list
  • qwertz
  • German
  • Standard
  • latin1

configure locales

 rescue:~# aptitude install locales


 rescue:~# dpkg-reconfigure locales

I chose:

 [*] de_DE ISO-8859-1
 [*] de_DE.UTF-8 UTF-8
 [*] de_DE@euro ISO-8859-15
 [*] en_US ISO-8859-1
 [*] en_US.ISO-8859-15 ISO-8859-15
 [*] en_US.UTF-8 UTF-8


and “en_US.UTF-8” as the default.

configure hostname

Put the hostname (without domain) into /etc/hostname:

 rescue:~# vi /etc/hostname

configure timezone

Set the timezone using tzconfig:

 rescue:~# tzconfig

Answer “y”. For the MET timezone in Germany choose 8, then enter “Berlin”.

create /etc/hosts

 vesta:~# vi /etc/hosts

127.0.0.1       localhost

edit/create /etc/resolv.conf

edit resolv.conf so that it contains your domain name and DNS server IP addresses:

 vesta:~# vi /etc/resolv.conf
search <your domain>
domain <your domain>
nameserver <your dns server ip>


Now reboot the system; it should come up and you should be able to log in. Remember to leave the chroot and unmount all filesystems first, for example like this (a sketch; adjust to your mounts):
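
 rescue:~# exit
 rescue:~# umount /newsys/proc /newsys/dev
 rescue:~# for d in tmp home srv var usr boot; do umount /newsys/$d; done
 rescue:~# umount /newsys

Then: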

  rescue:~# reboot

Some cleanups

Now that the new system has booted from the local disk successfully, we clean up the initrd (remove the hack again). But before that we save that special initrd and add a fallback entry for it to lilo.conf. If we ever need to call lilo from the rescue system, we can use that “fallback” entry to boot the system afterwards.

So, extend lilo.conf by this:

# fallback kernel, use it if you called lilo from the rescue system
image = /boot/vmlinuz-2.6.18-4-amd64-fallback
  label = fallback
  initrd = /boot/initrd.img-2.6.18-4-amd64-fallback
  root = /dev/vg00/root


copy kernel and initrd to the right place:

 rescue:~# cp /boot/vmlinuz-2.6.18-4-amd64 /boot/vmlinuz-2.6.18-4-amd64-fallback
 rescue:~# cp /boot/initrd.img-2.6.18-4-amd64 /boot/initrd.img-2.6.18-4-amd64-fallback
 rescue:~# lilo

Remove the hack from the default initrd:

 rescue:~# rm /etc/initramfs-tools/scripts/local-premount/tmp_hack
 rescue:~# update-initramfs -u

Now if you want lilo to choose the fallback config once (while in rescue system), simply call

 rescue:~# lilo -R fallback

Final tuning

Adjust RAID synchronisation speed

(this applies to slow systems only; on an AMD64 dual-core with 2x 320 GB SATA disks this was a non-issue)

By default, your software RAID synchronises at the maximum speed possible. If one day you have the bad luck that the mirror breaks and you re-add the removed disk, the resynchronisation degrades your system performance quite heavily. To avoid this, you should set a reasonable maximum value.

As a reminder, we look back to the initial sync:

rescue:~# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      312568576 blocks [2/2] [UU]
      [>....................]  resync =  1.0% (3150784/312568576) finish=63.0min speed=81756K/sec

unused devices: <none>
rescue:~#

The sync speed is more than 80 MByte/sec here - that's good!

The speed settings for software RAID rebuilds can be controlled through two entries in /proc (here we see the kernel default values, in K/sec units):

rescue:~# cat /proc/sys/dev/raid/speed_limit_max
200000
rescue:~# cat /proc/sys/dev/raid/speed_limit_min
1000

The maximum value should be set to something between 1/4 and 1/2 of the maximum speed. The best way to set a new value for a /proc entry is to add it to /etc/sysctl.conf; the command “sysctl -p” is run automatically on each reboot:

rescue:~# echo "dev.raid.speed_limit_max = 40000" >>/etc/sysctl.conf
rescue:~# sysctl -p
dev.raid.speed_limit_max = 40000
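
To apply the new limit immediately without waiting for a reboot, sysctl can also write the value directly:

 rescue:~# sysctl -w dev.raid.speed_limit_max=40000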

mschiff 19:29, 2 Mar 2007 (CET)
