AIR Wiki : ChangeLogVanGogh2004


Changelog VanGogh Cluster 2004


Removed "ntp[1-2].sara.nl" from the NTP server setup.

2004-12-20 by Hans Blom
Why: Both NTP servers have been unavailable for a long period.
Where: Node "vangogh0".
How:
Removed "145.100.5.30" and "145.100.5.47" from /etc/ntp/step-tickers and commented them out in /etc/ntp.conf.

Set the output of the uname command in the OS_NAME environment variable, which replaces the ARCH environment variable from now on.

2004-12-03 by Hans Blom
Why: Some Linux distributions already use ARCH for the machine name.
How:
Edit /etc/csh.cshrc and /etc/bashrc.user accordingly. Temporarily, ARCH will be set equal to OS_NAME.
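
A minimal sketch of the corresponding resource-file lines (the exact lines were not recorded):
# /etc/csh.cshrc
setenv OS_NAME `uname`
setenv ARCH $OS_NAME    # temporary, for backward compatibility
# /etc/bashrc.user
export OS_NAME=$(uname)
export ARCH=$OS_NAME    # temporary, for backward compatibility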

Install the Network Diagnostic Tool (NDT).

2004-11-15 by Hans Blom
Why: To analyse network connections using the Web100 kernels.
Where: Nodes "vangogh[0-8]" "wgsara2".
How:
Use the required library from the Web100 userland utilities that are installed in /usr/local/Web100_Userland. The following commands have been run to configure, make and install the tools:

cd Space/Install
tar xfvz ~blom/Install/mod_ndt-3.0.23.tar.gz
cd mod_ndt-3.0.23
setenv CFLAGS \
    '-g -O2 -I/usr/local/Web100_Userland/include/web100'
setenv LDFLAGS \
    '-g -O2 -L/usr/local/Web100_Userland/lib'
setenv LDFLAGS \
    "$LDFLAGS -Wl,-rpath,/usr/local/Web100_Userland/lib"
./configure --prefix=/usr/local/NDT
make

su

cd ~blom/Space/Install/mod_ndt-3.0.23
make install
conf/create-html.sh

Please note that the loader option -Wl,-rpath,... implies that the Web100 Userland lib directory will be added to the shared object search path of the executable.

Also the man configuration files have been adjusted. The Userland and NDT utilities will only be added to the path when a Web100 kernel is running.

Added routes for SC2004 to cavewave, EVL hosts, and demo hosts.

2004-11-08 by Paola Grosso, Freek Dijkstra, Hans Blom
Why: Route these subnets via the demo interface.
Where: Nodes "vangogh[1-2]", "vangogh[4-8]".
How:
Add routes with:
route add -net <net> netmask <netmask> gw 192.168.84.1
...
To add the routes also after a reboot add to /usr/local/etc/sysconfig/network-scripts/route-eth2:
-net <net> netmask <netmask> gw 192.168.84.1
...
See also the add/delete routes item at 2004-05-25.

Install the Time::HiRes Perl module.

2004-11-02 by Hans Blom
Why: To get support for time functions like gettimeofday().
Where: Nodes "vangogh[0-8]".
How:
The RPM has been installed using the command:
rpm -Uvh [--dbpath ~root/rpm_db_vangogh0/rpm] \
    perl-Time-HiRes-1.38-3.i386.rpm
(--dbpath option not on "vangogh0").
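
The installation can be verified with, for example:
perl -MTime::HiRes=gettimeofday -e 'print join(".", gettimeofday), "\n"'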

Configure an alias of eth2 in VLAN 7 that is also started at system boot.

2004-11-01 by Hans Blom
Why: To be used at SC 2004.
Where: Nodes "vangogh[0-8]".
How:
The following steps have been executed:
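The individual steps were not recorded. A plausible sketch, based on the VLAN 7 alias files from the 2004-05-25 entry (values for "vangogh5"), is an alias config. file in /etc/sysconfig/network-scripts with ONBOOT enabled:
DEVICE=eth2:1
ONBOOT=yes
IPADDR=145.146.100.39
NETMASK=255.255.255.192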

Install Mozilla 1.7 and JDK 1.5.0 and set the Java plugin for Mozilla.

2004-10-05 by Hans Blom
Why: To be able to browse HTML pages containing applets.
Where: Nodes "vangogh[0-8]" and "wgsara[1-2]".
How:
The following steps have been executed:
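The individual steps were not recorded. The usual plugin setup for this combination is a symbolic link from the Mozilla plugins directory to the JDK plugin; the paths below are assumptions based on the stated versions:
ln -s /usr/local/jdk1.5.0/jre/plugin/i386/ns7/libjavaplugin_oji.so \
    /usr/local/mozilla/plugins/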

Change the Cricket configuration such that "vangogh0" will also sample the Force10 switch.

2004-09-23 by Hans Blom
Why:
The Force10 switch has been moved from NIKHEF to SARA and should therefore directly be sampled by "vangogh0".
Where: Node "vangogh0".
How:
Previously the Cricket configuration was a mirror of the configuration at "fs2" and was only used to present the data via the Web. For that reason the RRD data files were copied from "fs2" to "vangogh0" using rsync. The following steps were taken to change this functionality:
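The steps themselves were not recorded. Presumably the rsync mirror job was disabled and the standard Cricket collector crontab was enabled locally for the "cricket" user, along the lines of:
*/5 * * * * $HOME/cricket-1.0.4/collect-subtrees normal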

Install SNMP utilities.

2004-09-23 by Hans Blom
Why:
As "vangogh0" will also be used now to sample Cricket data there is need for manual SNMP sample tools when diagnostics are needed.
Where: All "VanGogh" nodes.
How:
Install with rpm the packages net-snmp-utils-5.0.6-17.i386.rpm, net-snmp-devel-5.0.6-17.i386.rpm, and php-snmp-4.2.2-17.i386.rpm (net-snmp-5.0.6-17.i386.rpm was already installed).
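
A typical diagnostic use of these tools (the community string and switch name are placeholders):
snmpwalk -v 2c -c <community> <switch> ifDescr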

Rename /root/.bash_logout into /root/.bash_logout.orig.

2004-09-15 by Hans Blom
Why:
That resource file contains clear as its only command. Without it we will be able to see the concatenated output of root sessions at multiple hosts.
Where: All "VanGogh" nodes.
How: mv -i /root/.bash_logout /root/.bash_logout.orig

Install Globus 3.2

2004-06-18 by Freek Dijkstra
Why: Request of UCSD for a demonstration
Where: All "VanGogh" nodes; vangogh0 is the master node
How: See Globus3Installation.

Change the domain name from the unsupported ".saradomain" to the DNS supported ".uva.netherlight.nl".

2004-06-09 by Hans Blom
Why: Reverse DNS lookup is required by the Globus Toolkit.
Where: Nodes "vangogh0-8" "wgsara1-2".
How:
  1. Place in /etc/hosts for each node <node> "<node>.uva.netherlight.nl" before the existing "<node>.saradomain".
  2. At "vangogh0": change in /etc/exports ".saradomain" into ".uva.netherlight.nl".
  3. At "vangogh0": do the same for /etc/hosts.allow.
  4. Reboot the system.
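
For illustration, an /etc/hosts entry for the server node then looks like (the management address is taken from the 2004-02-03 NTP entry):
195.169.124.34   vangogh0.uva.netherlight.nl vangogh0.saradomain vangogh0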

Configure the second Gigabit interface.

2004-06-08 by Hans Blom
Why:
For the purpose of switching tests on the Gigabit connections with two other "VanGogh" nodes.
Where: Node "vangogh4".
How:
The intention is to use 10.1.1.114 in the 10.1.0.0/16 Calient test subnet for connections with arbitrary VanGogh nodes, and the IP address assigned to the second interface, 10.1.1.122, for the connection with a specific node, 10.1.1.115. To do this the following actions have been taken:
  1. Configure the second interface with a /32 netmask, such that no subnet routes will be added, in /etc/sysconfig/network-scripts/ifcfg-eth3:
    DEVICE=eth3
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.1.1.122
    NETMASK=255.255.255.255
    TYPE=Ethernet
    USERCTL=no
    PEERDNS=no

  2. Add a host route to the specific test host "10.1.1.115" by adding the following line to /usr/local/etc/sysconfig/network-scripts/route-eth3:
    -host 10.1.1.115 dev eth3

  3. Run ifup eth3 to configure the interface and add the host route, for which /sbin/ifup-local will be called. With these configuration files the second interface will also be configured at system boot.

Erase lilo RPM package.

2004-05-27 by Hans Blom
Why:
The boot loader grub is used, so lilo is not needed; its presence might therefore create confusion.
Where: Nodes "vangogh0-8".
How:
  1. Erase RPM:
    • "vangogh0":
      rpm -ev lilo
    • "vangogh1-8":
      rpm -ev --dbpath ~root/rpm_db_vangogh0/rpm lilo
  2. rm /etc/lilo.conf (Config file was not removed by rpm -e)

Install a man page that describes the command required to bring up and down the Netherlight interfaces.

2004-05-25 by Hans Blom
Why:
To list for authorised users the commands to bring up and down the interface aliases related with Netherlight.
Where: Nodes "vangogh1-8".
How:
Unpack NL_VLAN_Man.tar.gz, go to the installation root directory and run as root "make install". This installs a man page nl_vlan.7 describing the required commands.

Install ifup_user and ifdown_user scripts.

2004-05-25 by Hans Blom
Why:
To let non-root, authorised users bring up and down interfaces. This is especially intended to bring up and down the Netherlight interface aliases (see two items below).
Where: Nodes "vangogh1-8".
How:
Unpack IFUpDown_User.tar.gz, go to the installation root directory and run as root make install. Please note that in these scripts the regular RedHat configuration files will be used, so configuration of the interfaces and static routes will not be affected.

Install (new) versions of ifconfig_user and route_user.

2004-05-25 by Hans Blom
Why: To let non-root, authorised users manage interfaces and static routes.
Where: Nodes "vangogh0-8".
How:
Unpack IFConfig_Route_User.tar.gz, go to the installation root directory and run make and, as root, make install.

Install config. files to bring up/down, at run time, the interface aliases connected via the Calient to Netherlight.

2004-05-25 by Hans Blom
Why:
To be able to configure the interfaces dynamically in the subnets related with the Netherlight VLAN 7 & 10.
Where: Nodes "vangogh1-8".
How:
Install in the directory /etc/sysconfig/network-scripts the files ifcfg-eth2-v{7,10}. This implies that first the config. file ifcfg-eth2 is read, which can then be overridden by these files. The alternative of directly creating interface alias files ifcfg-eth2:{1,2} did not seem to work, because it did not allow bringing up (at boot) only eth2 without the interface aliases. The content for "vangogh5" is for instance:
ifcfg-eth2-v7:
DEVICE=eth2:1
ONBOOT=no
IPADDR=145.146.100.39
NETMASK=255.255.255.192
NETWORK=145.146.100.0
BROADCAST=145.146.100.63

ifcfg-eth2-v10:
DEVICE=eth2:2
ONBOOT=no
IPADDR=145.146.100.231
NETMASK=255.255.255.192
NETWORK=145.146.100.192
BROADCAST=145.146.100.255
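
An alias can then be brought up or down at run time with, presumably, commands of the form (as root, or via the ifup_user/ifdown_user scripts from the item above):
ifup eth2-v7
ifdown eth2-v7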

Add scripts to add/delete static (Netherlight) routes.

2004-05-25 by Hans Blom
Why:
To be able to route the Netherlight VLAN 7 & 10 related subnets via the interface connected with Netherlight.
Where: Nodes "vangogh1-8".
How:
  1. Install one or two config. files route-eth2:{1,2} in dir. /usr/local/etc/sysconfig/network-scripts containing the route arguments that should be specified after the command route {add,del}. Their content is:
    • route-eth2:1:
      -net 145.146.100.192 netmask 255.255.255.192 dev eth2
    • route-eth2:2:
      -net 145.146.100.0 netmask 255.255.255.192 dev eth2
  2. Install scripts /sbin/if{up,down}-local (ifdown-local is a symbolic link to ifup-local) that will be called by the default system /sbin/if{up,down} scripts with the interface (alias) (here eth2:{1,2}) as the only argument, to add or delete these routes using the config. files mentioned under 1. Please note that the default RedHat procedure of installing config. files in /etc/sysconfig/network-scripts/route-<if_name> is not sufficient for us, because the routes are not deleted when the interface is brought down.

Install config. files to configure the interface connected with the Calient at boot time.

2004-05-24 by Freek Dijkstra
Why: To have the Calient connection correctly configured at system boot.
Where: Nodes "vangogh0-8".
How:
  1. Edit config. files /etc/sysconfig/network-scripts/ifcfg-eth2. The content for "vangogh5" is for instance:
    DEVICE=eth2
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.1.1.115
    NETMASK=255.255.0.0
    TYPE=Ethernet
    USERCTL=no
    PEERDNS=no
    NETWORK=10.1.0.0
    BROADCAST=10.1.255.255

  2. Run ifup eth2 to bring the interface up.

Build and install Web100 kernel.

2004-05-11 by Freek Dijkstra
Why: To be able to use the Web100 network features.
Where: Nodes "vangogh0-8".
How: See "Web100KernelInstallation".

Create "web100" group and add the appropriate group members.

2004-05-10 by Hans Blom
Why: To be able to read "web100" protected kernel information below /proc/web100.
How:
  1. Run groupadd -g 11000 web100 to create the group.
  2. Run vigr to edit /etc/group for adding the members.

Create directory: /var/cache/man.

2004-05-06 by Hans Blom
Why: Otherwise the whatis database could not be created by the corresponding cron script.
Where: Nodes "vangogh1-3", "vangogh8" (others were already changed).
How: Run mkdir /var/cache/man.

Install man page describing the availability of the "talk" service at the "VanGogh".

2004-05-06 by Hans Blom
Why: To inform the users in what way "talk" is available at the cluster.
Where: Nodes "vangogh1-3", "vangogh8", "wgsara1-2" (others were already changed).
How: See corresponding changelog text at 2004-03-26.

Automount the user home directories now from disk 2.

2004-05-04 by Hans Blom
Why: The user home directories have been copied at server "vangogh0" to disk 2.
Where: Nodes "vangogh1-3", "vangogh8" (others were already changed).
How: See the corresponding changelog text at 2004-04-13.

Enabled Trivial FTP for the Calient switch.

2004-04-23 by Hans Blom
Why: To enable the transfer of files from the Calient switch.
Where: Node "vangogh0".
How:
The in.tftpd server had already been enabled in xinetd. Only the Calient had to be allowed in the TCP wrapper for in.tftpd by adding the following line to /etc/hosts.allow:
in.tftpd:  beautycees.saradomain

Automount the user home directories now from disk 2.

2004-04-13 by Hans Blom
Why: The user home directories have been copied at server "vangogh0" to disk 2.
Where: Nodes "vangogh4-7" (others were currently not available).
How:
  1. Change in /etc/amd.home the phrase rfs:=/home into rfs:=/disk2/home.
  2. Reload the automounter configuration using the command line: service amd reload.

Mount the second disk and use it for the (NFS mounted) home directories.

2004-04-13 by Hans Blom
Why:
To bring the disk usage more in balance with that of the nodes without NFS mounted home directories.
Where: Node "vangogh0".
How:
  1. Mount disk 2 at the mount point /disk2, using the device /dev/sdb2, by adding the following line to /etc/fstab:
    /dev/sdb2   /disk2   ext3   defaults   1 2
  2. Reboot the system.
  3. Copy the (NFS mounted) user directories <dir> to disk 2 with the following command lines:
    cd /home
    tar cf - <dir> | ( cd /disk2/home && tar xfv - )
  4. Adjust the appropriate user home directories from /home/... to /disk2/home/... by running vipw to edit /etc/passwd.
  5. Export the disk 2 NFS mounted home directories by changing in /etc/exports the line:
    /home/nfs  *.saradomain(rw)
    into:
    /disk2/home/nfs  *.saradomain(rw)
  6. Reboot the system.

Create directory: /var/cache/man.

2004-03-26 by Hans Blom
Why: Otherwise the whatis database could not be created by the corresponding cron script.
Where: Nodes "vangogh0", "vangogh4-7" (others where currently not available)
How: Run mkdir /var/cache/man.

Install man page describing the availability of the "talk" service at the "VanGogh".

2004-03-26 by Hans Blom
Why: To inform the users in what way "talk" is available at the cluster.
Where: Nodes "vangogh0", "vangogh4-7" (others where currently not available)
How:
Edit the directives in the Makefile and run make install from the Talk_VanGogh installation directory.

Enable the "talk" service. Limit requests to the current host.

2004-03-24 by Hans Blom
Why: To allow users to communicate more directly with each other.
Where: Node "vangogh0".
How:
  1. Enable the "talk" and "ntalk" service (for some reason both are required) by changing in /etc/xinetd.d/talk and /etc/xinetd.d/ntalk the line
    disable = yes
    into
    disable = no
  2. Allow access from the local host by adding the following line to /etc/hosts.allow:
    in.ntalkd: vangogh0.saradomain
  3. Restart xinetd using the command: service xinetd restart.

Modified the MPI machinefile such that it contains the current cluster nodes.

2004-02-10 by Hans Blom
Why: To be able to run MPI programs at the cluster.
Where: All "VanGogh" nodes.
How:
Put all "VanGogh" hostnames (as listed by the output of the hostname program), separated by newlines, to the file /usr/local/MPI/share/machines.LINUX.

Updated the Web directory tree to the used monitors and available documentation.

2004-02-06 by Hans Blom
Why: To make the content visible from the Web.
Where: Node "vangogh0".
How:
Edit the HTML files below the HTML root /var/www/html and create the required subdirectories and symbolic links. Change /etc/httpd/conf/httpd.conf so that it is allowed to follow symbolic links below /var/www/html.

Installed J2SE V. 1.4.2 SDK and corresponding documentation.

2004-02-06 by Hans Blom
Why: Java was not yet installed and RPM versions are often too old.
Where: All "VanGogh" nodes.
How:
  1. Run the self-extracting script j2sdk-1_4_2_03-linux-i586.bin in /usr/local.
  2. Extract the ZIP compressed documentation j2sdk-1_4_2-doc.zip in /usr/local/j2sdk1.4.2_03.
  3. Define in the resource files /etc/csh.cshrc and /etc/bashrc.user the environment variable JAVA_HOME and add the Java bin directory to the path.
  4. Add the corresponding man path mapping to /etc/man.config.
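
A minimal sketch of these additions (the exact lines were not recorded):
# /etc/csh.cshrc
setenv JAVA_HOME /usr/local/j2sdk1.4.2_03
set path = ( $JAVA_HOME/bin $path )
# /etc/bashrc.user
export JAVA_HOME=/usr/local/j2sdk1.4.2_03
export PATH=$JAVA_HOME/bin:$PATH
# /etc/man.config
MANPATH_MAP /usr/local/j2sdk1.4.2_03/bin /usr/local/j2sdk1.4.2_03/man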

Configured the NTP servers at the client node to query the NTP server at server node "vangogh0".

2004-02-03 by Hans Blom
Why:
make runs across multiple NFS mounted filesystems need more or less synchronised clocks.
Where: Nodes "vangogh[1-8]".
How:
  1. Put "195.169.124.34" in /etc/ntp/step-tickers.
  2. Add the following lines to /etc/ntp.conf:
    #
    # Restrict access to the NTP cluster server: it need not
    # query this server.
    #
    restrict 195.169.124.34 nomodify notrap noquery
    #
    # Specify the NTP cluster server to contact.
    #
    server 195.169.124.34

Configured the NTP server that is also used for time queries from the cluster nodes.

2004-02-03 by Hans Blom
Why:
make runs across multiple NFS mounted filesystems need more or less synchronised clocks.
Where: Node "vangogh0".
How:
  1. Put the following time servers, one per line, in /etc/ntp/step-tickers:
    145.100.5.30
    145.100.5.47
    145.100.3.18
    146.50.4.20
    131.211.32.72
  2. Add the following lines to /etc/ntp.conf:
    #
    # Restrict access to the higher stratum NTP servers that
    # should be queried: they need not query this server.
    #
    restrict 145.100.5.30 nomodify notrap noquery
    restrict 145.100.5.47 nomodify notrap noquery
    restrict 145.100.3.18 nomodify notrap noquery
    restrict 146.50.4.20 nomodify notrap noquery
    restrict 131.211.32.72 nomodify notrap noquery
    #
    # Specify the higher stratum NTP servers to contact.
    #
    server 145.100.5.30    # ntp1.sara.nl
    server 145.100.5.47    # ntp2.sara.nl
    server 145.100.3.18    # ntp3.sara.nl
    server 146.50.4.20     # init.science.uva.nl
    server 131.211.32.72   # ntp0.phys.uu.nl
    #
    # Undisciplined Local Clock. This is a fake driver intended
    # for backup and when no outside source of synchronised time
    # is available. The default stratum is usually 3, but a high
    # stratum value is used here such that this driver is never
    # used for synchronisation unless no other source is available.
    #
    server 127.127.1.0             # Local clock.
    fudge  127.127.1.0 stratum 10
    #
    # Restrict access to the lower stratum NTP servers from the
    # cluster subnet that
    # should query this server.
    #
    restrict 195.169.124.32 mask 255.255.255.224 nomodify notrap

Created "smokping" users.

2004-01-30 by Hans Blom
Why: To be able to run remote fping commands by the "SmokePing" user.
Where: Nodes "vangogh[1-8]".
How: Create/duplicate the users with the add_cluster_user script.

Installed the SmokePing latency monitor package.

2004-01-30 by Hans Blom
Why:
To be able to monitor the RTT between hosts, seeing in one view the latency, percentage lost, and RTT distribution.
Where: Node "vangogh0".
How:
  1. Create a "smokping" user under which account "SmokePing" will be installed.
  2. Unpack the distribution in the home directory of the "SmokePing" user.
  3. Install "SmokePing" according to the instructions that are listed in smokeping-1.25/doc/smokeping_install.txt.
  4. Install our "SshFping" probe by copying SshFping.pm, contained in SmokePing_Probe.tar.gz, to smokeping-1.25/lib/probes.
  5. Edit the "SmokePing" configuration in smokeping-1.25/etc/config.
  6. Adjusted the HTTPD configuration file /etc/httpd/conf/httpd.conf such that in the directory /var/www/html/smokeping CGI-Bin scripts are allowed and symbolic links will be followed.
  7. Change the owner and group of the image cache directory /var/www/html/.smokeping_image_cache to the CGI-Bin user (apache:apache) such that the CGI-Bin script could create the required directories and image files.
  8. Install the "smokping" "Lib" files by unpacking SmokePing_Lib.tar.gz in ~smokping.
  9. Run crontab ~smokping/Lib/Crontab/crontab_input to check every 15 minutes if the smokeping monitor script is running.

Installed the Perl RPM packages that are required for Cricket.

2004-01-30 by Hans Blom
Why: To keep the distribution at the nodes consistent.
Where: Nodes "vangogh[1-8]".
How:
Installed the Perl RPM packages that are required for Cricket, using the database that has been copied from "vangogh0":
rpm -Uvh --dbpath ~root/rpm_db_vangogh0/rpm \
    perl-CGI-2.81-88.i386.rpm
rpm -Uvh --dbpath ~root/rpm_db_vangogh0/rpm \
    perl-DB_File-1.804-88.3.i386.rpm
rpm -Uvh --dbpath ~root/rpm_db_vangogh0/rpm \
    perl-TimeDate-1.1301-5.noarch.rpm

Copied missing RPM database directory.

2004-01-30 by Hans Blom
Why: To be able to install RPM packages in a proper way.
Where: Nodes "vangogh[1-8]".
How:
Copied the valid RPM package database from the directory vangogh0:/var/lib/rpm to the non-standard directory ~root/rpm_db_vangogh0/rpm (hence the --dbpath option) at the other nodes, and used that database directory for the time being to install packages. A better solution has to be found later on.
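
A sketch of such a copy (the exact command was not recorded):
rsync -a vangogh0:/var/lib/rpm/ ~root/rpm_db_vangogh0/rpm/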

Make a backup of the Cricket configuration and RRD files on a daily basis.

2004-01-29 by Hans Blom
Why: To keep the daily Cricket information available during a year.
Where: Node "vangogh0".
How:
Load as the "cricket" user the crontab file ~cricket/Crontab/crontab_input. In that file the ~cricket/Bin/backup_cricket_config_data script will be called on a daily basis to make tar-gzip archives of the Cricket configuration and RRD files in the directories ~cricket/Cricket_Daily/<mm>_<dd>.

Installed Speedy-CGI and let the Cricket CGI scripts use it.

2004-01-29 by Hans Blom
Why:
Cricket uses a CGI-Bin call per displayed image. This would make the display of Web pages with multiple images slow. Speedy-CGI makes the CGI-Bin processes persistent, which speeds up the display and lowers the host load.
Where: Node "vangogh0".
How:
  1. Install Speedy-CGI in /usr/local.
  2. Change the first lines of the scripts ~cricket/cricket-1.0.4/grapher.cgi and ~cricket/cricket-1.0.4/mini-graph.cgi:
    #!/usr/bin/perl -w
    # -*- perl -*-

    into:
    #! /usr/local/bin/speedy -w
    # -*- perl -*-
    #!/usr/bin/perl -w

Installed Cricket V. 1.0.4.

2004-01-28 by Hans Blom
Why: To display the SNMP counters of the DAS-2 Force10 switch.
Where: Node "vangogh0".
How:
  1. Installed the Perl RPM packages that are required for Cricket:
    rpm -Uvh perl-CGI-2.81-88.i386.rpm
    rpm -Uvh perl-DB_File-1.804-88.3.i386.rpm
    rpm -Uvh perl-TimeDate-1.1301-5.noarch.rpm
  2. Created a local user "cricket" under which Cricket will be installed and configured for the Force10.
  3. Unpacked the "Cricket" distribution in the home directory of the "cricket" user and followed the instructions described in ~cricket/cricket-1.0.4/doc/beginner.html.
  4. Configured the Force10 setup in cricket-config/f10 and compiled the new setup with ~cricket/cricket-1.0.4/compile.
  5. Installed the SSH key of user "jblom" at "fs2.das2.nikhef.nl" for the "cricket" user. Because no SNMP queries are allowed from a different domain, the sampling is executed at "fs2" with a comparable configuration, and the updated RRD files are copied with rsync to the corresponding data directory below ~cricket/cricket-data.
  6. Created a Cricket result Web directory in /var/www/html/cricket and created in that directory the appropriate symbolic links needed for the Web presentation of the results.
  7. Adjusted the HTTPD configuration file /etc/httpd/conf/httpd.conf such that in the directory /var/www/html/cricket CGI-Bin scripts are allowed and symbolic links will be followed (see the sketch after this list).
  8. Install a daily crontab job script in /etc/cron.daily/clean_cricket_cache such that files in the Cricket image cache directory /tmp/cricket-cache that are not modified in one day will be removed.
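
A sketch of what the corresponding httpd.conf section may have looked like (the directive values are assumptions, not recorded):
<Directory "/var/www/html/cricket">
    Options ExecCGI FollowSymLinks
    AddHandler cgi-script .cgi
</Directory>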

Changed shell and man resource files.

2004-01-28 by Hans Blom
Why: To be able to access properly the installed software and documentation.
Where: All "VanGogh" nodes.
How:
  1. For (t)csh shells set the executable path and required environment variables in /etc/csh.cshrc such that all software can also be found for non-interactive user shells. Remove environment variable settings from /etc/csh.login, because they are now already defined in /etc/csh.cshrc. Keep only typically interactive settings used for login shells.
  2. For bash define typically non-root settings by means of the new /etc/bashrc.user resource file that will be sourced by /etc/bashrc for non-root users.
  3. Added in /etc/man.config lines MANPATH_MAP <bin_dir> <man_dir> for software that had been installed at non-standard locations.

Added slocate directory.

2004-01-28 by Hans Blom
Why: locate did not work.
Where: All "VanGogh" nodes.
How:
mkdir /var/lib/slocate
chown root:slocate /var/lib/slocate

Copy the CVS root directory from "vangogh0" to a cluster node and to a UU host.

2004-01-26 by Hans Blom
Why: To have a backup at disk failure and also a tape backup (UU).
Where: Node "vangogh6" and UU host "fseven.phys.uu.nl".
How:
The script /root/Lib/Bin/rsync_cvs_root copies the CVS root vangogh0:/usr/local/CVS_Root to vangogh6:~backup/Backup/VANGOGH0/CVS_Root and to the UU host using rsync. The crontab entry vangogh0:/etc/cron.d/rsync_cvs_root takes care that the backup will be run each morning.

Copy local user directories from the "wgsara" cluster to the "VanGogh" cluster.

2004-01-25 by Hans Blom
Why: Keep the local user storage of the "wgsara" cluster after the removal of the "wgsara" cluster.
Where: Nodes "vangogh[1-5]".
How:
  1. Create tar archives of the local user storage in the directories wgsara<I>:/space/user/<user>, <I> = 0, ..., 4.
  2. Unpack the archives from node "wgsara<I>" in vangogh(<I>+1):/home/space/<user>. Server "vangogh0" is not used here to save space for the home directories.

Copied the home directories from "wgsara0" to "vangogh0" and created the corresponding user accounts.

2004-01-24 by Hans Blom
Why: Replace "wgsara0" as home server by "vangogh0".
Where: Node "vangogh0".
How:
  1. When there was already a user account at "vangogh0", store it in a sub dir. of the user home. Otherwise create the user accounts with the /usr/local/sbin/add_cluster_user script which provides an interactive interface to useradd and has an option to generate a command line for repetition at the other nodes.
  2. Copy the password entries from wgsara0:/etc/shadow to the cluster using the vipw command.
  3. Copy at "wgsara0" with tar -c ... | tar -x ... the home directories to "vangogh0", using the automounter.
  4. Make also "vangogh0" home directory server for the "wgsara" cluster by changing the user home directories with vipw.
  5. Remove the copied NFS mounted home directories from "wgsara0" such that users cannot get confused between their old and new home directories.

Backup the home directories from server "vangogh0" in two cluster nodes.

2004-01-23 by Hans Blom
Why: To have a backup at disk failures.
Where: Nodes "vangogh0" (src) and "vangogh2", "vangogh3" (dest).
How:
The script vangogh0:/root/Lib/Bin/backup_users makes for each user <user> a tar-gzip archive that will be created via SSH and a pipe at: vangogh[2-3]:~backup/Backup/VANGOGH0/home/<user>.tar.gz (local home directories) and vangogh[2-3]:~backup/Backup/VANGOGH0/home/nfs/<user>.tar.gz (NFS mounted home directories). The backup user "backup" is a local user. The archives of no longer existing home-directories at "vangogh0" will not be automatically removed. The crontab entry vangogh0:/etc/cron.d/backup_users takes care that the backup of the users will be repeated each Sunday morning.

Backup the home directories from server "wgsara0" in two cluster nodes.

2004-01-23 by Hans Blom
Why:
To have a backup at disk failures and in the case the copying of the user home directories from "wgsara0" to "vangogh0" should fail.
Where: Nodes "vangogh4" and "vangogh5".
How:
The script wgsara0:/root/Lib/Bin/backup_users makes for each user <user> a tar-gzip archive that will be created via SSH and a pipe at: vangogh[4-5]:~backup/Backup/WGSARA0/home/<user>.tar.gz (local home directories) and vangogh[4-5]:~backup/Backup/WGSARA0/home/wgsara0/<user>.tar.gz (NFS mounted home directories). The backup user "backup" is a local user. The archives of no longer existing home-directories at "wgsara0" will not be automatically removed. The crontab entry wgsara0:/etc/cron.d/backup_users takes care that the backup of the users will be repeated each Sunday morning.

Installed non-standard software in /usr/local.

2004-01-22 by Hans Blom
Why: Software required to run tests or needed for the ease of use.
Where: All "VanGogh" nodes.
How:
Run configure, make, and make install for the various packages that are installed. The original software distribution archives are stored in wgsara1:/usr/local/src. Among others, the following software (plus documentation) has been installed:

Configured amd to mount the user home directories at the server.

2004-01-21 by Hans Blom
Why: To have only one home directory per user at the cluster.
Where: Nodes "vangogh[1-8]".
How:
  1. Added to /etc/sysconfig/amd the line
    MOUNTPTS='/home/nfs /etc/amd.home'
    such that when /home/nfs/* is accessed, the remote home root directory tree will be mounted as described in the config. file /etc/amd.home.
  2. Created the automount config. file /etc/amd.home such that vangogh0:/home/nfs will be mounted at /.automount/home when /home/nfs/<user> has been accessed. Also a symbolic link /home/nfs/<user> -> /.automount/home/<user> will be created (see the sketch below).
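
A heavily hedged sketch of what the /etc/amd.home map may have contained (amd map syntax; only rfs:=/home is documented, in the 2004-04-13 entry, the rest is an assumption):
* type:=nfs;rhost:=vangogh0;rfs:=/home;sublink:=nfs/${key}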

Added the management subnet from the "wgsara" and "VanGogh" clusters to the TCP wrapper library for the portmapper and NFS mount daemon.

2004-01-21 by Hans Blom
Why:
To be able to allow NFS mounts from the "wgsara" and "VanGogh" cluster nodes at "vangogh0".
Where: Node "vangogh0".
How:
Added the following lines to /etc/hosts.allow:
portmap:    195.169.124.32/255.255.255.224
rpc.mountd: 195.169.124.32/255.255.255.224

It is required to specify IP addresses for portmap.

Exported the NFS mounted home root /home/nfs to all hosts belonging to ".saradomain".

2004-01-21 by Hans Blom
Why:
To be able to NFS mount the user home directories from the "wgsara" and "VanGogh" cluster nodes at "vangogh0".
Where: Node "vangogh0".
How: Added the line /home/nfs *.saradomain(rw) to /etc/exports.

Limiting the remote access to the "VanGogh" cluster to SSH.

2004-01-21 by Hans Blom
Why: Raise the security level by disabling, among others, telnet.
Where: All "VanGogh" nodes.
How:
Put the single line ALL: ALL in /etc/hosts.deny. Added the line sshd: ALL to /etc/hosts.allow.

Defining the management and test subnet settings.

2004-01-21 by Hans Blom
Why: To allow proper access from management and test hosts.
Where: All "VanGogh" nodes.
How:
  1. Edit the interface configuration files
    /etc/sysconfig/network-scripts/ifcfg-ethi, i = 0, ... .
    Important is that:
    • Only in ifcfg-eth0 should GATEWAY=195.169.124.33 be set; in the other config. files it should be left undefined. Otherwise routes to bad gateways would be defined.
    • PEERDNS="no" and BOOTPROTO="none" should be defined for all interfaces.
  2. Rebooted the node after the reconfiguration to get the network properly reconfigured.

Defining hostname and the default gateway from the management subnet.

2004-01-21 by Hans Blom
Why: To make access possible to hosts not in the management subnet.
Where: All "VanGogh" nodes.
How:
Define the HOSTNAME and GATEWAY (= 195.169.124.33) in /etc/sysconfig/network.

Defining name servers that are in the neighbourhood.

2004-01-21 by Hans Blom
Why: Proper and fast resolving of hostnames.
Where: All "VanGogh" nodes.
How: Changed the "nameserver" config. variables in /etc/resolv.conf.

Added addresses of all related hosts from inside and outside of the cluster. The management names of the cluster hosts are set to: vangogh[0-8].saradomain. The domain .saradomain has also been used for the old "wgsara" cluster.

2004-01-21 by Hans Blom
Why:
  1. To define hostnames at all cluster hosts: no DNS available at management subnet.
  2. Easy and consistent aliases to related hosts.
  3. The introduction of ".saradomain" makes it possible to treat in configurations the "wgsara" and "VanGogh" clusters as one cluster without the need of specifying hostnames.
Where: All "VanGogh" nodes.
How: Edited /etc/hosts.

Installed the portable version of MPI V. 1.2.5.

2003-10-31 by Hans Blom
Why: To support MPI oriented programs.
Where: All "VanGogh" nodes.
How:
configure --prefix=/usr/local/MPI -rsh=ssh
make
make install

Categories
CategoryLogs