Thursday, March 28, 2013

Tests on Linux Real-Time Kernel


http://www.zeromq.org/results:rt-tests-v031

Conclusion: "Our tests prove that the real-time Linux kernel, specifically, SUSE Linux Enterprise Real Time 10 SP2, is capable of eliminating latency spikes. It is expected that our results would be even more favourable for the real-time Linux kernel if these tests were run on boxes loaded with other tasks, rather than on a clean and idle test environment."

Thursday, March 21, 2013

This one reverses the bits in a 32-bit word


   /* n is assumed to be a 32-bit unsigned integer (e.g. uint32_t) */
   n = ((n >>  1) & 0x55555555) | ((n <<  1) & 0xaaaaaaaa);  /* swap adjacent bits */
   n = ((n >>  2) & 0x33333333) | ((n <<  2) & 0xcccccccc);  /* swap 2-bit pairs */
   n = ((n >>  4) & 0x0f0f0f0f) | ((n <<  4) & 0xf0f0f0f0);  /* swap nibbles */
   n = ((n >>  8) & 0x00ff00ff) | ((n <<  8) & 0xff00ff00);  /* swap bytes */
   n = ((n >> 16) & 0x0000ffff) | ((n << 16) & 0xffff0000);  /* swap 16-bit halves */

Wednesday, March 20, 2013

Graphite chart Y-Axis scale changes with width and height of graph

https://answers.launchpad.net/graphite/+question/152690

I made the chart wider (1400 pixels for 700 minutes of time), thereby having more pixels than horizontal data points.

I was struggling to find the place to manage the legend for each chart. Check out the Apply Function menu, then go down into Special | Add values to legend name | {choices}
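For reference, the same knobs exist on the render API directly. A sketch of the kind of URL this works out to (host and target are placeholders, borrowing the whisper path from the entry further down; legendValue() is the function that menu item applies, as far as I can tell):

 curl -o busy_workers.png "http://graphite.example.com/render?target=legendValue(prod.web1.apache.busy_workers,'last')&width=1400&height=400&from=-700minutes"

width/height control the pixel count (hence the Y-axis behaviour above) and from controls the time window.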

Tuesday, March 19, 2013

Great article on NUMA and mysqld

http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
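The workaround that article settles on, as I recall, is to start mysqld with interleaved NUMA allocation so a single node's memory doesn't fill up and push the box into swap. Roughly (illustrative only; in practice this gets hooked into mysqld_safe/the init script, details in the article):

 numactl --hardware                                 # see per-node memory on the box
 numactl --interleave=all /usr/bin/mysqld_safe &    # interleave allocations across nodes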

Wednesday, March 13, 2013

Nagios OCP Daemon Howto

https://wiki.icinga.org/display/howtos/OCP+Daemon

iptables kmod auto-loading

Something like this happened to us recently...

http://backstage.soundcloud.com/2012/08/shoot-yourself-in-the-foot-with-iptables-and-kmod-auto-loading/
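A quick way to see whether a casual iptables -L has already dragged the conntrack modules in, and what the table limit currently is (standard commands; module and sysctl names vary a bit by kernel vintage):

 lsmod | grep -E 'ip_tables|conntrack'
 sysctl net.netfilter.nf_conntrack_max       # older kernels: net.ipv4.netfilter.ip_conntrack_max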

Monday, March 11, 2013

adjust retention time for carbon and resize whisper files

Some Graphite storage-schemas.conf examples out there only retain data for 1 day.


[default_1min_for_1day]
pattern = .*
retentions = 60s:1d


This comes as a surprise later when you go back to look at your data and it's not there... it's been rotated out by carbon due to the geometry of the whisper file. The whisper file has a finite size... ya dig?

Update the retention rule in /opt/graphite/conf/storage-schemas.conf


[default_1min_for_1year]
pattern = .*
retentions = 60s:525600


#adjust existing whisper files

find /opt/graphite/storage/whisper -type f -name "*.wsp" | xargs -I{} whisper-resize.py {} 60:525600

#restart carbon

/usr/bin/python /opt/graphite/bin/carbon-cache.py --config=/opt/graphite/conf/carbon.conf start


root@nagios4.sv3:~$ ls -la /opt/graphite/storage/whisper/prod/web1/apache/
total 43272
drwxr-xr-x 3 root root    4096 Mar 11 20:15 .
drwxr-xr-x 9 root root    4096 Feb 25 05:19 ..
-rw-r--r-- 1 root root 6307228 Mar 11 20:34 busy_workers.wsp
-rwxr-xr-x 1 root root   17308 Mar 11 20:15 busy_workers.wsp.bak


The old file is still there as the .bak; that's how big a file is when it keeps data for a single day at a 1-minute interval (~17 KB).
The other file represents a year at a 1-minute interval (~6 MB), which makes it easy to do capacity planning for monitoring.

find /opt/graphite/storage/whisper -type f -name "*.wsp.bak" | xargs -I{} rm -f {}
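To double-check that the resize actually took, whisper-info.py (it ships alongside whisper-resize.py) dumps the archive geometry; the path is just the file from the listing above:

 whisper-info.py /opt/graphite/storage/whisper/prod/web1/apache/busy_workers.wsp

It should report a maxRetention of 31536000 (one year in seconds) and an archive with 525600 points at 60 seconds per point.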

Saturday, March 9, 2013

script to get SHA-256 hashes of mysql table descriptions

#!/bin/bash

# Generate a SHA-256 hash of the whole database schema
# and a hash of each individual table's schema,
# so we can see where changes have taken place.

usage() {
cat <<EOF

usage: $0 -d DBNAME -h DBHOST

  -d  db name
  -h  db host

EOF
exit 1
}

while getopts "d:h:" OPTION; do
  case "$OPTION" in
    d) DB="$OPTARG" ;;
    h) DBHOST="$OPTARG" ;;
    \?) echo "Invalid Option: -$OPTARG" >&2
        usage
        exit 1 ;;
    *) usage
        exit 1 ;;
  esac
done

#enforce argument policy
[[ -z "$DB" ]] && usage;
[[ -z "$DBHOST" ]] && usage;

Q=`echo TRGtZ123Ec234REpKCg== | base64 -i -d -`

#dump the schema and hash the whole thing
#(--skip-dump-date keeps the trailing "Dump completed on ..." comment from changing the hash on every run)
DBSCHEMA=`mysqldump -h$DBHOST --no-data --skip-dump-date -p$Q -uroot $DB`
DHASH=`echo "$DBSCHEMA" | openssl dgst -sha256`
echo "schemadump:"$DHASH

#get tables in the db
TABLES=`mysql --skip-column-names -h$DBHOST -p$Q -uroot $DB -e "SHOW TABLES;"`

#show the tables so we can see exactly what is being hashed
echo $TABLES;

for i in $TABLES; do 

    TABLESCHEMA=`mysql -h$DBHOST -p$Q -uroot $DB -e "desc $i;"`
    THASH=`echo "$TABLESCHEMA" | openssl dgst -sha256`
    echo $i:$THASH

done
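Typical run (script name and hosts are placeholders):

 ./schema_hash.sh -d mydb -h db1.somedomain.com

It prints the hash of the full mysqldump first, then one table:hash line per table, so diffing today's output against yesterday's shows exactly which tables changed.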

Friday, March 8, 2013

bash arrays

http://www.thegeekstuff.com/2010/06/bash-array-tutorial/
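The basics from that tutorial, as a quick reminder (names here are just examples):

 hosts=(web1 web2 web3)             # declare an array
 echo "${hosts[0]}"                 # first element
 echo "${#hosts[@]}"                # number of elements
 for h in "${hosts[@]}"; do echo "$h"; done   # loop over all elements
 hosts+=(web4)                      # append an element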

Saturday, March 2, 2013

how to brain transplant a linux system from Dell to HP C-class Blade

How to brain transplant linux:

Use install media to bring new blade host to base OS.
- PXE Boot to rescue mode, follow instructions to shell
- Verify partitions (because these Blades had CentOS installed on them for testing purposes, the partitions should be OK, but best to be sure):

 fdisk -l
 Device    Boot      Start         End    Blocks   Id  System
 /dev/sda1   *           1       6774   54412123+ 83  Linux
 /dev/sda2           6775       7297     4200997+ 82  Linux swap / Solaris

Ensure you've mounted your disk properly with your rescue operation (mount should show /dev/sda1 mounted as /mnt/sysimage/)
 Unmount the pseudo-filesystems the rescue environment mounted (proc, sys, dev, selinux):
 umount /mnt/sysimage/proc
 umount /mnt/sysimage/sys
 umount /mnt/sysimage/dev/pts
 umount /mnt/sysimage/dev
 umount /mnt/sysimage/selinux

Remove the old OS, you don't need that anymore:
 cd /mnt/sysimage/
 rm -rf *
Remake your proc and sys and dev folders:
 mkdir proc sys dev

Take note of the IP you picked up from DHCP on vlan1:
 ifconfig eth0

+++
Login to your source system

Disable crontab schedules for various jobs

Shutdown application services and other running resources on the source system

Cleanup /var/spool/clientmqueue
 find /var/spool/clientmqueue -type f -mtime +1 -exec rm {} \;

Cleanup /home/backups/
 Verify source is not larger than 50GB

Pipe a tar of the file system of your source DM over to the new device with netcat.

On your new host:
 nc -l -p 5555 | tar xvvf -

On your source host, in a screen session (point nc at the IP the new blade picked up from DHCP):
tar cvvf - bin boot etc home lib lib64 lost+found media misc mnt net opt root sbin selinux srv tmp usr var | nc <new-host-IP> 5555


 *nc on CentOS does not have the -q option that modern variants of nc have

Once that is completed (they both should die elegantly), chroot into your new environment, make the appropriate changes to grub, fstab, and mtab, and then run grub-install /dev/sda to install the new MBR on the new drive:

chroot /mnt/sysimage
mount -t proc proc proc
mount -t sysfs sysfs sys
cd dev
MAKEDEV generic
grub-install /dev/sda
cd
vim /etc/mtab (change /dev/sda2 to /dev/sda1)
vim /etc/fstab (change LABEL=/1 to /dev/sda1 and LABEL=SWAP-sda3 to /dev/sda2)
vim /boot/grub/menu.lst (change all hd0,1 to hd0,0)
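Concretely, the menu.lst change is just the root device in each boot stanza, something like:

 # before
 root (hd0,1)
 # after
 root (hd0,0)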

Shutdown the source system and shut the switchports going to that system.
Remove the mac address line (the HWADDR= entry) from the network-scripts configs
Reboot the new HP blade server.

Once reconnected to the internet, verify nagios checks are coming back good.
 Deactivate any OMSA-specific checks for the DM in Nagios
 Configure the Dell OMSA gear not to start up:
 chkconfig dsm_om_connsvc off
 chkconfig dsm_om_shrsvc off
 chkconfig dsm_sa_ipmi off

Install the HP SIM Software *NOTE: voip1-8.sv3 are i686, and voip9 is CentOS 6.x*
For CentOS 5.x i686 servers (voip1-8):
 wget http://admin1-1.sv3.somedomain.com/hpsim/bootstrap.sh
 bootstrap.sh ProLiantSupportPack
For the CentOS 6.x x86_64 server (voip9): wget http://admin1-1.sv3.somedomain.com/hpsim/psp-9.10.rhel6.x86_64.en.tar.gz
For CentOS 5.x x86_64 servers (voip10-27): wget http://admin1-1.sv3.somedomain.com/hpsim/psp-9.10.rhel5.x86_64.en.tar.gz

yum install -y hp-health hp-smh-templates hp-snmp-agents hpacucli hpdiags hpmouse hponcfg hpsmh cpqacuxe

cd /tmp
wget http://labs.consol.de/download/shinken-nagios-plugins/check_hpasm-4.6.3.tar.gz
tar zxvf check_hpasm-4.6.3.tar.gz
cd check_hpasm-4.6.3
./configure --enable-hpacucli
make
cp -av plugins-scripts/check_hpasm /usr/local/nagios/libexec/

Added to /usr/local/nagios/etc/nrpe.cfg in command definition section:

command[check_hpasm]=/usr/local/nagios/libexec/check_hpasm $ARG1$
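From the Nagios server side, the matching active check goes through check_nrpe, along these lines (host and plugin options are illustrative; passing $ARG1$ like this also needs dont_blame_nrpe=1 in nrpe.cfg):

 /usr/local/nagios/libexec/check_nrpe -H voip10.sv3 -c check_hpasm -a '<check_hpasm options, if any>'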

Ran 'visudo' and changed Nagios permitted commands to:

nagios        ALL=(root) NOPASSWD: /usr/sbin/smartctl, /sbin/hpasmcli, /sbin/hpacucli, /usr/sbin/hpacucli
Defaults:nagios !requiretty

Re-enable (uncomment) the crontab schedules that were disabled earlier


Friday, March 1, 2013

twitter api notes


http://apiwiki.twitter.com/

API is entirely HTTP-based

The Twitter API supports UTF-8 encoding. Please note that angle brackets ("<" and ">") are entity-encoded to prevent Cross-Site Scripting attacks for web-embedded consumers of JSON API output. The resulting encoded entities do count towards the 140 character limit. When requesting XML, the response is UTF-8 encoded. Symbols and characters outside of the standard ASCII range may be translated to HTML entities.

Two APIs - REST and Search.

    The Twitter REST API methods allow developers to access core Twitter data. This includes update timelines, status data, and user information.

    The Search API methods let developers interact with Twitter Search and trends data. The main concern for developers, given this separation, is the effect on rate limiting and output format.


Rate Limiting

    REST API
        150 calls per hour
        The REST API does account- and IP-based rate limiting. Authenticated API calls are charged to the authenticating user's limit, while unauthenticated API calls are deducted from the calling IP address's allotment.
        Rate limiting only applies to methods that request information with the HTTP GET command. API methods that use HTTP POST to submit data to Twitter, such as statuses/update, do not affect rate limits.
        You can request whitelisting to make up to 20,000 requests per hour.

    Search API
        The Search API is rate limited by IP address. The actual limit is not specified but it is quite high.
        The Search API requires that applications include a unique and identifying User-Agent string. An HTTP Referrer is expected but not required.
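For poking at the REST limits by hand, the v1 API had a rate-limit status resource that, if memory serves, did not count against the quota; roughly:

 curl 'http://api.twitter.com/1/account/rate_limit_status.json'

Unauthenticated, that reports the calling IP's allotment; with authentication it reports the account's.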