LSI firmware Flash for FreeNAS

flashing my M1015 (LSI SAS2008 chip, LSI9240/9211 family, a.k.a. SAS9211-8i/SAS9240-8i) to IT mode for direct disk access:

These URLs are some interesting reads about flashing the firmware of this LSI HBA:

http://www.servethehome.com/ibm-serveraid-m1015-part-4/
http://forums.laptopvideo2go.com/topic/29059-lsi-92xx-firmware-files/
http://forums.laptopvideo2go.com/topic/29059-sas2008-lsi92409211-firmware-files/
http://forums.laptopvideo2go.com/topic/29059-sas2008-lsi92409211-firmware-files/page-5
http://brycv.com/blog/2012/flashing-it-firmware-to-lsi-sas9211-8i/
http://www.0x00.to/post/2013/04/07/Flash-IBM-ServeRAID-M1015-to-LSI9211-8i-with-UEFI-mainboard

LSI homepage for the latest IT-mode firmware (I used P20); this page has many tools:

http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9211-8i.aspx#tab/tab4

Lenovo driver page for the SAS HBA for the TS440 (has EFI shell / tools):

http://support.lenovo.com/us/en/downloads/ds101146

This is the EFI shell that worked for my TS440 (precompiled x86_64 UEFI Shell v1 binary):

https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Interface#UEFI_Shell
https://svn.code.sf.net/p/edk2/code/trunk/edk2/EdkShellBinPkg/FullShell/X64/Shell_Full.efi

This is the P20 driver download location I used for FreeNAS.

Search for the SAS 9211-8i Host Bus Adapter to find the firmware:
http://www.avagotech.com/support/download-search

OR

download the FreeBSD P20 driver file directly:
http://docs.avagotech.com/docs/12349306

Remember:

The TS440 looks for a specific path and filename: copy the x86_64 binary to '\efi\boot\bootx64.efi'. You need to rename the shell binary to 'bootx64.efi'; that is what the TS440 looks for, and only in that specific directory.
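From the EFI shell, the flash itself follows the usual sequence from the guides linked above. A sketch only: the firmware/BIOS filenames come from the P20 package you downloaded, and on the M1015 you typically must first wipe the SBR with megarec from DOS, as the ServeTheHome series explains:

sas2flash.efi -listall                          # confirm the controller is visible
sas2flash.efi -o -e 6                           # erase the flash; do NOT power off now
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom   # write the IT firmware and boot ROM
sas2flash.efi -listall                          # verify the new firmware version

If the SAS address got wiped, 'sas2flash.efi -o -sasadd <the address from the card's sticker>' puts it back.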

Posted in FreeNAS at December 21st, 2015. No Comments.

Should Redhat VMs share VMware hosts with Windows guests?

I believe not, and here's why:

 

Benefits of a dedicated Red Hat VMware environment:

  1. Licensing benefits:

Components used in the Red Hat environment: MySQL, Red Hat Linux, the Red Hat High Availability extension, the Red Hat Resilient Storage add-on, the Red Hat Scalable File System (for CIFS/NFS), JBoss, Oracle.

If we share hosts, every VM will need its own license/subscription (which costs the same per unit as the unlimited-guest subscription, yes). Suppose we build 20 VMs across different ESX clusters: that is $3,000 x 20 = $60,000 per year in licenses for just the Red Hat OS, compared to $3,000 x 1 = $3,000 per blade per year with the unlimited-guest type on a dedicated blade. Interestingly, Red Hat charges the same either way. Also, this is a yearly license (called a subscription), and all the Red Hat products we intend to use (JBoss, Red Hat Cluster, the cluster filesystems, which we are also going to use for the Windows Samba fileserver cluster, etc.) work on the same licensing model.

This is a huge cost benefit because every product is licensed in a similar way. MySQL, for example, is licensed per 1-2 cores at about $12,000/year, so one license per dedicated blade covers unlimited guests, while a dispersed environment needs one per VM (20 VMs would need 20 x $12,000/year = $240,000 per year just for MySQL). All the others are like this, and the numbers are extremely high.

 

  2. Architecture benefits:

 

  • The VMware options that allow one VM to kill another when required (needed for HA and clusters) can be enabled safely and securely in a dedicated Red Hat environment. (Windows and Red Hat environments are managed by separate groups in my company.)
  • Easier resource management and planning with blades dedicated to Red Hat (again because Windows and Unix are managed by different groups).
  • The ability to take filesystem snapshots of a VM for quick backout of a change. (It is disabled on the Windows-server VMware hosts, which have a different mindset and a different set of problems.) We already do this logically in zLinux by taking DDR copy backups; we would not get those features if we shared with Windows, due to the storage restrictions.
  • Enterprise features like VMware HA, vMotion, Storage vMotion, and DRS will be unusable outside a dedicated cluster, because some of them, like DRS, cannot be used until all the Red Hat, MySQL, etc. licenses are aligned to the unlimited-guest type.

 

  3. Functionality benefits:

 

The Unix team will be able to manage VMware storage and networking, spin up VMs, and lean on the VMware support team for ESX-server-level configuration and maintenance, because dedicated blades give us more control options. This speeds up our delivery time drastically and builds the skill across the teams.

  • Ability to disable/enable Ethernet connections and create internal vSwitches for clusters, iSCSI filesystems, etc. (by the Unix group)
  • Ability to clone a VM at odd hours and change its time (by the Unix group)
  • Ability to change the CPU and memory of guests when required (by the Unix group, as we do in zLinux)

 

Even to begin an HA strategy, a dedicated Red Hat environment will let us configure the VMware options for storage, networking, snapshotting, etc. to support the Red Hat environment in a better manner.

Because Linux is not as resource-hungry as Windows, a dedicated environment will also ensure availability, performance, and configurability in line with HA and other requirements.

I guess I've made my point.

Posted in Linux, RedHat, VMware at September 22nd, 2012. No Comments.

Our Experiences with Virtualized SUSE Linux on our Mainframe: SLES11 SP2 under z/VM

Overall we are very satisfied with the virtualization capabilities of z/VM on our two z196s and our z9. Great performance, reliability, functionality, and stability all come together.

It is not always a 'bed of roses'. There are times when we run into issues, but most of the time it just works out well. I am listing some of the challenges we faced recently to help others planning to run the penguin on the mainframe.

File Scan issue

Why do file scans occur? How are we addressing it? How is the grouping scheduled, or forced, to stagger the filescan? How can we permanently stop the scan if we need to?

 

We keep our zLinux app data on fileservers (NFS and Samba) with large, multi-terabyte filesystems. When we upgraded them from SLES9 to SLES11, all the filesystems were recreated at almost the same time. So the problem was that once the default 180-day fsck interval was over, all the filesystems would scan in the same boot, and one boot of the server took 6+ hours: a definite outage. (They say fsck is not required for ext3 since it is journaled, but we still want to be safe with our data and didn't want to disable it completely.)

Our options were:

1) Disable the automatic fsck and run it manually whenever we want (tedious and prone to mistakes).

2) Stagger the filesystems into groups of similar sizes and use tune2fs to tune the parameters that trigger the scan, so that the groups don't all scan together. This needs scheduled reboots of the servers so that we keep control and all the filesystems don't scan at once; this is our current solution (see the tune2fs sketch after this list).

3) We are looking into whether we can disable the scan at boot time and do the fsck from a script instead, so that we can run scans in parallel and/or speed up DASD access using parallel HyperPAV, etc.

4) If we can't do option 3, we might have to break up the filesystems into smaller, business-unit-dedicated fileservers to resolve the issue permanently (until it outgrows them). This is the least-recommended approach, as we are not maxing out anything (memory, bandwidth, CPU) on the fileserver.
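For option 2, the staggering boils down to giving each group its own fsck trigger values with tune2fs. A rough sketch; the device names and intervals are made-up examples:

# group 1: force a check every 120 days or 20 mounts
tune2fs -i 120d -c 20 /dev/dasdb1
# group 2: every 150 days or 25 mounts
tune2fs -i 150d -c 25 /dev/dasdc1
# group 3: every 180 days or 30 mounts
tune2fs -i 180d -c 30 /dev/dasdd1
# confirm what is currently set
tune2fs -l /dev/dasdb1 | egrep -i 'mount count|check'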

 

IBM GA Driver 93 firmware update on the Mainframe –

What did Driver 93 actually cause? Why did it totally disable us? How did we circumvent it?

How did we manage to recover the SMT, then patch it so we could patch the other guests that were affected?

If the HiperSockets network is disabled, do we fail over to use the internal network?

 

The Driver 93 update required a Linux kernel newer than 2.6.32.29-0.3.1. Most of our images had been patched 3 months earlier and were on kernel 2.6.32.27-0.2-default, which did not meet the requirement (except, luckily, our production environment, on kernel 2.6.32.59-0.3-default). Driver 93 brought a new feature called QIOASSIST (Queued I/O Assist) for HiperSockets, and because IBM decided to enable it by default, the Linux kernels using HiperSockets panicked. Many of our guests, including infrastructure servers like our SMT (Subscription Management Tool) server, abruptly started kernel-panicking.

The solution was to re-patch the failing systems to the latest kernel, but patching requires SMT, and SMT itself was failing. Fortunately z/VM offers a 'CP SET QIOASSIST OFF' command at the guest level (or 'NOQIOASSIST' in the guest's directory entry). We applied it to the SMT server first, so SMT became available for the other guests to patch from, and then to every guest we wanted to patch, because we did not want a guest to panic while it was being patched.
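In concrete terms, the workaround looked roughly like this; treat it as a sketch and check the CP command reference for your z/VM level:

* one-off, from the guest's console before IPLing Linux:
#CP SET QIOASSIST OFF

* or permanently, in the guest's directory entry:
OPTION NOQIOASSIST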

The kernel panic happened during the HiperSockets initialization phase, so the guests that were not using HiperSockets were safe, but we use it on almost every guest.

HiperSockets is on a different network segment, and thus there was no failover option.

SAMBA –

What development need or business need initiated the requirement to implement Samba?

Files had to be shared between Windows and Linux. Using an NFS client on Windows was not efficient: the ACLs were limited, there were permission-related issues, the client software license had to be purchased, and it did not support multiple NFS shares on the same drive letter. Samba turned out to be the better option because of its ease of use and simplicity, the fact that no coding changes were required, and because it is free.

How was it, procedure-wise, to implement?

Implementation is very easy: install some packages, configure the shares, set up user-ID mapping, and set passwords.
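For flavor, here is what a minimal share of the kind we set up looks like; the workgroup, share name, path, and group below are placeholder examples, not our real config:

# /etc/samba/smb.conf
[global]
        workgroup = EXAMPLE
        security = user

[appdata]
        path = /srv/appdata
        valid users = @appusers
        read only = no

User-ID mapping and passwords then amount to creating matching Linux users and running 'smbpasswd -a <user>' for each one.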

Windows file systems to Linux

How would you proceed to develop a strategy to migrate Windows file systems to Linux?

To successfully migrate Windows file servers to zLinux, Samba seems to be the way: Active Directory integration is required, the filesystem and shares must support ACLs, and HA is important. OSA performance and throughput matter because with our VDI implementation we put even PST files on the network. Backups should complete on time, and with lots of small files an efficient backup system will be required.

What steps are necessary? What are the implications of using NetApp versus just SAN-attached FCP?

Would zFCP on zLinux be more complicated than a Red Hat-on-Intel migration?

Steps would be:

  • Choose the disk devices: DASD or zFCP.
  • Choose the cluster filesystem, because HA is important. Novell SLES has a cluster-suite option, and it is free for zLinux.
  • Configure Samba for HA.
  • Configure shares and permissions.
  • Configure the Active Directory integration of Samba (sketched below).
  • Size the OSAs; possibility of link aggregation?
  • Design a reliable and efficient backup infrastructure.
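For the Active Directory integration step, a rough sketch of the Samba 3-era setup we mean (realm, workgroup, and ID ranges are placeholders):

# /etc/samba/smb.conf (domain member)
[global]
        security = ads
        realm = EXAMPLE.COM
        workgroup = EXAMPLE
        idmap uid = 10000-20000
        idmap gid = 10000-20000

# then join the domain and verify winbind sees the AD users
net ads join -U Administrator
wbinfo -u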

One option is to use a NetApp appliance, but we currently don't have the necessary skill to manage it and are reluctant to take it on. It can be good if designed properly.

NetScaler / native Linux LB –

Why were we not able to upgrade the TCP listener guest to SLES11 outright?

Because the new versions of the IBM load balancer forced us to use the MAC forwarding method, the option for one of our critical applications, a TCP listener, to use the TCP load balancer was gone.

What problems did we see?

We couldn't load-balance TCP, so a different load balancer had to be used, preferably inside the System z.

Why did the WebSphere Edge technology not suffice? What is being used now on SLES9 for the LB?

The NAT forwarding method used in IBM load balancer version 6 worked fine for a TCP load-balancing application, but the new version only supported MAC forwarding.

How was NetScaler implemented? Be as detailed as possible. How would we accomplish an HA design with NetScaler?

NetScaler has HA built in: two NetScalers can work together to provide HA within the load balancer while load-balancing the TCP application. It is a very powerful but pricey load-balancing solution for the enterprise. NetScaler has health checks for the real servers, an easy web interface to manage it, VLAN support, and many advanced features for routing traffic.

How did you develop the native Linux LB? Be as detailed as possible. How would we implement an HA design?

We tested a self-built load-balancing solution from the open-source community: a combination of Pen, Pound, UCARP (the equivalent of VRRP in the open world), and HTTP-check/TCP-check Perl scripts from the Nagios world. It worked well in a VMware image for our TCP listener service hosted in zLinux, and we are currently working to compile the same stack on System z, as all of these packages' sources are available for free.

UCARP is the open-source world's IP failover protocol, like VRRP, for which Cisco claims ownership.
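To give an idea of how these pieces fit together, here is a rough sketch; all addresses, ports, and script paths are examples, not our production setup:

# pen spreads inbound TCP connections across two real servers
pen 8080 10.0.0.11:8080 10.0.0.12:8080

# ucarp floats the service IP between the two LB nodes, VRRP-style
ucarp -i eth0 -s 10.0.0.2 -v 42 -p secret -a 10.0.0.100 \
      -u /etc/vip-up.sh -d /etc/vip-down.sh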

Why do these solutions work so well with the SLES11 upgrade requirements?

Keeping the SLES11 systems updated and running the latest kernel has really given us a lot of features and enhancements.

DR recovery of zLinux guests

Describe the process you used to get a successful, recoverable FlashCopy backup.

Initially, our zLinux FlashCopy backups were failing for two main reasons:

  • Cached filesystem data in Linux (data in memory not yet flushed to disk at the time the FlashCopy ran)
  • Different FlashCopy timestamps across the volumes of a zLinux guest, including members of an LVM

The solution was simple:

  • We wrote a scheduled sync script that issued the sync command in all zLinux guests just before the FlashCopy triggered (sketched below)
  • We implemented consistency groups in the FlashCopy process to make sure all volumes of each guest get flashed at the same time
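The sync script itself is nothing fancy; a minimal sketch, assuming passwordless SSH and an example guest list:

#!/bin/sh
# flush dirty buffers on every zLinux guest just before the FlashCopy fires
for guest in guest1 guest2 guest3; do
    ssh root@$guest 'sync; sync' &
done
wait   # all guests have synced; the FlashCopy can trigger now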

Could we run the EMC NetWorker server and/or client on zLinux?

The EMC NetWorker server is not supported on zLinux, but the client is very well supported. If we go zFCP, the only way to back up in our environment would be to use the EMC NetWorker client.

WebSphere Application Server v8 cutover (WAS8) –

Describe the type of problem you and Martin had and what scripts you wrote to reduce the implementation time.

Since we were upgrading the entire zLinux cell to WAS8 together using a parallel approach, 35 servers, most of them WebSphere app servers, had to be upgraded at once. A parallel environment was built with the WAS8 binaries and all the necessary upgraded software stack of agents. The challenge was to switch the identity of these servers (VM username, IP address, hostname, HiperSockets IP) and sync the data within the limited 1.5-hour window available.

The solution was to:

  • Implement REXX scripts to rename the VM users en masse.
  • Implement scripts to change the identity and all parameters of the 35 listed servers automatically.
  • Implement rsync scripts to sync the data for consistency purposes, and run them at cutover time (shown below).
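The rsync part was conceptually a one-liner per server; the paths and hostname here are illustrative only:

# keep the parallel server's data consistent with the live one
rsync -aHx --delete /opt/WebSphere/profiles/ newserver:/opt/WebSphere/profiles/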

All was done perfectly, in time, with zero issues.

Posted in Linux, SystemAdmin, z/VM, zLinux at August 4th, 2012. No Comments.

How to install and configure OpenVPN on the Samsung AT&T Captivate (Froyo / Android 2.2)

 

 

Who doesn't run an OpenVPN server? I run one at my home to access my home computers. You may be accessing an OpenVPN service somewhere on the internet for security reasons.

I too needed my AT&T Captivate to connect to it. I searched through the amazing XDA-Developers website and found tons of information on it, and even a wonderful YouTube video demonstrating how to do it. But it didn't work for me, because of several problems:

 

1) wrong version of the openvpn binary

2) incorrect tun.ko module (which provides the tun/tap device OpenVPN needs)

3) incorrect links for the route/ifconfig commands

 

Most of the installation work is done by two apps from the Market: "OpenVPN Installer" and "OpenVPN Settings". I installed the OpenVPN binary and friends using the installer app, into /system/xbin. But the version it installed could not apply the routes pushed by the OpenVPN server, and I could not get an IP, so I had to use a different version of the OpenVPN binary.

So it's going to be a little tricky. Stay with me and it will work!!
First, go through the following basic adb usage tutorial I found; you will need it to transfer files and pull logs from the phone:
http://www.londatiga.net/it/how-to-use-android-adb-command-line-tool/

 

Summary will look like this:

1) Install the two OpenVPN packages from the Market: "OpenVPN Installer" and "OpenVPN Settings"

2) Install the OpenVPN binary using the installer app, to the location /system/xbin (it will ask).

3) Set up the OpenVPN Settings app as follows:

  1. Check 'load tun kernel module' and use 'insmod', not 'modprobe', in the TUN module settings (Linux folks know what I am talking about; don't worry if you don't)
  2. Path to configurations: /sdcard/openvpn
  3. Path to the openvpn binary: /system/xbin/openvpn
  4. Keep 'OpenVPN' checked in the main menu of the settings app.

4) Reboot the phone.

5) Your existing OpenVPN config will most likely work just fine (I had to make no changes to it at all once I used the OpenVPN binary version mentioned below).
6) Copy all configs and certs to the folder /sdcard/openvpn; they should look like:

-rwxrwxr-x system   sdcard_rw     1188 2010-10-14 00:49 ca.crt
-rwxrwxr-x system   sdcard_rw     3467 2011-05-21 20:46 user23.crt
-rwxrwxr-x system   sdcard_rw      668 2011-05-21 20:46 user23.csr
-rwxrwxr-x system   sdcard_rw      887 2011-05-21 20:46 user23.key
-rwxrwxr-x system   sdcard_rw      993 2011-05-25 01:38 user23.ovpn
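For reference, a client config like user23.ovpn above typically contains something along these lines (the server address and port are placeholders; your own working config should carry over unchanged):

client
dev tun
proto udp
remote vpn.example.com 1194
ca /sdcard/openvpn/ca.crt
cert /sdcard/openvpn/user23.crt
key /sdcard/openvpn/user23.key
persist-key
persist-tun
verb 3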

7) Copy android22tun.ko from the .zip package attached HERE

8) Reboot the phone.

Using OpenVPN Installer on the phone:
1.) install the binaries to /system/xbin
2.) select the path to ifconfig and route -> /system/xbin/bb

The link below allows the ifconfig and route commands to run after the connection comes up:

make a link to xbin, "ln -s /system/xbin /system/xbin/bb", from an adb shell (see the adb tutorial above).

Remember: to run the ln command, or to do any write activity in your /system folder, you first need to remount that filesystem read-write using the following command:


mount -o remount,rw /dev/block/stl6 /system

 

This is what your /system/xbin folder should look like for the related files:

-rwxrwxr-x root     root       903648 2011-05-25 01:28 openvpn
-rwxrwxr-x root     root       197193 2011-05-23 23:20 android22tun.ko
lrwxrwxrwx root     root              2011-05-25 00:42 bb -> /system/xbin

I don't think it matters, but I still wanted to mention it: I used SuperOneClick to root my phone.

This OpenVPN binary (below) doesn't work with the busybox version that got installed when I rooted my phone:
http://github.com/downloads/fries/android-external-openvpn/openvpn-static-2.1.1.bz2

However, this one does work, so download it:
http://github.com/downloads/fries/android-external-openvpn/openvpn-static.bz2

Overwrite the openvpn file in /system/xbin/ using the adb tools, as shown in the listing above.

<><><><><>

And here is the demo on a connected system!!!

 

c:\AC_SWM\> adb shell
$ su
su
# /system/xbin/route
/system/xbin/route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.72.13  *               255.255.255.255 UH    0      0        0 tun0
192.168.72.0   192.168.72.13  255.255.255.128 UG    0      0        0 tun0
10.213.28.0    *               255.255.255.0   U     0      0        0 pdp0
192.168.12.0    192.168.72.13  255.255.255.0   UG    3      0        0 tun0
default         10.213.28.1    0.0.0.0         UG    0      0        0 pdp0
 
# /system/xbin/traceroute 192.168.12.1
/system/xbin/traceroute 192.168.12.1
traceroute to 192.168.12.1 (192.168.12.1), 30 hops max, 38 byte packets
1  192.168.72.1 (192.168.72.1)  842.641 ms  513.170 ms  1269.157 ms
2  192.168.12.1 (192.168.12.1)  743.758 ms  856.957 ms  637.559 ms
# /system/xbin/traceroute 192.168.12.13
/system/xbin/traceroute 192.168.12.13
traceroute to 192.168.12.13 (192.168.12.13), 30 hops max, 38 byte packets
1  192.168.72.1 (192.168.72.1)  585.472 ms  677.018 ms  565.116 ms
2  192.168.12.13 (192.168.12.13)  578.847 ms  863.204 ms  646.821 ms
#
 
# /system/xbin/ifconfig
/system/xbin/ifconfig
lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:1273 errors:0 dropped:0 overruns:0 frame:0
TX packets:1273 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:62120 (60.6 KiB)  TX bytes:62120 (60.6 KiB)
 
pdp0      Link encap:Point-to-Point Protocol
inet addr:10.213.28.91  P-t-P:10.213.28.91  Mask:255.255.255.0
UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
RX packets:1012 errors:0 dropped:0 overruns:0 frame:0
TX packets:1141 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:705711 (689.1 KiB)  TX bytes:87133 (85.0 KiB)
 
svnet0    Link encap:UNSPEC  HWaddr A0-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
UP POINTOPOINT RUNNING NOARP  MTU:65541  Metric:1
RX packets:531 errors:0 dropped:0 overruns:0 frame:0
TX packets:297 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:14493 (14.1 KiB)  TX bytes:7291 (7.1 KiB)
 
tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:192.168.72.14  P-t-P:192.168.72.13  Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
RX packets:20 errors:0 dropped:0 overruns:0 frame:0
TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:1404 (1.3 KiB)  TX bytes:1128 (1.1 KiB)

 

All the credit goes to the OpenVPN guys for making such a wonderful 'home' VPN, and to XDA-Developers for all this information.


This article is still incomplete; please check back later, until this line disappears. You are encouraged to post a comment if you have a question, and I will get back to you.

Posted in Android, Uncategorized at May 25th, 2011. 3 Comments.

dos2unix command emulation in Linux

When you create a file on a Windows OS and copy it to Unix or Linux servers, you will find ^M characters showing up. Interestingly, they do not show up in the more command, but they certainly do in the vi editor.

To convert such a DOS file to Unix format, Solaris has a dos2unix command (and vice versa), but this command mostly does not exist on Linux (I couldn't find it in SLES for System z).

The trick is to run the following command:

mv sync-it.sh sync-it.sh.bak; tr -d '\r' < sync-it.sh.bak > sync-it.sh

The above example converts the file sync-it.sh from DOS format (carriage return plus line break) to UNIX format (line break only).

* the carriage return is represented by '\r' and
* the line break by '\n'
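If your Linux has GNU sed, an equivalent one-liner that strips the carriage returns in place is:

sed -i 's/\r$//' sync-it.sh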

Posted in Linux, Shell, SystemAdmin at October 6th, 2010. No Comments.

Script command to Update config file and avoid duplicate entries in Shell

There are times when you have multiple servers where you need to edit files like /etc/fstab, but you want to make sure you don't add duplicate entries to the file. All of us Unix admins know what that means.

LINE='tools:/mnt/spool      /mnt/spool    nfs     intr    0       0'
grep -qF "$LINE" /etc/fstab || echo "$LINE" >> /etc/fstab
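Quoting "$LINE" on the echo preserves the whitespace inside the fstab entry, and grep's -F flag matches the line literally rather than as a regular expression, so the test stays correct even though the entry contains characters like '/' and '.'.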
Posted in Linux, Shell, Solaris, SystemAdmin, UNIX at October 6th, 2010. No Comments.

Logrotation script

Here is a script that I sometimes use to implement log rotation on my Solaris servers for generic agent log files. I started using it before Solaris 9; the 'logadm' utility that exists in Solaris 9 and 10 has some pretty neat features.

#!/bin/ksh
# logrotate -- A script to roll over log files
# Usage: logrotate /var/log/authlog [mode [revs] ]
FILE=$1
MODE=${2:-644}
DEPTH=${3:-4}
DIR=`dirname $FILE`
LOG=`basename $FILE`
DEPTH=$(($DEPTH - 1))
if [ ! -d $DIR ]; then
   echo "$DIR: Path does not exist"
   exit 255
fi
cd $DIR
while [ $DEPTH -gt 0 ]
  do
     OLD=$(($DEPTH - 1))
     if [ -f $LOG.$OLD ]; then
         mv $LOG.$OLD $LOG.$DEPTH
     fi
   DEPTH=$OLD
  done
 
if [ $DEPTH -eq 0 -a -f $LOG ]; then
    mv $LOG $LOG.0
fi
cp /dev/null $LOG
chmod $MODE $LOG
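For comparison, logadm on Solaris 9/10 can register roughly the same rotation with one line (a sketch; check the logadm man page on your release for the exact flags):

logadm -w /var/log/authlog -C 4 -m 644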
Posted in Shell, Solaris, SystemAdmin, UNIX at October 4th, 2010. No Comments.

How to change the hostname in a Sun Container

The easy way to change the hostname of a Sun container server is to:

  1. edit /etc/nodename to reflect the new name
  2. revisit the /etc/hosts file to verify that it has the IP/name of the container correct

There is no need for the name of the container to be the same as the hostname of the container, but I prefer to keep them the same. Hence:

  1. I change/rename the zonepath, and
  2. change the zone name using zonecfg, etc. (a sketch follows below).
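A hedged sketch of that rename, assuming a halted zone and example names/paths (verify against zonecfg(1M) and zoneadm(1M) on your release):

zoneadm -z oldzone halt                      # the zone must be down to rename it
zonecfg -z oldzone 'set zonename=newzone'    # rename the zone itself
zoneadm -z newzone move /zones/newzone       # relocate/rename the zonepath
zoneadm -z newzone boot

Then fix /etc/nodename and /etc/hosts inside the zone as described above.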

That’s it!

Posted in Solaris at October 3rd, 2010. No Comments.