
Tuesday, March 15, 2011

Power Path Migration Enabler Host Copy

PPME requires PowerPath to be fully installed, and PowerPath must manage the devices. A multipathing license is not required, but the server must be under PowerPath maintenance (a single active path is used while redundant paths are in an "unlicensed" state).

PowerPath can present devices as native devices or pseudo devices. PowerPath automatically creates pseudo devices on supported operating systems.
A pseudo device is a special device created by PowerPath. PowerPath supports up to 32 paths per device, and it supports pseudo devices on all platforms except HP-UX.
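
A quick way to see the pseudo devices PowerPath has created and the native paths behind them (a sketch; exact output varies by platform and PowerPath version):
# powermt display dev=all    # all PowerPath-managed devices and their paths
# powermt display paths      # path summary per storage-system port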

PPME steps:
  1. Install Powerpath.
  2. Install PPME license.
  3. Identify source and target LUNs.
  4. Zone and mask target LUNs to the host
  5. Setup migration (use powermig setup)
  6. Start bulk data copy and cloning of application writes. (use powermig sync)
  7. Query migration status. (use powermig query)
  8. Pause/Resume migration (use powermig pause/resume)
  9. After the copy completes, test the result by directing reads to the target instead of the source. (use powermig selectTarget)
  10. Commit the migration (use powermig commit)
  11. Clean up the migration (use powermig cleanup)
  12. Remove the source device.
powermig commands (an example end-to-end session follows this list):
  • setup - specifies the source and target devices and technology type used in the migration
  • sync - starts data copy and write cloning process while application continues uninterrupted
  • info - displays information about one or all migration sessions
  • query - displays information about one migration session
  • pause/resume - pauses/resumes a migration process
  • throttle - change the bulk data rate for a migration
  • abort - aborts a migration and returns the migration session to setup state
  • selectSource - designates the source logical unit as the recipient of all I/O requests
  • selectTarget - designates the target logical unit as the recipient of all I/O requests
  • commit - commits the migration by permanently designating the target as the recipient of all I/O requests. Source device is no longer synchronized with the target
  • undoredirect - stops redirection of I/O from source to target. Only use this when the source logical unit is a native device and application I/O is stopped.
  • cleanup - removes selected data on the source or target logical unit so that the abandoned device is left in a safe state.
  • getHandle - retrieves the handle for a migration session in progress
  • version - displays PPME version on host.
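
Example end-to-end session (a sketch; device names and the handle are hypothetical, and exact option names such as -techType and -handle may differ slightly by PPME version):
# powermig setup -techType hostcopy -src emcpowera -tgt emcpowerb    (returns a handle, e.g. 1)
# powermig sync -handle 1
# powermig query -handle 1
# powermig selectTarget -handle 1
# powermig commit -handle 1
# powermig cleanup -handle 1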

Thursday, February 17, 2011

Solutions Enabler

Solutions Enabler provides the host with the SYMAPI and STORAPI shared libraries for applications, as well as SYMCLI.
Solutions Enabler includes a monitoring option for hosts that need only limited (monitoring) SYMCLI actions.

In client/server mode, the Solutions Enabler server (storsrvd) listens on port 2707 for client connections. This can be changed in the /EMC/SYMAPI/config/daemon_options file as storsrvd:port=nnnn
On client hosts, the netcnfg file can also be changed to reflect the non-default port.
When using asynchronous events in client/server mode, the event daemon on the client side listens on a TCP/IP port. You can configure a specific port for this by adding an entry to /EMC/SYMAPI/config/daemon_options as follows: storevntd:event_listen_port = nnnn

Log: log files are located at /EMC/SYMAPI/log/symapi_yyyymmdd.log
Configure log file retention in /EMC/SYMAPI/config/options as SYMAPI_LOGFILE_RETENTION = Number_of_days

Before installation:
On AIX: default path is /opt/emc and this can't be changed.
On Windows: carefully choose daemons to start during installation.

Installation:
1. Install SE on the client.
2. Install SE on the server.
3. Edit netcnfg on the client to include the hostname and IP of the server.
4. Run stordaemon start storsrvd on the server.
5. Set the environment variables SYMCLI_CONNECT and SYMCLI_CONNECT_TYPE on the client (see the examples below).
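
Example client-side settings (a minimal sketch; the service name, server hostname, and IP are hypothetical - check the comments in netcnfg for the exact field layout on your SE version):
# entry added to /EMC/SYMAPI/config/netcnfg on the client
SYMAPI_SERVER - TCPIP se_server 10.1.1.50 2707 ANY
# point SYMCLI at that entry and verify
export SYMCLI_CONNECT=SYMAPI_SERVER
export SYMCLI_CONNECT_TYPE=REMOTE
symcfg list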

Post-installation on Windows (the same steps apply on UNIX):
1. symlmf add - enable SE features
2. symcfg discover - build SYMAPI database
3. Add c:\program files\EMC\SYMCLI\bin to the MS-DOS path (the PATH environment variable)
4. (optional) use README.options file to create options file to modify default behavior of SE.

Unix Install:
1. ./se7200_install.sh -install  or ./se7200_install.sh -file Response_File_Name
    ./se7200_install.sh -increment - incrementally adds the specified component to an existing installation. (A quick post-install check follows below.)
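
A quick post-install sanity check (a sketch; both commands are read-only):
# symcli -def        # shows SYMCLI version and current environment settings
# stordaemon list    # shows which SE daemons are installed/running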

Monday, February 14, 2011

ESRS IP Solutions

ESRS IP Solutions comes with 2 components:

ESRS Gateway Client - single point of entry and exit for all IP-based remote support and most EMC call home notifications.

ESRS Policy Manager - control remote access to the devices and maintain the audit log. This server is designed to be inaccessible to all third parties, including EMC.

The Gateway Client and the Policy Manager can be deployed for HA, or even on a single server.
Topology one:
Policy Manager (DMZ) connects with the Gateway Client.
Gateway connects with production devices; inbound: https (443), passive ftp (21), smtp (25); outbound: 443/8443 (to EMC), 8090/9443 SSL (to the Policy Manager).
Topology two:
Gateway (DMZ), outbound 443/8443.
Policy Manager in the production network.

The ESRS IP Configuration Tool is used to establish the relationship between the Policy Manager and the Gateway Client; the two are not automatically linked. The Customer Environment Check Tool (CECT) can be run to test both the Gateway and the Policy Manager servers.

Server Requirement:
1. Gateway Client
Hardware:
Processor — One or more processors, minimum 2.2 GHz, must support SSE2 instruction set (required for FIPS compliance)
Free Memory — Minimum 1 GB of RAM, preferred 2 GB of RAM
Comm — Minimum single 10/100 Ethernet adapter (may require dual 10/100 Ethernet depending on customer network configuration and environment), preferred Gigabit Ethernet adapters, optional additional NIC for data backups
Free Disk Space — Minimum 1GB available for installation (preferably on a storage device of 40 GB or larger for operation)
Software:
Operating system — One of the following (US English only supported):
• Windows Server 2003 R1 or R2, 5.2, 32-bit, SP 1 or 2
• Windows Server 2003 R2, 5.2, 64-bit, SP 1 or 2
• Windows Server 2008, 6.0, 32-bit or 64-bit, IIS 7.0 (R1 only), SP 1 or 2.
NOTE: Windows Server 2008 R2 is not supported.
Microsoft .NET Framework Version 2.0 with SP1 or greater. NOTE: .NET Framework 3.5 and 4.0 are not compatible at this time.
Microsoft Visual C++ 2005 SP1 Runtime Library installed
Microsoft Internet Information Services (IIS) installed on system drive
IIS FTP and SMTP services enabled and configured
EMC OnAlert™ and ESRSConfig user accounts created and configured
Remote Desktop installed
Note: IIS startup type: Manual; State: Started.
FTP: Description: ESRS Gateway FTP Site; IP Address: Local IP; Port: 21
Security Accounts: No (Unchecked)
Home Directory: \EMC\ESRS\Gateway\work\ftproot (Read, Write, Log visits, User Isolation)
SMTP: Description: ESRS Gateway SMTP Site; Domain: emc.com; directory: \EMC\ESRS\Gateway\work\mailroot\Drop
User:
1) username: OnAlert; password: EMCCONNECT; User must change password at next logon: No; Password never expires: Yes
2) username: ESRSConfig; password: esrsconfig; User must change password at next logon: No; Password never expires: Yes
Port:
Outbound:
443/8443 (https to EMC and the Policy Manager); 8090 (http to the Policy Manager)
Inbound:
443 (https); 21 (ftp); 5400-5413 (IIS); 25 (SMTP)

2. Policy Manager
Hardware:
Processor — One or more processors, each 2.1 GHz or better
Memory — Minimum 2 GB RAM, preferred 3 GB RAM
Comm — Minimum single 10/100 Ethernet adapter (may require dual 10/100 Ethernet adapters depending on customer network configuration and environment), preferred one Gigabit Ethernet adapter, optional additional NIC for data backups
Free Disk Space — Minimum 2 GB available (preferably on a storage device of 80 GB or larger)
Software:
Operating system — One of the following: (US English only supported)
• Windows XP, SP2 or later
• Windows Server 2003
• Windows Vista
• Windows Server 2008, 6.0, 32-bit or 64-bit (R1 only), SP1 or 2, NOTE: Windows Server 2008 R2 is not supported
Microsoft .NET Framework Version 2.0 with SP1 or greater is required if you are using the Customer Environment Check Tool (CECT) to validate that the PM server is setup correctly to install the PM software.
NOTE: .NET Framework 3.5 and 4.0 are not compatible at this time.
Microsoft Windows Task Scheduler running and unrestricted
Remote Desktop installed
Port:
Outbound:
25(smtp);
Inbound:
8090(http);8443(https)

Port requirements for devices managed by the Gateway Client
Brocade-B - 22, 23 (inbound)
Symmetrix - outbound: https, ftp, smtp to the Gateway; inbound: 9519, 5414, 1300, 1400, 4444, 5555, 7000, 23003, 23004, 23005 from the Gateway.

Things to know:
  • When an alert occurs, the storage system generates an event message file and passes it to the ConnectEMC service to format the file.
  • ConnectEMC then uploads the file to the Gateway by https, ftp, or smtp. The Gateway compresses the file, opens an SSL tunnel to EMC, and transfers it.
  • The Gateway polls the Policy Manager every 2 minutes for new policy; between polls it uses a cached copy of the policy.
  • The Gateway sends a heartbeat to EMC every 30 seconds to report connectivity between the Gateway and its devices. EMC monitors the heartbeat and may trigger service requests if something is wrong.
  • The Gateway queries each device every 60 minutes to see if the device is responding.
  • Gateways in a High Availability configuration are active peers. There is no direct communication between the Gateway Clients within the cluster. In an HA environment, the Policy Manager cannot be co-located on a Gateway server. Gateway servers are synchronized by the EMC enterprise servers during polling cycles. To implement a High Availability Gateway Cluster configuration, your EMC Global Services professional will create the cluster relationship from the Device Management utility that is part of the EMC enterprise.
  • The Gateway Extract utility (GWExt), which comes with the Gateway installer, can collect device serial numbers, product types, and IPs and transport them to the Gateway Client. It can also transfer the file to EMC via the Gateway.
  • Gateway installation involves installing digital certificates (performed only by EMC). A certificate cannot be copied and used on another machine; that is why EMC performs Gateway software upgrades.
  • The remote connections are initiated by an EMC Global Services request and through a pull connection by the ESRS. EMC never initiates a connection to your ESRS IP Client or network.
  • The Customer Environment Check Tool (CECT) is on the ESRS IP Solution CD. It verifies that the server meets all configuration requirements and also tests connectivity from the Gateway to storage. It should be run before installing the Gateway software.
  • The ESRS IP Configuration Tool configures the Gateway to use the Policy Manager and shows active remote support sessions, logs, managed devices, etc. The tool is installed automatically when you install the Gateway using the Provisioning Tool.
  • Change the login banner of the Policy Manager at C:\EMC\ESRS\Policy Manager\Tomcat5\webapps\applications\apm\templates. To log in: http://policymanagerip:8090 or https://policymanagerip:8443
  • The Policy Manager automatically backs up its database via the Windows Task Scheduler under \EMC\ESRS\Policy Manager\hsqldb. Only 31 backups are kept (0-30).

Tuesday, February 8, 2011

Symmetrix BIN file


  • Only used with Symmetrix systems (Enginuity Code)
    .
  • BIN file stands for BINARY file.
    .
  • BIN file holds all information about the Symmetrix configuration
    .
  • One BIN file per system serial number is required.
    .
  • BIN file was used with Symmetrix Gen 1 in 1990 and is still used in 2010 with Symmetrix V-Max systems.
    .
  • BIN file holds information on SRDF configurations, total memory, memory in slots, serial number of the unit, number of directors, type of directors, director flags, engines, engine ports, front end ports, back end ports, drives on the loop, drives on the SCSI bus, number of drives per loop, drive types in the slots, drive speeds, volume addresses, volume types, meta’s, device flags and many more settings.
    .
  • The setup for host connections, whether the environment is Open Systems or Mainframe using FICON, ESCON, GbE, FC, RF, etc., is all defined in the BIN file. Director emulations, drive formats (OSD or CKD), format types, drive speeds, etc. are also all defined in the BIN file.
    .
  • BIN file is required to make a system active. It is created based on customer specifications and installed by EMC during the initial setup.
    .
  • Any ongoing changes in the environment related to hardware upgrades, defining devices, changing flags, etc is all accomplished using BIN file changes.

BIN file changes can be accomplished in 3 ways.
.
  • BIN file change for hardware upgrades is typically performed by EMC only.
    .
  • BIN file changes for other items (devices, directors, flags, meta’s, SRDF configurations, etc.) are performed by the customer through the SYMAPI infrastructure using SymCLI, ECC (now Ionix), or SMC (Symmetrix Management Console). (Edited based on the comments: only some changes now require a traditional BIN file change; typically others are performed using sys calls in the Enginuity environment.)
    .
Solutions Enabler is required on the SymCLI, ECC, and SMC management stations to enable the SYMAPI infrastructure to operate.
.
The VCMDB needs to be set up on the Symmetrix for SymCLI, ECC, and SMC related changes to work.
.
Gatekeeper devices need to be set up on the Symmetrix front-end ports for SymCLI, ECC, and SMC changes to work.
.
For Symmetrix Optimizer to work in your environment, you need DRV devices set up on your Symmetrix. (EDITED based on comments: only required until the DMX platform. Going forward with the DMX3/4 & V-Max platforms it uses sys calls to perform these Optimizer changes.)
    .
    Back in the day
    All and any BIN file changes on the Symmetrix 3.0, Symmetrix 4.0 used to be performed by EMC from the Service Processor. Over the years with introduction of SYMAPI and other layered software products, now seldom is EMC involved in the upgrade process.
    .
    Hardware upgrades
    BIN File changes typically have to be initiated and performed by EMC, again these are the hardware upgrades. If the customer is looking at adding 32GB’s of Cache to the existing DMX-4 system or adding new Front End connectivity or upgrading 1200 drive system to 1920 drives, all these require BIN file changes initiated and performed by EMC. To my understanding the turn around time is just a few days with these changes, as it requires change control and other processes within EMC.
    .
    Customer initiated changes
    Configuration changes around front end ports, creating volumes, creating meta’s, volume flags, host connectivity, configuration flags, SRDF volume configurations, SRDF replication configurations, etc can all be accomplished through the customer end using the SYMAPI infrastructure (with SymCLI or ECC or SMC).
    .
    Enginuity upgrade
    Upgrading the microcode (Enginuity) on a DMX or a V-Max is not a BIN file change, but rather is a code upgrade. Back in the days, many upgrades were performed offline, but in this day and age, all changes are online and accomplished with minimum pains.
    .
    Today
    So EMC has moved quite ahead with the Symmetrix architecture over the past 20 years, but the underlying BIN file change requirements haven’t changed over these 8 generations of Symmetrix.
    Any and all BIN file changes are recommended to be done during quiet times (less IOPS), at scheduled change control times. Again these would include the ones that EMC is performing from a hardware perspective or the customer is performing for device/flag changes.
    .
    The process
    During the process of a BIN file change, the configuration file typically ending with the name *.BIN is loaded to all the frontend directors, backend directors, including the global cache. After the upload, the system is refreshed with this new file in the global cache and the process makes the new configuration changes active. This process of refresh is called IML (Initial Memory Load) and the BIN file is typically called IMPL (Initial Memory Program Load) file.
    A customer initiated BIN file works in a similar way, where the SYMAPI infrastructure that resides on the service processor allows the customer to interface with the Symmetrix to perform these changes. During this process, the scripts verify that the customer configurations are valid and then perform the changes and make the new configuration active.
    To query the Symmetrix system for configuration details, reference the SymCLI guide. Some standard commands to query your system include the symcfg, symcli, symdev, symdisk, symdrv, symevent, symhost, symgate, syminq, and symstat commands; these will help you navigate and find all the necessary details related to your Symmetrix. Similar information in a GUI can be obtained using ECC and SMC. Both will allow the customer to initiate SYMAPI changes.
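    For example, a few read-only queries (a sketch; the Symmetrix ID 1234 is hypothetical, and the output layout varies by Enginuity and Solutions Enabler version):
    symcfg list                # arrays visible to this host
    symcfg list -sid 1234 -v   # detailed configuration of one array
    symdev list -sid 1234      # devices configured on the array
    symdisk list -sid 1234     # physical disks behind the devices
    syminq                     # SCSI inquiry of devices seen by this host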
    Unless something has changed with the V-Max, typically to get an excel based representation of your BIN file, ask your EMC CE.
    .
    Issues

    Additional software like Symmetrix Optimizer also uses the underlying BIN file infrastructure to make changes to the storage array to move hot and cold devices based on the defined criteria. There have been quite a few known cases of Symmetrix Optimizer causing the above phenomenon of multiple BIN files, though many critics will disagree with that statement. (EDITED based on comments: only required until the DMX platform. Going forward with the DMX3/4 & V-Max platforms it uses sys calls to perform these Optimizer changes.)
    .
    NOTE: One piece of advice: never run SYMCLI or ECC scripts for BIN file changes through a VPN-connected desktop or laptop. Always run all necessary SymCLI / SMC / ECC scripts for changes from a server in your local environment. And it is very highly recommended that you never attempt to administer your Symmetrix system with an iPhone or a Blackberry.

    Monday, February 7, 2011

    EMC Open Migrator (AIX)

    • Open Migrator has no requirement for EMC Enginuity or Solutions Enabler.
    • Both source and target volumes remain synchronized until the session is deactivated.
    • Does not support boot disk (root file system device) migration or automatic volume expansion.
    • EMC does not recommend migrating jfs/jfs2 log volumes on AIX.
    Open Migrator stormigrate commands:
    • activate - Activates the session.
    • add - Associates source and target volumes with a created session.
    • cleanup - Cleans up old session data to restore the system to a clean state.
    • compare - Compares source and target volume data in an activated session.
    • complete - Changes the cluster migration state to Complete.
    • copy - Begins the copying process for volume pairs in an activated session.
    • create - Defines a new session and assigns a session name.
    • deactivate - Deactivates an activated session.
    • delete - Deletes the session.
    • list - Lists all currently defined sessions.
    • pause - Pauses a session and temporarily stops the operation.
    • props - Displays properties for defined sessions.
    • query - Queries the status of a given session.
    • restart - Restarts a failed session.
    • resume - Resumes a paused session and begins the operation from where it was paused.
    • show - Displays current configuration data for a given session.
    • tune - Tunes session performance at the kernel level.
    • verify - Verifies the existing state of a given session.

    Installation:
    # installp -ad ./EMCom_install EMCom
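
    To verify the fileset afterwards (a sketch; the fileset name pattern is taken from the install command above):
    # lslpp -l | grep -i EMCom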


    Operation:

    1. Create a device file that contains a list of the source and target volume pairs for the operation. Two columns, one for source and one for target separated by spaces or tabs.
    /dev/rdsk/c16t0d0 /dev/emc/rdsk/ora1_tgt_9/vol4
    /dev/rdsk/c16t0d1 /dev/emc/rdsk/ora2_tgt_9/vol5
    /dev/rdsk/c16t0d2 /dev/emc/ora3_tgt_9/rvol6
    2. Create a session using the stormigrate create command. The session can be used to migrate or compare data. Then add volume pairs to the session. (The post-copy steps are sketched after the commands below.)
    # stormigrate create -session session1 -file volume_pairs_1
    # stormigrate list
    # stormigrate show -session session1
    # stormigrate activate -session session1
    # stormigrate copy -session session1
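
    Once the copy finishes, a minimal sketch of wrapping up the session (same hypothetical session name; the -session option is assumed to apply to each subcommand - confirm completion with stormigrate query first):
    # stormigrate query -session session1
    # stormigrate deactivate -session session1
    # stormigrate cleanup -session session1
    # stormigrate delete -session session1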

    Friday, February 4, 2011

    Symmetrix VMAX SE configuration

    Load Symmetrix Management Console on the Service Processor; Attach the Service Processor to your network.


    The entry-level configuration provides a single VMAX Engine (in the middle) connected directly to eight disk array enclosures (4 top, 4 bottom). The system is expandable by adding a storage bay and populating the bay with additional daisy-chained disk array enclosures. The full configuration is a maximum of 24 disk array enclosures, with 16 in the additional storage bay.



    Enginuity 5874 supports only 1 GigE I/O; Enginuity 5875 supports 10 GigE I/O. Storage cache is 64 GB (32 GB usable, as the write cache is mirrored).
    ◆ One I/O model that provides two 4-port Fibre Channel multimode I/O modules.
    ◆ One I/O model that provides two 2-port FICON single mode I/O modules.
    ◆ Max volume size of 240 GB for open systems and 223 GB for mainframe systems, with 512 hypers per drive.

    Disk Configuration

    Hypervolume - Physical devices must be configured into logical units called hypervolumes. There is a maximum of 1024 hypervolumes on each physical drive.


    Metavolume - A metavolume is two or more Symmetrix system hypervolumes presented to the host as a single addressable device. Metavolume creation also stripes the volume across back-end directors. Metavolumes can contain a maximum of 256 devices and can be up to 60 TB in size. Metavolumes contain a head device, which provides control information, member devices, and a tail device. (A symconfigure sketch follows the bullets below.)

    ◆ Concatenated metavolumes — Organize addresses for the first byte of data at the beginning of the first volume, and continue sequentially to the end of the volume. Data is written sequentially, beginning with the first byte.

    ◆ Striped metavolumes — Organize addresses sequentially, by using addresses that are interleaved between hypervolumes.
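
    A hedged sketch of forming a striped metavolume with symconfigure (the Symmetrix ID and device numbers are hypothetical, and the exact command-file syntax varies by Solutions Enabler version). Preview first, then commit the same command file:
    symconfigure -sid 1234 -cmd "form meta from dev 0100, config=striped; add dev 0101:0103 to meta 0100;" preview
    symconfigure -sid 1234 -cmd "form meta from dev 0100, config=striped; add dev 0101:0103 to meta 0100;" commit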

    Open Migrator / Live Migration
    EMC Open Migrator/Live Migration (LM) provides on-line data migration of Microsoft Windows, UNIX, or Linux volumes between any source and EMC storage. Migration is performed from the production host.
    With Open Migrator/LM, volumes are on-line and available during critical activities like server consolidation, storage upgrades, and performance tuning.
    Open Migrator/LM provides the mirroring and background copy functions that are used to synchronize data images on one or more source and target volumes, LUNs, or LUN partitions.

    Wednesday, February 2, 2011

    alt_disk_copy

    Most system administrators have experienced the following scenario:
    • A failed ML upgrade.
    • It's getting to the end of the day.
    • You cannot fix it.
    • It's too late to get it resolved by third-party support.
    • You need to back out.
    Typically, this situation requires a rootvg restore, whether it uses a tape mksysb restore or a network boot restore. There is no doubt it is a pain! Using the alt_disk_copy method to take a copy of your rootvg only requires the time it takes to do a reboot to recover your rootvg to the pre-upgrade event. This article demonstrates how to implement alt_disk_copy when applying an AIX upgrade and how to recover rootvg. alt_disk_copy can also be used for testing two different versions of AIX. You simply upgrade one disk then boot off it, and when you need to go back to the other version, simply boot off that disk instead. Indeed, the alt_disk_copy is often used to clone the rootvg to a spare disk for regular on-line backup of rootvg. It can also be used as a hardware migration tool of rootvg.
    This article focuses on a typical rootvg two-disk software mirror set-up. However, alt_disk_copy is not restricted to this two-disk set-up; the same principles apply to multiple software mirroring situations.
    The alt_disk utilities consist of the following commands:
    • alt_disk_copy performs disk cloning.
    • alt_rootvg_op performs maintenance operations on the clone rootvg.
    • alt_disk_mksysb performs a mksysb copy.
    This demonstration does not discuss alt_disk_mksysb.
    The filesets required for the alt commands are:
    bos.alt_disk_install.boot_images
    bos.alt_disk_install.rte
    bos.msg.en_US.alt_disk_install.rte
    Overview information
    Because the alt_disk_copy command takes a copy of the current running rootvg to another disk, be sure to have all the file systems mounted that you want cloned across. alt_disk_copy only copies the currently mounted file systems in rootvg. There is no need to stop processes to execute alt_disk_copy; however, this process can take some time, so it is best to do it at lunchtime or in the evening (remember it is taking a running copy). Once the copy has completed, you will be presented with two rootvg volume groups:
    rootvg
    altinst_rootvg
    where altinst_rootvg is the cloned non-active/varied off rootvg. The cloned rootvg has all its logical volumes prefixed with the name 'alt'. The boot list is also changed to boot off altinst_rootvg. AIX likes to do things like this; it assumes you will want to boot off the cloned and not the real rootvg. If the system is now rebooted and when the system comes back up, the original rootvg will become:
    old_rootvg
    The original altinst_rootvg becomes:
    rootvg
    If you decide to reboot off the old_rootvg, when the system comes back up, the old_rootvg becomes:
    rootvg
    The rootvg becomes:
    altinst_rootvg
    Do not worry about the renaming of the original and cloned rootvg. I will demonstrate this shortly.
    With a successful completion of an upgrade, the disk containing the cloned rootvg can then be destroyed using the alt_rootvg_op and mirrored back in. If the upgrade event has gone disastrously, there is no real problem--simply take a snapshot for third-party support, then boot off the good rootvg. For users to log in, it is business as normal.
    When you get a response back from support on the fix, during off-line hours, simply reboot off the cloned rootvg and fix the issue. There is no need to go through the time-consuming tasks of re-applying the upgrade because you already have it on the cloned rootvg. Get the upgrade tested, and if it is all OK, destroy the cloned rootvg and mirror back in.
    Do not use importvg or exportvg on the clone rootvg; use the alt commands instead.
    With the cloned rootvg, you can mount the file systems by waking up the disk using alt_rootvg_op. Do whatever work is required on the cloned file systems (one would assume here to fix a patch or link, or to gather information for third-party support), then put the disk back to sleep, which will also unmount the file systems.
    Excluding directories when cloning
    When cloning, you can exclude certain directories by creating the file /etc/exclude.rootvg. The entries should start with the '^./' characters. The '^' means to search for the string at the beginning of the line, and the './' means relative to the current directory. You are advised to do this so alt_disk_copy does not misinterpret the entry, as it uses grep to search for the string. So make sure you provide the full pathname, prefixed with '^.', for example, to exclude the following directories:
    /home/reps
    /opt/installs
    I could insert into the /etc/exclude.rootvg file:
    ^./home/reps
    ^./opt/installs
    Make sure there are no empty lines after the last entry.
    Let's get cloned!
    Let's now go through a typical clone. Assume you have a software two-disk (hdisk0 and hdisk1) mirror of rootvg, and further assume that you are going to do a ML (or application upgrade, assuming it is installed in rootvg) upgrade on this system. I will demonstrate one way this can be done to clone the disk and after a successful upgrade will bring the disk back into rootvg and re-mirror. I will also demonstrate the actions you can take if the upgrade fails.
    Pre-checks
    Before unmirroring the rootvg, first take some time to ensure you are correctly mirrored and have no stale LV's, because if you do, the unmirrorvg will fail. Of course, you could always do a migratepv to move the missing LV's across if the unmirrorvg fails. A simple method to check that you are mirroring is to issue the command:
    lsvg -l rootvg
    For each row of data output, check that the output of the PPs column is double that of the LPs column.
    Another method to check to see if you are mirroring is to use: lspv -l <hdiskx> and compare the output to make sure you have entries for each LV on both disks.
    Next, issue the bosboot command. I personally always do this prior to either rebooting or disk operations involving rootvg; it is a good habit to have:
    # bosboot -a
    bosboot: Boot image is 35803 512 byte blocks.
    A listing of the disks being used for this demonstration is as follows:
    # lspv
    hdisk0 0041a97b0622ef7f rootvg active
    hdisk1 00452f0b2b1ec84c rootvg active
    Next, unmirror rootvg and take the disk that is going to be used for cloning out of rootvg. This demonstration uses hdisk1 to clone rootvg, so issue the unmirrorvg command:
    # unmirrorvg rootvg hdisk1
    0516-1246 rmlvcopy: If hd5 is the boot logical volume, please run 'chpv -c <disk
    name>'
    as root user to clear the boot record and avoid a potential boot
    off an old boot image that may reside on the disk from which this
    logical volume is moved/removed.
    0516-1804 chvg: The quorum change takes effect immediately.
    0516-1144 unmirrorvg: rootvg successfully unmirrored, user should perform
    bosboot of system to reinitialize boot records. Then, user must modify
    bootlist to just include: hdisk0.
    Next, take hdisk1 out of rootvg in readiness for the cloning:
    # reducevg rootvg hdisk1
    Confirm that the disk is now not assigned to any volume groups:
    # lspv
    hdisk0 0041a97b0622ef7f rootvg active
    hdisk1 00452f0b2b1ec84c None
    Running alt_disk_copy
    Now you are ready to issue the alt_disk_copy. Simply supply hdisk1 as a parameter to the command. The basic format is:
    alt_disk_copy -d <hdisk to clone rootvg>
    To use an exclude list, the basic format is:
    alt_disk_copy -e /etc/exclude.rootvg -d <hdisk to clone rootvg>
    The following output from the alt_disk_copy command has been truncated:
    # alt_disk_copy -d hdisk1
    Calling mkszfile to create new /image.data file.
    Checking disk sizes.
    Creating cloned rootvg volume group and associated logical volumes.
    Creating logical volume alt_hd5
    Creating logical volume alt_hd6
    Creating logical volume alt_hd8
    Creating logical volume alt_hd4
    Creating logical volume alt_hd2
    Creating logical volume alt_hd9var
    Creating logical volume alt_hd3
    Creating logical volume alt_hd1
    Creating logical volume alt_hd10opt
    Creating /alt_inst/ file system.
    Creating /alt_inst/home file system.
    Creating /alt_inst/opt file system.
    Creating /alt_inst/tmp file system.
    …......
    …......
    for backup and restore into the alternate file system...
    Backing-up the rootvg files and restoring them to the
    alternate file system...
    Modifying ODM on cloned disk.
    Building boot image on cloned disk.
    forced unmount of /alt_inst/var
    forced unmount of /alt_inst/usr
    forced unmount of /alt_inst/tmp
    forced unmount of /alt_inst/opt
    forced unmount of /alt_inst/home
    …..
    …..
    Changing logical volume names in volume group descriptor area.
    Fixing LV control blocks...
    Fixing file system superblocks...
    Bootlist is set to the boot disk: hdisk1
    At this stage, you now have a cloned rootvg called altinst_rootvg. Notice in the previous output alt_disk_copy has changed the bootlist to boot off the cloned rootvg, which is now hdisk1.
    # lspv
    hdisk0 0041a97b0622ef7f rootvg active
    hdisk1 00452f0b2b1ec84c altinst_rootvg
    This can be confirmed by issuing the bootlist command:
    # bootlist -m normal -o
    hdisk1 blv=hd5
    At this point the ML upgrade can now be installed. After an ML upgrade you will need to reboot the system. For this demonstration, the ML upgrade will be installed on the real rootvg (that is hdisk0), so you need to change the bootlist now, because you want the system to come up with the new upgrade running.
    # bootlist -m normal hdisk0
    Confirm the change of the bootlist:
    # bootlist -m normal -o
    hdisk0 blv=hd5
    Next, install the ML upgrade, then reboot. After rebooting, the system presents the following rootvg and cloned rootvg. As can be seen, no root volume group has been renamed, because we booted off the real rootvg (hdisk0):
    # lspv
    hdisk0 0041a97b0622ef7f rootvg active
    hdisk1 00452f0b2b1ec84c altinst_rootvg
    Next let's assume everything has gone OK on the upgrade and support users and the systems administrator has signed it off with no issues found. The alt_disk_copy can now be destroyed, and the disk brought back into rootvg for mirroring. Use the alt_rootvg_op command with the X parameter to destroy the cloned rootvg. The basic format is:
    alt_rootvg_op -X < cloned rootvg to destroy>
    # alt_rootvg_op -X altinst_rootvg
    Bootlist is set to the boot disk: hdisk0
    Next, extend rootvg to bring hdisk1, and then mirror up the disk:
    # extendvg -f rootvg hdisk1
    # mirrorvg rootvg hdisk1
    0516-1804 chvg: The quorum change takes effect immediately.
    0516-1126 mirrorvg: rootvg successfully mirrored, user should perform
    bosboot of system to initialize boot records. Then, user must modify
    bootlist to include: hdisk0 hdisk1.
    Change the bootlist to include both disks and run bosboot:
    # bootlist -m normal -o hdisk0 hdisk1
    hdisk0 blv=hd5
    hdisk1
    # bosboot -a
    bosboot: Boot image is 35803 512 byte blocks.
    # bootlist -m normal -o
    hdisk0 blv=hd5
    hdisk1 blv=hd5
    For this demonstration, that's it: mission accomplished. The upgrade is installed with no issues. The system is operational. That's pretty much how alt_disk_copy works if all goes OK. But what if the upgrade fails? What options do you have? Let's look at that next.
    Recovery positions, please
    Let's now assume you have just installed the ML upgrade and rebooted, and issues have been found with the operational running of AIX. Remember, you currently have the disks in the following state:
    # lspv
    hdisk0 0041a97b0622ef7f rootvg active
    hdisk1 00452f0b2b1ec84c altinst_rootvg
    At this point, a snapshot should be taken of the running system, in readiness for third-party support, for the call that you will undoubtedly log. Taking stock of the current situation, you have:
    • rootvg: with post-upgrade issues.
    • altinst_rootvg : with good copy pre-upgrade.
    Take me back
    To get back to the pre-upgrade, simply change the bootlist to boot off the (altinst_rootvg) hdisk1, then reboot. It's that simple:
    # bootlist -m normal -o hdisk1
    hdisk1 blv=hd5
    # bootlist -m normal -o
    hdisk1 blv=hd5
    # shutdown -Fr
    After the reboot, you will be presented with the following rootvg disks:
    # lspv
    hdisk0 0041a97b0622ef7f old_rootvg
    hdisk1 00452f0b2b1ec84c rootvg active
    Next, issue a bosboot and confirm the bootlist:
    # bosboot -a
    bosboot: Boot image is 35803 512 byte blocks.
    # bootlist -m normal -o
    hdisk1 blv=hd5
    The system is now back to the pre-upgrade state.
    Post upgrade fixing
    At a convenient time schedule that is agreed-upon with the end users, and with information provided by third-party support, you can then boot off the ML failed upgraded disk (hdisk0) and apply a fix that might solve the issue, so change the bootlist to boot off (old_rootvg) hdisk0 and reboot:
    # bootlist -m normal -o hdisk0
    # shutdown -Fr
    After the reboot, in readiness to apply the fix, you will be presented with the following rootvg disks:
    # lspv
    hdisk0 0041a97b0622ef7f rootvg active
    hdisk1 00452f0b2b1ec84c altinst_rootvg
    Next, apply the fix (or carry out the instructions on how to fix it), and assume the system is now operational again.
    After the system has been tested and signed off, bring hdisk1 back in using the commands described earlier:
    # alt_rootvg_op -X altinst_rootvg
    Bootlist is set to the boot disk: hdisk0
    # extendvg -f rootvg hdisk1
    # mirrorvg rootvg hdisk1
    # bootlist -m normal -o hdisk0 hdisk1
    hdisk0 blv=hd5
    hdisk1
    # bosboot -a
    bosboot: Boot image is 35803 512 byte blocks.
    # bootlist -m normal -o
    hdisk0 blv=hd5
    hdisk1 blv=hd5
    # lspv
    hdisk0 0041a97b0622ef7f rootvg active
    hdisk1 00452f0b2b1ec84c rootvg active
    Waking the disk up
    Within a cloned rootvg environment, you can wake up the cloned rootvg to be active. All cloned file systems from the cloned rootvg will be mounted. It is quite useful because you have a good running system, but at the same time mount the file systems from the cloned rootvg for further investigation or file modification. When a cloned rootvg is woken up, it is renamed to:
    altinst_rootvg
    Do not issue a reboot while the cloned rootvg filesystems are still mounted, because unexpected results can occur. You can also rename a cloned rootvg, which is useful when you have more than one cloned rootvg.
    Assume you have the disks in the following state:
    # lspv
    hdisk0 0041a97b0622ef7f old_rootvg
    hdisk1 00452f0b2b1ec84c rootvg active
    To wake up a disk, the basic format is:
    alt_rootvg_op -W -d < hdisk>
    Let's now wake up old_rootvg (hdisk0):
    # alt_rootvg_op -W -d hdisk0
    Waking up old_rootvg volume group ...
    Checking the state of the disks, you can see the old_rootvg has been renamed to altinst_rootvg and is now active.
    # lspv
    hdisk0 0041a97b0622ef7f altinst_rootvg active
    hdisk1 00452f0b2b1ec84c rootvg active
    The cloned file systems have been mounted, with the prefix of /alt_inst:
    # df -m
    Filesystem MB blocks Free %Used Iused %Iused Mounted on
    /dev/hd4 128.00 102.31 21% 2659 11% /
    /dev/hd2 1968.00 111.64 95% 40407 58% /usr
    /dev/hd9var 112.00 77.82 31% 485 3% /var
    /dev/hd3 96.00 69.88 28% 330 3% /tmp
    /dev/hd1 208.00 118.27 44% 1987 7% /home
    /proc - - - - - /proc
    /dev/hd10opt 1712.00 1445.83 16% 6984 3% /opt
    /dev/alt_hd4 128.00 102.16 21% 2645 11% /alt_inst
    /dev/alt_hd1 208.00 33.64 84% 1987 21% /alt_inst/home
    /dev/alt_hd10opt 1712.00 1445.77 16% 6984 3% /alt_inst/opt
    /dev/alt_hd3 96.00 72.38 25% 335 2% /alt_inst/tmp
    /dev/alt_hd2 1968.00 100.32 95% 40407 59% /alt_inst/usr
    /dev/alt_hd9var 112.00 77.53 31% 477 3% /alt_inst/var
    At this point file modification or further investigation can be carried out on the cloned rootvg. Now you can access the cloned file systems. Once these tasks have been carried out, put the cloned rootvg to sleep and in the same operation issue a bosboot on that disk. The basic format of the command is:
    alt_rootvg_op -S -t <hdisk>
    Let's now put the altinst_rootvg to sleep:
    # alt_rootvg_op -S -t hdisk0
    Putting volume group altinst_rootvg to sleep ...
    Building boot image on cloned disk.
    forced unmount of /alt_inst/var
    forced unmount of /alt_inst/usr
    forced unmount of /alt_inst/tmp
    forced unmount of /alt_inst/opt
    forced unmount of /alt_inst/home
    forced unmount of /alt_inst
    forced unmount of /alt_inst
    Fixing LV control blocks...
    Fixing file system superblocks...
    The current state of the disks is now:
    # lspv
    hdisk0 0041a97b0622ef7f altinst_rootvg
    hdisk1 00452f0b2b1ec84c rootvg active
    From the above demonstration, you can see the cloned rootvg name stayed the same: altinst_rootvg.
    It is sometimes good to go back to the original state of the disks to save confusion, especially if you have more than one cloned disk. So rename altinst_rootvg back to old_rootvg. The basic format is:
    alt_rootvg_op -v <new cloned rootvg name> -d <hdisk>
    So in this example, you would issue:
    # alt_rootvg_op -v old_rootvg -d hdisk0
    # lspv
    hdisk0 0041a97b0622ef7f old_rootvg
    hdisk1 00452f0b2b1ec84c rootvg active
    Of course, you could rename the cloned rootvg to something more meaningful, if so desired.
    # alt_rootvg_op -v bad_rootvg -d hdisk0
    bash-2.05a# lspv
    hdisk0 0041a97b0622ef7f bad_rootvg
    hdisk1 00452f0b2b1ec84c rootvg active
    You cannot rename a cloned rootvg to altinst_rootvg; it is a reserved name.
    From this point, the system is now operational or not, depending on the success of the fix, using the commands described earlier.
    If the fix worked on (old_rootvg) hdisk0, then run with the new ML version.
    Confirm that the disk will boot off hdisk0:
    # bootlist -m normal -o hdisk0
    Reboot:
    # shutdown -Fr
    Destroy the newly cloned disk (we rebooted off old_rootvg; it now becomes altinst_rootvg) hdisk1:
    # alt_rootvg_op -X altinst_rootvg
    Bring in hdisk1 into rootvg for mirroring:
    # extendvg -f rootvg hdisk1
    # mirrorvg rootvg hdisk1
    # bosboot -a
    # bootlist -m normal -o hdisk0 hdisk1
    If the fix did not work, then stay at the same ML version, and fix another day:
    Confirm that the disk will boot off hdisk1:
    # bootlist -m normal -o hdisk1
    Destroy cloned disk (old_rootvg) hdisk0:
    # alt_rootvg_op -X old_rootvg
    Bring in hdisk0 into rootvg for mirroring:
    # extendvg -f rootvg hdisk0
    # mirrorvg rootvg hdisk0
    # bosboot -a
    # bootlist -m normal -o hdisk0 hdisk1
    Conclusion
    This article showed that using the alt commands is a quick way to recover rootvg if events go wrong on an AIX upgrade, and how to mount the cloned rootvg filesystems on a running system. The alt commands can also provide a path for migrating a rootvg disk to other hardware. They are also very useful for having two different versions of AIX installed for testing migration procedures.


    Mount a ISO image on AIX 5.3

    The following instructions were cribbed from the Howto mount an ISO image in AIX UNIX


    1. Create a logical volume which is large enough to hold the ISO image: # /usr/sbin/mklv -y testiso -t jfs rootvg 1

      where:
      o testiso is the name of the logical volume,
      o rootvg is the name of the volume group in which the logical volume is to be created, and
      o 1 is the size of the logical volume in logical partitions.
      Use the command lsvg rootvg to display (among many other things) the (logical and physical) partition size which will be used in rootvg. See here for more information about the AIX V5.3 mklv command.

    2. Define a read-only JFS filesystem on the logical volume: # /usr/sbin/crfs -v jfs -d testiso -m /testiso -An -pro -tn -a frag=4096 -a nbpi=4096 -a ag=8
      Based on the parameters chosen, the new /testiso JFS file system
      is limited to a maximum size of 134217728 (512 byte blocks)
      New File System size is 65536

      where:
      o testiso is the name of the logical volume created in the first step and
      o /testiso is the mount point of the filesystem.
      See here for more information about the AIX V5.3 crfs command.

    3. Copy the contents of the ISO image to the logical volume:
      # /usr/bin/dd if=/tmp/bestprac.iso of=/dev/rtestiso bs=1m
      0+1 records in.
      0+1 records out.
       
      where:
      o testiso is the name of the logical volume created in the first step, prefixed with the character r, and
      o /tmp/bestprac.iso is the ISO image file to be examined.
      It may be necessary to use a block size (bs=1m) of less than a megabyte if the logical partition size is less than a megabyte. See here for more information about the AIX V5.3 dd command.

    4. Change the filesystem type to cdrfs:
      # chfs -a vfs=cdrfs /testiso

      Note
      It is possible to mount the filesystem by overriding the filesystem type with mount -v cdrfs /testiso, but other problems occur when the time comes to remove the filesystem:
       # rmfs -ir /testiso
      rmfs: Warning, all data contained on /testiso will be destroyed.
      rmfs: Remove filesystem: /testiso? y(es) n(o)? y
      rmfs: 0506-933 /dev/testiso is not recognized as a JFS filesystem.
      rmfs: 0506-936 Cannot read superblock on /testiso.
      rmfs: Unable to clear superblock on /testiso
      rmlv: Logical volume testiso is removed.
       
    5. Mount the filesystem:
      # /usr/sbin/mount /testiso

      The filesystem contains the files which are in the ISO image file /tmp/bestprac.iso:
      # cd /testiso
      /testiso # ls
      bestprac.tar.gz
      /testiso #
    6. When the filesystem is no longer needed, unmount and remove it and then remove the testiso logical volume:
      /testiso # cd
      # /usr/sbin/umount /testiso
      # /usr/sbin/rmfs -ir /testiso
      rmfs: Warning, all data contained on /testiso will be destroyed.
      rmfs: Remove filesystem: /testiso? y(es) n(o)? y
      # /usr/sbin/rmlv testiso
      Warning, all data contained on logical volume testiso will be destroyed.
      rmlv: Do you wish to continue? y(es) n(o)? y
      rmlv: Logical volume testiso is removed.

      where:
      o testiso is the name of the logical volume created in the first step and
      o /testiso is the mount point of the filesystem.
      Be very careful with rmfs. Like the UNIX rm command, rmfs without the -i flag does not prompt for confirmation. It immediately destroys the specified filesystem!
      Notes:
    • The rmfs -r flag will cause the mount point directory to be removed only if the directory is empty.
    • The rmfs command will remove the underlying logical volume when removing a JFS or JFS2 filesystem, but will not do so when removing a CDRFS filesystem.

    TSM DB and LOG

    Increasing the size of the database
    You can increase the size of the database by creating directories and adding them to the database.
    The server can use all the space that is available to the drives or file systems where the database directories are located. To ensure that database space is always available, monitor the space in use by the server and the file systems where the directories are located. The maximum supported size of the database is 2 TB.
    The QUERY DB command, shown in Monitoring the database and recovery log, displays the number of free pages in the table space and the free space available to the database. If the number of free pages is low and there is a lot of free space available, the database allocates additional space. However, if free space is low, it might not be possible to expand the database.
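    For example (a sketch; the exact fields displayed vary by TSM server level):
    query db format=detailed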
    To increase the size of the database, take the following steps:
    1. Create one or more database directories. Locate the directories on separate drives or file systems.
    2. Issue the EXTEND DBSPACE command to add one or more directories to the database. The directories must be accessible to the user ID of the database manager. Locate the directories on different drives or file systems.
    For example, to add two directories to the storage space for the database, issue the following command:
    extend dbspace /tsmdb005,/tsmdb006

    Reducing the size of the database
    If a significant amount of data has been deleted from the database, consider reducing the database size.
    1. Create a file containing a list of directories that represent the new directories. For example, dbdirs.txt.
    2. Run a full database backup. For example:
    backup db devclass=tapeclass type=full
    3. Halt the server.
    4. Remove the database instance. 
    dsmserv removedb TSMDB1
    5. Restore the database specifying the file containing the directories to be used. For example:
    dsmserv restore db todate=today on=dbdirs.txt
    6. Restart the server.

    Increasing the size of the active log
    If the log is running out of space, the current transaction is rolled back, and the server issues an error message and halts. You cannot restart the server until the active log size is increased.
    To increase the size of the active log while the server is halted, complete the following steps: 
    1. Issue the DSMSERV DISPLAY LOG offline utility to display the size of the active log (a sample invocation is sketched at the end of this section).
    2. Ensure that the location for the active log has enough space for the increased log size. If a log mirror exists, its location must also have enough space for the increased log size. 
    3. Halt the server. 
    4. In the dsmserv.opt file, update the ACTIVELOGSIZE server option to the new maximum size of the active log, in megabytes. For example, to change the active log to its maximum size of 128 GB, enter the following server option: 
    activelogsize 131072 
    5. If you will use a new active log directory, update the directory name specified in the ACTIVELOGDIR server option. The new directory must be empty and must be accessible to the user ID of the database manager. 
    6. Restart the server.
    Log files of 512 MB are automatically defined until the size specified in the ACTIVELOGSIZE server option is reached. If a log mirror exists, log files are also defined automatically.
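
    For example, step 1 above can be run offline as the server instance user (a sketch; the utility is run from the server instance directory):
    dsmserv display log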

    TSM V6 online reorgs

    TSM V6 comes with a great feature - online reorg. However, it's disabled by default. If the database is growing every day, you might want to check whether online reorg is enabled.

    The following actions can be performed to modify the DB2 policy definitions to allow for automated, online reorgs of the table indexes:
    1. Log in to the host machine where the TSM Server is running, as the DB2 instance owner (e.g. 'tsminst1')
    2. Issue the following command to connect to the TSM database:
    $ db2 connect to tsmdb1
    3. Issue the following command to export the existing DB2 automatic maintenance definitions to a file named AutoReorg.xml:
    $ db2 "call sysproc.automaint_get_policyfile('AUTO_REORG','AutoReorg.xml')"
    4. The file which contains the exported data is placed by the stored procedure in the following location:
    <instance dir>/sqllib/tmp/AutoReorg.xml
    5. Make a copy of the AutoReorg.xml file (for example, name the copy AutoReorgNew.xml); the original file can be used later in the event it becomes necessary to revert back to the original configuration.
    6. Modify the AutoReorgNew.xml file as follows:
    - change the indexReorgMode value from "Offline" to "Online"
    - change the useSystemTempTableSpace value from "false" to "true"
    7. Issue the following command to activate the new DB2 automatic maintenance definitions:
    $ db2 "call sysproc.automaint_set_policyfile('AUTO_REORG','AutoReorgNew.xml')"Automatic, online reorgs of the database table indexes can now be performed by TSM. The following command can be issued periodically to verify that automatic reorgs of the indexes are occurring as expected:
    $ db2 list history reorg all for db tsmdb1
    NOTE: Should it be necessary to disable automated reorgs of the table indexes, issue the following command to update the DB2 automatic maintenance definitions back to their original values:
    $ db2 "call sysproc.automaint_set_policyfile('AUTO_REORG','AutoReorg.xml')"
    NOTE: The procedures listed above are specific to UNIX hosts.
    These procedures may also apply to DB2 running on Windows hosts with the following distinctions:
    - all DB2 commands must be run in a DB2 command window
    - the policy definition file on Windows is named DB2AutoReorgPolicy.xml. Substitute this policy file name in place of AutoReorg.xml.
    It is recommended that the enabling/disabling of automatic index reorgs be performed at a time when the TSM Server is quiesced.

    TSM SQL

    Some SQL for TSM to help with your daily TSM admin jobs


    Finding the tapes needed to restore a node
    select distinct node_name,volume_name,stgpool_name from volumeusage where node_name='xxxxx'

    Show all the tapes >20% full
    select 'MARKER', volume_name,pct_utilized from volumes where pct_utilized >20 and (volume_name like '5%' or volume_name like '6%')


    To check yesterday's backup
    select Entity,Successful,Bytes,Examined,Affected,Failed from summary where activity='BACKUP' and cast((current_timestamp-start_time)hours as decimal(8,0)) < 24 order by Entity

    To check a node the filespace name and volume
    Run tt nodename

    To check last 24 hours severity "E" message
    Run seve

    How many client nodes are registered by domain
    select domain_name,num_nodes from domains
    How many client nodes are registered by platform?
    select platform_name,count(*)as "Number of Nodes" from nodes group by platform_name

    Query all tapes for a node
    select distinct node_name,volume_name,stgpool_name from volumeusage where node_name='PWAT1UXSD01'

    Query all tapes for a file
    select volume_name,node_name,filespace_name,file_name from contents where node_name='nwsrv1' and filespace_name='NWSRV1\DATA:' and file_name='\users\jelliott\ofvgoarc\*'


    How many tapes are used by each node
    select count(DISTINCT volume_name) as volumes, node_name, stgpool_name from volumeusage group by node_name, stgpool_name order by volumes desc

    How much data is stored for each filespace? - run tt nodename
    select node_name,filespace_name, physical_mb,stgpool_name FROM occupancy where node_name='ECO36934'  and type='Bkup'

    How many volumes does a storage pool use?
    select stgpool_name,count(*) as count from volumes group by stgpool_name

    How many volumes are going offsite? (have to run before tape eject)
    select volume_name,stgpool_name,access from volumes where (stgpool_name='offsite_pool_name') and (access='offsite')

    Display the number of nodes on each tape
    select volume_name, stgpool_name, count(distinct node_name) as nodes from volumeusage group by volume_name, stgpool_name order by 3 desc


    Offsite tapes needed to restore a node
    select distinct volume_name from volumeusage where node_name='xxxxxx'

    Show last 24 hours warning and error log
    SELECT * From ACTLOG Where SEVERITY In ('W','E') And (DAYS(CURRENT_TIMESTAMP)- DAYS(DATE_TIME)) <1

    What tape is used today
    select volume_name,last_write_date from volumes order by last_write_date desc

    Display tape location by home_element
    select home_element, volume_name from libvolumes order by home_element

    Database backup rate and time for last 30 days
    select activity, ((bytes/1048576)/cast ((end_time-start_time) seconds as decimal(18,13))*3600) "MB/Hr", start_time, end_time from summary where activity='FULL_DBBACKUP' and days(end_time) - days(start_time)=0

    q assoc * *   -policy domain name, schedule name, associate nodes

    disable session (client/server/admin/all)

    Customize a schedule to run Mon, Wed, Fri
    DEF SCH domain_name schedule_name T=C ACT=I STARTT=22:00:00 DUR=2 DAY=Monday,Wednesday,Friday

    All tapes for a node
    select distinct node_name,volume_name,stgpool_name from volumeusage where node_name='xxxxx'

    What tapes were used today
    select volume_name,last_write_date from volumes order by last_write_date

    Which volume contain the file
    select volume_name,node_name,filespace_name,file_name from contents where node_name='nodename' and filespace_name='filespace' and file_name='filename'

    Last 24-hours backup
    SELECT activity , CAST(sum(bytes/1024/1024/1024) AS decimal(8,2)) AS "Total GB" FROM summary WHERE activity='BACKUP' AND start_time>=current_timestamp - 24 hours GROUP BY activity

    Data backup by node
    SELECT entity as "Node", CAST(sum(bytes/1024/1024) AS decimal(8,2)) AS "Total MB" FROM summary WHERE activity='BACKUP' AND start_time>=current_timestamp - 24 hours GROUP BY entity

    select * from occupancy

    select sum(physical_mb)/1024 from occupancy where stgpool_name <> '3583COPY'

    select sum(physical_mb)/1024 from occupancy where stgpool_name <> '3584COPYN'

    select sum(num_files) from occupancy

    select sum(est_capacity_mb)/1024 from volumes where devclass_name <> 'DISK'

    select node_name,sum(physical_mb)/1024 from occupancy group by node_name > /tmp/nodes19.03.2006

    select node_name,sum(physical_mb)/1024 from occupancy group by node_name order by 2 desc

    select node_name,sum(num_files) from occupancy group by node_name order by 2 desc

    select * from libvolumes where status='Private' and last_use!='Data'

    select NODENAME as "CLIENT",DATE_TIME,MESSAGE as "ERROR MESSAGES in the LAST 24H" from actlog  where (current_timestamp-date_time)hours between '1' AND '24' and (SEVERITY='E' or SEVERITY='W' ) and MESSAGE not like 'ANR2034E%' and MESSAGE not like 'ANR8214E%' and MESSAGE not like 'ANR8213W%' and MESSAGE not like 'ANR0482W%' and MESSAGE not like 'ANR0480W%' and MESSAGE not like 'ANR2335W%' and MESSAGE not like 'ANE4987E%' and MESSAGE not like 'ANE4005E%'

    select ENTITY as "CLIENT_NAME",sum(BYTES)/1048576 as "MB SENT" from summary where ENTITY in (select node_name from nodes)  and START_TIME >='08/01/2010 07:00:00.000000' and START_TIME<='08/02/2010 23:00:00.000000' group by ENTITY

    select node_name,sum(physical_mb)/1024 from occupancy where stgpool_name <> '3584COPYN' group by node_name order by 2 desc

    select volume_name, read_errors,write_errors from volumes where read_errors>0 or write_errors>0

    SELECT CAST((100 - (CAST(MAX_REDUCTION_MB AS FLOAT) * 256 ) /(CAST(USABLE_PAGES AS FLOAT) - CAST(USED_PAGES AS FLOAT) ) * 100) AS DECIMAL(4,2)) AS PERCENT_FRAG FROM DB  

    select count(*) from contents where volume_name='<vol_name>'

    select volume_name from volumes where stgpool_name='3583COPY'

    select tabname from tables

    select colname from columns

    select colname from columns where tabname='<tabname>' and colname='<colname>'

    find $FLOPPY_DIR -name "WRITE.TOK" | awk '/0624/ {print $0}' | grep -v save

    select node_name from nodes where DOMAIN_NAME='PODO_SQL'

    select node_name,sum(physical_mb)/1024 from occupancy where stgpool_name <> '3583COPY' group by node_name order by 2 desc > /tmp/nodes_occ_21feb07

    select node_name as '"CLIENT"',status as '"STATUS"',DOMAIN_NAME,SCHEDULE_NAME from events where node_name in (select node_name from nodes) and SCHEDULED_START >='$YESTRDAY $START'

    select node_name,sum((capacity+1)*(PCT_UTIL+1)/100) from filespaces group by node_name

     select ENTITY as "CLIENT_NAME" ,sum(BYTES)/1048576 as "MB_SENT" from summary where ENTITY in (select node_name from nodes) and START_TIME >= '2006-01-24' group by ENTITY

    select volume_name,stgpool_name,est_capacity_mb,pct_utilized,status,access,location,write_errors,read_errors,times_mounted,devclass_name from volumes

    select volume_name,library_name from libvolumes order by volume_name

    select count(*) from sessions where session_type='Node'

    select volume_name,library_name from libvolumes order by volume_name

    select volume_name from volumes where pct_reclaim >70

    select volume_name,status,last_use from libvolumes where status='Private'

    Files larger than 10 GB (file_size is in bytes) for nodes in domain PODO_NT
    select node_name,file_name,file_size from contents where node_name in ( select node_name from nodes where domain_name='PODO_NT') and file_size > 10000000000

    select cast(float(sum(bytes))/1024/1024/1024 as decimal(14,2)) as TGBU from summary where (end_time between '2010-09-01' and '2010-09-15') and activity='BACKUP'

    client TSM level
    select node_name,cast(client_version as char)||'.'||cast(client_release as char)||'.'||cast(client_level as char)||'.'||cast(client_sublevel as char(2)) as "Client_Version" from nodes

    select node_name,cast(client_version as char)||'.'||cast(client_release as char)||'.'||cast(client_level as char)||'.'||cast(client_sublevel as char(2)) as "Client_Version" from nodes where node_name='ECO49932'

    Tapes used by TIER-5_TPE in 2009 (by last read date)
    select volume_name,stgpool_name,EST_CAPACITY_MB,PCT_UTILIZED,LAST_READ_DATE from volumes where stgpool_name='TIER-5_TPE' and (year(current_timestamp) - year(LAST_READ_DATE))=1 order by LAST_READ_DATE



    Some DB2 command

    get db2 configuration
    db2 get db cfg for tsmdb1

    check the history database backup
    db2 list history backup all for db tsmdb1

    check the history of reorg
    db2 list history reorg all for db tsmdb1

    check the history of archive log
    db2 list history archive log all for db tsmdb1

    db2 list applications

    db2 list active databases

    db2 list tables for all
    db2 list tables for schema tsmdb1    (tsm tables)

    db2 list utilities show detail   (can monitor online database backup, if a process slow down server, can elect to throttle the utility)

    db2 GET ALERT CFG FOR DATABASES(DBM)

    db2 GET AUTHORIZATIONS