Musings of an IT Implementor

Oracle 11g Methods of Performance Tuning SQL


More than 90% of problems following an upgrade are performance issues.

Source: Oracle Corp


Oracle tools for helping you tune the database:
  • Statspack - FREE - (See note 394937.1)
  • AWR - Diagnostics Pack license required.
  • Real Application Testing (features: SQL Performance Analyser & Database Replay) - separately licensed option.

Since 11g, instead of storing outlines, fixing stats, using SQL hints, or using the Rule Based Optimiser (now desupported), Oracle recommends using the SQL Plan Management tool along with SQL Profiling.

See spm_white_paper_ow07.pdf for more information.
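As a minimal illustration (not taken from the white paper, and assuming the relevant licenses and privileges), you can have the optimiser capture plan baselines automatically, or load plans for a specific statement from the cursor cache:

SQL> alter system set optimizer_capture_sql_plan_baselines = TRUE;

SQL> DECLARE
   n PLS_INTEGER;
BEGIN
   -- '&sql_id' is a placeholder for the SQL_ID of the statement to baseline.
   n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sql_id');
END;
/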

Oracle 11g Transparent Data Encryption (TDE) Tablespace Conversion Script

After needing to set up an Oracle tablespace in an 11g database so that it was encrypted with Transparent Data Encryption (TDE), I devised a scripted method of doing this quickly and simply.

The script is hardcoded to work only on a tablespace called "USERS_TDE" at the moment, but you can find and replace this with your specific tablespace name(s). I've even included commented-out code showing how to process more than one tablespace at a time.

You should also change the disk locations within the script for the export/import dump files and the wallet file.
As per the Oracle docs, the wallet file is stored outside of the Oracle home in /etc/ORACLE.

The basic process is:
  • Create wallet dir.
  • Adjust sqlnet.ora.
  • Create dpump dir.
  • Export tablespace data.
  • Export tablespace DDL metadata.
  • Adjust tablespace DDL to include TDE commands.
  • Enable TDE encryption at database level (sets the enc key).
  • Offline & drop existing tablespace.
  • Create TDE version of tablespace.
  • Import tablespace data.

#!/bin/bash
#############################################################################
# Author: D.Griffiths
# Script: oracle_TDE_encrypt.sh
# Params: None.
# RunAs:  Oracle home owner.
# Desc:   Script to automatically export a tablespace called USERS_TDE
#         (change as you wish), drop it, re-create it ENCRYPTED with TDE
#         then re-import the data objects back into the tablespace.
#
# HISTORY ###################################################################
# v1.0, Created.
#############################################################################

# Variable definitions.
wallet_dir="/etc/ORACLE/WALLETS/$ORACLE_SID"
walletpw=""
dp_dir="/data/backup/tblspace_export_pre_TDE/$ORACLE_SID" # Datapump export/import dir.

# Check required variables.
if [ -z "$ORACLE_SID" -o `grep -c "^$ORACLE_SID:" /etc/oratab` -ne 1 ] ; then
   echo "ERROR: Invalid ORACLE_SID."
   exit 1;
fi

if [ -z "$ORACLE_HOME" ] ; then
   export ORACLE_HOME="`awk '/^'$ORACLE_SID':/ { split($0,a,":"); print a[2] }' /etc/oratab`"

   if [ -z "$ORACLE_HOME" ] ; then
      echo "ERROR: Failed to set ORACLE_HOME."
      exit 1;
   fi
fi

# Show setup.
echo "------------------------------------------"
echo "Starting TDE setup script."
date
echo "The following has been defined:"
echo " ORACLE_SID : $ORACLE_SID"
echo " ORACLE_HOME : $ORACLE_HOME"
echo " WALLET DIR : $wallet_dir"
echo " DATAPUMP DIR: $dp_dir"
echo "------------------------------------------"
echo ""

# Check if DB already has wallet/encryption.
retval="`sqlplus -s \"/ as sysdba\"<<EOF
set head off
set newpage none
set feedback off
set trimout on
set tab on
select DECODE(count(status),0,'NONE','SOME') from v\\\$wallet;
EOF`"

if [ "$retval" != "NONE" ] ; then
   echo "ERROR: Encryption may already be enabled on this database."
   exit 1;
 else
   echo "Encryption is not already enabled on this database."
fi

# Check for existence of Wallet Dir.
if [ ! -d $wallet_dir ] ; then
   mkdir -p $wallet_dir
   if [ $? -ne 0 ] ; then
      echo "ERROR: Failed to create wallet dir $wallet_dir"
      exit 1;
   fi
   

   echo "WALLET directory created: $wallet_dir"
 else
   echo "WARNING: Wallet directory already exists:"
   echo "$wallet_dir"
   echo -n "Continue to use this? {Y|N} : "
   read reply
   if [ "$reply" != "Y" -a "$reply" != "y" ] ; then
      exit 1;
   fi
 

   echo "Using wallet dir: $wallet_dir"
fi
 

if [ ! -d $dp_dir ]; then
   mkdir -p $dp_dir;
   if [ $? -ne 0 ] ; then
      echo "ERROR: Failed to create datapump dir $dp_dir"
      exit 1;
   fi
 else
   echo "ERROR: Datapump dir already exists, files can not be overwritten."
   echo "Check: $dp_dir"
   exit 1;
fi

# Check sqlnet.ora file.
if [ "`grep -c '^ENCRYPTION_WALLET_LOCATION' $ORACLE_HOME/network/admin/sqlnet.ora`" -ne 1 ] ; then
   echo "Adding ENCRYPTION_WALLET_LOCATION to sqlnet.ora in $ORACLE_HOME/network/admin/sqlnet.ora"
   cat <<EOF >>$ORACLE_HOME/network/admin/sqlnet.ora
# Added WALLET location for TDE see 1228046.1.
ENCRYPTION_WALLET_LOCATION =
(SOURCE = (METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = ${wallet_dir}/)))
EOF

   if [ $? -ne 0 ] ; then
      echo "ERROR: Failed to update sqlnet.ora."
      exit 1;
   fi
 else
   echo "Section ENCRYPTION_WALLET_LOCATION already exists in sqlnet.ora"
fi

# Get list of unencrypted non-system tablespaces
list_tblspace="`sqlplus -s \"/ as sysdba\"<<EOF
ttitle off
btitle off
set newpage none
set feedback off
set head off
set wrap off
set trim on
set tab on
set linesize 30
set pagesize 1000
-- select rpad(UPPER(t.name),30)
-- from v$tablespace t
-- where t.name not in ('SYSTEM','SYSAUX','UNDO','TEMP')
-- and t.ts# not in (select ts# from v$encrypted_tablespaces);
select rpad('USERS_TDE',30) from dual;
EOF`"

# Confirm tablespaces.
echo ""
echo "The following non-system unencrypted tablespaces have been found:"
echo "$list_tblspace"
echo -n "Setup these for TDE? {Y|N} : "
read reply

if [ "$reply" != "Y" -a "$reply" != "y" ] ; then
   exit 1;
fi

# Check the DPDUMP_BACKUP Oracle directory doesn't already exist.
retval="`sqlplus -s \"/ as sysdba\"<<EOF
set head off
set newpage none
set feedback off
set trimout on
set tab on
select trim(directory_path) from dba_directories
where directory_name='DPDUMP_BACKUP';
EOF`"

if [ -n "$retval" ] ; then
   echo "WARNING: Oracle directory DPDUMP_BACKUP already exists."
   echo "It is assigned to path: $retval"
   echo "OK to recreate it to $dp_dir"
   echo -n "Enter {Y|N} : "
   read reply
   if [ "$reply" != "Y" -a "$reply" != "y" ] ; then
      exit 1;
   fi
fi

# Create the datapump export/import dir in oracle.
sqlplus -s "/ as sysdba"<<EOF
create or replace directory dpdump_backup as '$dp_dir';
EOF

echo "------------------------------------------"
echo "Exporting tablespaces and generating DDL."

# Export the tablespaces to the backup directory location and generate DDL.
echo "$list_tblspace" | while read tablespace
do
   expdp userid="'/ as sysdba'" dumpfile=$tablespace.dmp directory=dpdump_backup logfile=${tablespace}_exp.log tablespaces="$tablespace"

   # Generate the tablespace creation DDL to the backup directory location.
   sqlplus -s "/ as sysdba"<< EOF
SET LONG 10000
set head off
set newpage none
set trimspool on
spool $dp_dir/cre_$tablespace.sql
SELECT dbms_metadata.get_ddl('TABLESPACE','$tablespace') FROM DUAL;
SPOOL OFF
EOF

   # Adjust the DDL file to include the ENCRYPTION string.
   cat $dp_dir/cre_$tablespace.sql | grep -e '^.*[A-Z0-9]'> $dp_dir/cre_TDE_$tablespace.sql

   echo " ENCRYPTION using 'AES256'">> $dp_dir/cre_TDE_$tablespace.sql
   echo " STORAGE (ENCRYPT)">> $dp_dir/cre_TDE_$tablespace.sql
 

done

########### ENABLE ENCRYPTION FROM HERE ON IN ################################

while [ -z "$walletpw" ] ; do
   echo -n "Enter the wallet encryption key to use: "
   read walletpw
   echo -n " Re-enter the wallet encryption key: "
   read reply
   if [ "$walletpw" != "$reply" ] ; then
      echo "WARNING: Typed keys do not match."
      echo -n "Try again? {Y|N} : "
      read reply
      if [ "$reply" != "Y" -a "$reply" != "y" ] ; then
         exit 1;
      fi
      walletpw=""
   fi

done

# Alter the DB to enable encryption.
sqlplus "/ as sysdba"<< EOF
alter system set encryption key identified by "$walletpw";
EOF

if [ $? -ne 0 ] ; then
   echo "WARNING: Enabling encryption may have failed in the DB."
   exit 1;
fi

# Change permissions on all files in the wallet dir.
chmod 600 "$wallet_dir"/*

# Offline and then drop the tablespace and datafiles.
# WARNING: You should have a backup at this point.
echo "WARNING: About to offline and drop tablespaces:"
echo "$list_tblspace"
echo "##################################################"
echo "You should check the expdp export logs in $dp_dir."
echo "##################################################"

echo -n "Continue? [Y|N]: "
read reply
if [ "$reply" != "Y" -a "$reply" != "y" ]; then
   echo "Cancelling."
   exit 1
fi

# Offline and drop each tablespace (including datafiles) then re-create and import export dump file.
echo "$list_tblspace" | while read tablespace
do
   sqlplus "/ as sysdba"<< EOF
alter tablespace $tablespace offline;
drop tablespace $tablespace including contents and datafiles;
@$dp_dir/cre_TDE_$tablespace.sql
-- @$dp_dir/cre_$tablespace.sql <<-- Recreates original non-TDE tablespace.
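-- NOTE: the / below executes the unterminated CREATE TABLESPACE statement
-- that the spooled DDL script leaves in the SQL buffer.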
/
EOF

   impdp userid="'/ as sysdba'" dumpfile=$tablespace.dmp directory=dpdump_backup logfile=${tablespace}_imp.log tablespaces="$tablespace"

done

echo "#######################################################"
echo "You should enable auto-open of the wallet if necessary."
date
echo "End of script."
############################################################################
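For the auto-open step, one possible approach (a sketch only; verify against your Oracle version's documentation) is to convert the wallet to an auto-login wallet with orapki:

# Creates cwallet.sso alongside ewallet.p12; orapki prompts for the
# existing wallet password. The wallet then opens automatically at
# instance startup.
orapki wallet create -wallet /etc/ORACLE/WALLETS/$ORACLE_SID -auto_login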

HowTo: OEL/RHEL 5.7 Create New VolGroup and LVol for new disk

If you need to add an additional mount point onto a RHEL or OEL Linux server, here's how to do it using the logical volume manager for maximum flexibility:

We assume that you've added a new physical disk and that it's called /dev/sdb.

First check size of the device to ensure you've got the correct one:

# fdisk -l /dev/sdb

Now create a new primary partition on the disk:

# fdisk /dev/sdb
n        (new partition)
p        (primary partition)
1        (partition number)
<return> for 1st block
<return> for last block
w        (write partition table and exit)
Check you can see the new partition:

# ls -la /dev/sdb*

(You should see /dev/sdb1)

Now ensure that you create a new physical volume that the volume manager can see:

# pvcreate /dev/sdb1

Physical volume "/dev/sdb1" successfully created


Create a new Volume Group containing the new physical volume:

# vgcreate VolGroup01 /dev/sdb1

Volume group "VolGroup01" successfully created


Create a new logical volume inside the volume group:


# lvcreate -L 480G -n LogVol01 VolGroup01

Logical volume "LogVol01" created


Format the new logical volume using EXT3 (you can choose which version of EXT you want):

# mkfs -t ext3 /dev/VolGroup01/LogVol01


Now you just need to mount the new filesystem.
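For example (the mount point name and fstab entry are illustrative):

# mkdir /data01
# mount -t ext3 /dev/VolGroup01/LogVol01 /data01

To make the mount permanent, add a line like this to /etc/fstab:

/dev/VolGroup01/LogVol01  /data01  ext3  defaults  1 2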

Corrupt OEL 5.7 ISO Prevents Boot into Installer

I ran into this little problem whilst trying to install OEL 5.7 into a Hyper-V environment.

"Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(xxx,xx)".



I tried all manner of "linux xxxxx" boot parameters as recommended by the installer.
None of them worked.
At first it looked like the Hyper-V drivers weren't working.

So I re-downloaded the OEL 5.7 ISO and re-attached it to the VM's CD-ROM drive.
Then it worked!
It must have been a corrupt OEL 5.7 ISO that prevented booting into the installer from Hyper-V.

Network with OEL 5.7 x86_64 install in Hyper-V

When installing Oracle Enterprise Linux 5.7 x86_64 in a Hyper-V 2012 VM, the Linux networking refuses to work with the Hyper-V "Legacy driver" if you have the UEK (unbreakable enterprise kernel) enabled and more than one vCPU.

First, you should always ensure that you add the Hyper-V "Legacy Network driver" to the VM container at the VM creation time to ensure that it will work when you come to install OEL in the VM.

Then, to get around the problem with the networking and vCPUs, disable the UEK kernel and shutdown, then you can add more than one vCPU to the VM.

HowTo: Use Oracle BBED to adjust DB Name in File Headers

HowTo: Use BBED to hack the database SID in the datafiles if you've got them all mixed up during a "CREATE CONTROLFILE" operation.

WARNING: Using BBED is not supported by Oracle unless you are asked to use it by Oracle Support themselves.


Use UNIX vi to create a text file that contains a file number, followed by the file name, for each of the DB files that need changing:

# cat <<EOF > filelist.txt
1 /db/ora/system/system1.dbf
2 /db/ora/data1/data1.dbf
3 /db/ora/index1/index1.dbf
EOF


Save the file as "filelist.txt".

Launch bbed (the block browser/editor) as the Oracle DB UNIX owner.
Change the text "NEWID" to your new DB name in the "modify" line below.

$ bbed
BBED> set listfile 'filelist.txt'
BBED> set mode edit


# Dump the current block value for datafile #1 in your list file.
# example: BBED> dump /v dba <file#>,<block> ...

BBED> dump /v dba 1,1 offset 32 count 16

Make the swap:

BBED> modify /c NEWID file 1 block 1 offset 32

The checksum is now invalid:

BBED> sum file 1 block 1

Force save the new checksum:

BBED> sum file 1 block 1 apply

Verify the block:

BBED> verify file 1 block 1

Once you’ve done all your files:

BBED> quit;

Start the database with the CREATE CONTROLFILE SET DATABASE "NEWID"...

SAP PI URLs - Wiki Link

SAP R3load table splitter - Table Analysis Performance

Be careful when using the R3load table splitter from the Software Provisioning Manager screens.
The install guide asks you to supply a table split file in the format "<TABLE>%<# SPLITS>".
However, this does not specify the table column to split by.

When splitting large tables, during the Table Splitting preparation phase (before you even start exports), R3ta can run for quite a while whilst it performs scans of the available INDEXES and then COUNTs the number of rows in the table(s) to be split.

It's trying to define the most apt column(s) to use for generating the WHR files which contain the query predicates.
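For illustration only (the table name, split count and predicate are invented), the split file you supply looks like:

EDI40%10

and each generated WHR file then contains a WHERE-clause predicate along the lines of:

tab: EDI40
WHERE ("DOCNUM" <= '0000000050000000')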

I tried adding a specific column during the initial table splitter screens, where you can specify the column to use. However, this seems to be completely ignored.

The best advice is to prepare your table split during your proof phase in the sandbox environment, then potentially manually adjust the final WHR file to account for any additional rows in the table(s) to be split.
This will save a lot of time and effort in the true production conversion run.

Also, ensure that the indexes on those tables, especially the ones that the WHR predicates refer to, are rebuilt if possible.

SAP Unicode Conversion Nametab Inconsistency

During a unicode conversion of a SAP NW731 system, I saw a problem where a number of BI FACT tables (/BIC/E*) were present in the SAP nametab and existed in the Oracle database, but did not exist in the DDIC (the SAP data dictionary, visible in SE14).

[Screenshot: SPUMG nametab entries]

I asked the BI administrator to confirm that these tables were not referenced in the BI cubes, and they weren't.  He suggested that these tables used to belong to a cube that was long since deleted.  This means that at some point there must have been a program bug that has left the nametab inconsistent with the DDIC.
There are no SAP notes about what to do in a situation like this, but there are two options:
  1. Exclude the tables from the unicode conversion in transaction SPUMG by adjusting the exceptions list, or
  2. Manually adjust the SAP nametab.
I chose option 2, since this was the cleanest option and would hopefully leave the system in a better state for future updates.

I found SAP note 29159 contained some useful information on a similar subject.  The note suggested writing some simple ABAP code to delete these tables from the SAP nametab tables DDNTT and DDNTF.

Whilst this was simple enough, I decided that I didn't need to go as far as writing ABAP.  I manually removed the entries at the database level using SQL:

SQL> delete from sapsr3.ddntt where tabname ='<TABLE>';
SQL> delete from sapsr3.ddntf where tabname ='<TABLE>';


Then I restarted the system (alternatively, you could sync the nametab buffer with "/$NAM").
This fixed the issue and allowed the unicode conversion to continue.

UPDATE: I've since found that it's possible to list the contents of the Nametab buffer and delete specific tables from the buffer using the function modules DD_SHOW_NAMETAB and DD_NAMETAB_DELETE.

Find RMAN Backup Statistics

You can query the view V$BACKUP_SYNC_IO (for synchronous tape devices) to obtain the average MB transfer speed from RMAN to the tape device (or intermediary software if using the obk interface):

SQL> select avg(EFFECTIVE_BYTES_PER_SECOND)/1024/1024 MB_per_s
       from V$BACKUP_SYNC_IO
      where DEVICE_TYPE='SBT_TAPE';

MB_PER_S
-----------
16.1589822


SAP Kernel librfcum.so Missing

When trying to start a SAP system I got an error from the sapstart.log indicating that librfcum.so was missing.
This was not in any of the exe directories or in any of the Kernel distribution files.

During an upgrade, a kernel was patched with a unicode kernel when it should have been a non-unicode kernel.
The correct kernel patch was then deployed into the central exe directory, but it looks like sapcpe did not correctly detect and replace the kernel files on the other instances.

The solution to the missing librfcum.so problem was to completely remove the kernel files in the instance exe directories, then manually run sapcpe (sapcpe pf=<instance profile>) to re-copy the files from the central exe directory.

This fixed the issue.

Kill All Oracle Sessions For Specific User - 11gR2

Here's a small script to display, then kill all sessions in an Oracle 11gR2 database for a specific username.

First let's check the sessions of the specific username:

NOTE: Change "<USER NAME HERE>" in both blocks of code, for the actual username to be killed.

SQL> SELECT sid, serial#
       FROM v$session
      WHERE username = '<USER NAME HERE>';


Now run the code to kill the user sessions:

SQL> set serveroutput on size 10000;
SQL> DECLARE
   dummy NUMBER;
   c     NUMBER := dbms_sql.open_cursor;
BEGIN
   FOR c1_row IN (SELECT sid,serial#
                  FROM v$session
                  WHERE username = '<USER NAME HERE>') LOOP
      DBMS_OUTPUT.PUT_LINE('Killing session: '||c1_row.sid||' serial#: '||c1_row.serial#);
      DBMS_SQL.PARSE(c,'alter system kill session '||''''||c1_row.sid||','||c1_row.serial#||'''',dbms_sql.NATIVE);
      dummy := DBMS_SQL.EXECUTE(c);
      DBMS_OUTPUT.PUT_LINE('Session: '||c1_row.sid||' serial#: '||c1_row.serial#||' killed.');
   END LOOP;

   dbms_sql.close_cursor(c);
EXCEPTION
   WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE(SUBSTR(SQLERRM,1,250));
END;
/


You can now run the check code again to see that all sessions for the user have been killed.

Checking R3load Export Progress

When running R3load to export an Oracle SAP database, it's difficult to see exactly which table or tables are being exported.

You can log into the Oracle database during the R3load execution and use the following SQL to follow the progress:
SQL> select sess.process, sql.sql_text
       from v$session sess,
            v$sqltext sql
      where sess.type = 'USER'
        and sess.module like 'DBSL%'
        and sql.address = sess.sql_address
        and sql.hash_value = sess.sql_hash_value
        and sql.sql_text like '%FROM%'
      order by sess.process, sql.piece;

This will show the OS process ID of the R3load process, plus the table (from the FROM clause) that is currently being exported.
For large tables, you may be able to see the progress in the V$SESSION_LONGOPS view by looking for rows where SOFAR != TOTALWORK, for example:
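SQL> select sid, opname, target, sofar, totalwork
       from v$session_longops
      where totalwork != 0
        and sofar != totalwork;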

SAP Inside Track - Munich 2013 - ABAP Future

On Saturday I attended the SAP Inside Track Munich 2013 event hosted in Munich.
Although my main experience is around BASIS and underlying technologies, the event proved to be very insightful into the future directions of some of the technology.

Some of the presentation slides are now available on the main page:
http://wiki.scn.sap.com/wiki/display/events/SAP+Inside+Track+Munich+2013

Summary of the event for me:
  • ABAP 7.40 will be the release for HANA integration with ABAP.
  • Eclipse will be the future direction for ABAP development, starting with the HANA integration.  Say bye bye to SE80 and SE38.
  • Some new features in the syntactical makeup of ABAP programs mean better performance and cleaner code, through the introduction of things like inline declarations.
  • The linkup of ABAP with HANA means that developers can push down certain new OPEN SQL operations to the HANA layer, instead of returning loads of records and iterating over an in-memory ABAP table.
  • SAP have around 43,000 students from the Munich universities working on a range of SAP products and features.
  • The Eclipse based development tools use the REST protocol for comms via the SAP ICM instead of via RFC like the SAP GUI.
  • NW7.40 includes an improved SQL Trace for HANA.  These new tools are accessible through the new SWLT transaction.
  • Free HANA related courses are available through OpenHPI.com and open.sap.com.
  • The recommended version of 7.40 is SP5, since this includes better integration with the SAP transport management system and removes the need for HANA containers as transports.
Thanks to all the speakers at the event, it was very enjoyable, especially the free beer at the end!

SAP BRTools for SQL Server

There are no BR*Tools (BRbackup, BRarchive, etc.) or equivalent when running SAP on Windows with SQL Server.
The SAP system uses built-in access to the SQL Server database to call "DBCC"-related tasks.
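For context, these are standard consistency-check style commands; a minimal example you could run yourself in a SQL Server query window (the database name is a placeholder):

DBCC CHECKDB ('SID') WITH NO_INFOMSGS;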

SAP R3load Error TOC is Not From Same Export

During an R3load import process, you are importing the data files generated from a successful R3load export.
However, you are seeing errors in the <PACKAGE>.log file along the lines of:
(DB) INFO: connected to DB
(DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
(RFF) ERROR: <PACKAGE>.TOC is not from the same export as <PACKAGE>.001

This occurs when the TOC (table of contents) file is corrupt.
The TOC file is generated during the export process and contains the name(s) of the tables in the export package and the number of records inside the relevant data files (files ending with .nnn, e.g. .001).
The corruption can happen if you terminated an export, or an export failed (maybe because of disk space), and you then restarted the export (either through SUM, SAPInst or by manually editing the .properties file).
If you failed to remove the already generated .TOC file for the failed package before restarting the export, the .TOC file will treat the subsequent export as an append operation to the existing data file.
A normal .TOC file should have something like:

vn: R7.20/V1.6
id: de1a50a500000043
cp: 4102
data_with_checksum
tab: [HEADER]
fil: <PACKAGE>.001 1024
1 1
eot: #0 rows 20130902160509
tab: <TABLE>
fil: <PACKAGE>.001 1024
2 1024000
eot: #50212404 rows 20130902184553

eof: #20130902184553

A corrupt .TOC file for the same package, would look something like:

vn: R7.20/V1.6
id: de1a50a500000043
cp: 4102
data_with_checksum
tab: [HEADER]
fil: <PACKAGE>.001 1024
1 1
eot: #0 rows 20130902160509
tab: <TABLE>
fil: <PACKAGE>.001 1024
2 1024000
eot: #50212404 rows 20130902184553
tab: <TABLE>
fil: <PACKAGE>.001 1024
1024001 2048000

eot: #50212404 rows 20130903161923

eof: #20130903161923
Notice the four additional lines generated in the file during the second export attempt.
This will cause the import to fail.
It's not possible to adjust the .TOC file manually, as the .TOC file and the data files seem to be tied together with a checksum.
The only time you find out that the export .TOC files are corrupt is when you try to import them. Maybe SAP could write a verification routine into the R3load program.

SAP Netweaver 731 Oracle Create DB Statement

By default, when you use the Software Provisioning Manager (SWPM) to create a new NW731 Oracle database, it will generate and run an Oracle "CREATE DATABASE" statement as follows:

SQL> CREATE DATABASE DB1 CONTROLFILE REUSE 
MAXLOGFILES 255
MAXLOGMEMBERS 3
MAXLOGHISTORY 1000
MAXDATAFILES 1000
MAXINSTANCES 50
NOARCHIVELOG
CHARACTER SET UTF8
NATIONAL CHARACTER SET UTF8
DATAFILE '/oracle/DB1/sapdata1/system_1/system.data1' SIZE 350M REUSE AUTOEXTEND ON NEXT 20M MAXSIZE 10000M EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE PSAPTEMP TEMPFILE '/oracle/DB1/sapdata1/temp_1/temp.data1' SIZE 50M REUSE
AUTOEXTEND ON NEXT 20M MAXSIZE 10000M
UNDO TABLESPACE PSAPUNDO DATAFILE '/oracle/DB1/sapdata1/undo_1/undo.data1' SIZE 100M REUSE AUTOEXTEND ON NEXT 20M MAXSIZE 10000M
SYSAUX DATAFILE '/oracle/DB1/sapdata1/sysaux_1/sysaux.data1' SIZE 200M REUSE AUTOEXTEND ON NEXT 20M MAXSIZE 10000M
LOGFILE GROUP 1 ('/oracle/DB1/origlogA/log_g11m1.dbf') SIZE 200M  REUSE ,
GROUP 2 ('/oracle/DB1/origlogA/log_g12m1.dbf') SIZE 200M  REUSE ;


Notice that both the character set and national character set are UTF8.
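After creation, you can confirm both settings with a standard query against NLS_DATABASE_PARAMETERS:

SQL> select parameter, value
       from nls_database_parameters
      where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');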

SAP Unicode Conversion MAXDOP Parallelism on MS SQL Server

When performing a Unicode conversion of an SAP system running on MS SQL Server database, you get pointed to the Unicode Collection Note 1319517.
This note points to a MS SQL Server specific note (Note 1054852 - Recommendations for migrations to MS SQL Server)  which covers a couple of different scenarios, such as moving to SQL Server from another database provider, or just performing an R3load export/import (like me) to the same platform.

Now from experience, I know that the R3load import is actually pretty quick (especially on Oracle with DIRECTPATH).  What takes the time is the index creation afterwards.  Lots of scanning and I/O needs to be done.  This is where you waste time, and where you could save time.
The note 1054852 mentions the use of the MAXDOP (Maximum Degree of Parallelism) SQL Server parameter that could benefit any subsequent index creation/rebuild tasks performed during an R3load import.
The recommendation in note 1054852 is to change the MAXDOP from the SAP recommended setting of 1 to 4 for the entire SQL Server instance. NOTE: the maximum MAXDOP in 2008R2 is 1024 (see SAP note 1654613).
This is the "hammer" approach and should be used with caution. There is a nice blog on the use of MAXDOP with SQL Server which shows how setting this to a value greater than 1 can actually increase fragmentation. This is completely understandable. However, the fragmentation is only an issue if the index is specifically set with ALLOW_PAGE_LOCKS "OFF" (the default in 2008R2/2012 is "ON"!).
There is another blog article that shows how this fragmentation problem is overcome by setting the ALLOW_PAGE_LOCKS option; that article states that by default it is "ON". However, according to Microsoft KB 2292737, the default is "OFF" by design.
So which is it? In fact, the MS syntax for "ALTER INDEX" specifically states that it is not possible to reorganise an index with ALLOW_PAGE_LOCKS set to "OFF".

Here's how to check the value of the ALLOW_PAGE_LOCKS setting for an index (there is no global setting):


use <SAP DB>
go

select name, type, allow_page_locks
  from sys.indexes
 where allow_page_locks != 1
 order by 1;

And the results...  well, some of the largest SAP tables in my system have their indexes set with ALLOW_PAGE_LOCKS to "OFF".  Great!

NAME           TYPE   ALLOW_PAGE_LOCKS
ARFCRDATA~0    1      0
ARFCSDATA~0    1      0
COVREF~0       1      0
COVREF~001     2      0
COVREF~002     2      0
D010INC~0      1      0
D010INC~1      2      0
D010TAB~0      1      0
D010TAB~1      2      0
EDI40~0        1      0
EDIDC~0        1      0
EDIDC~1        2      0
EDIDC~2        2      0
EDIDC~3        2      0
EDIDC~4        2      0
EDIDS~0        1      0
EDIDS~1        2      0
EDIDS~2        2      0
REPOLOAD~0     1      0
REPOSRC~0      1      0
REPOSRC~SPM    2      0
TRFCQIN~0      1      0
TRFCQIN~1      2      0
TRFCQIN~2      2      0
TRFCQIN~3      2      0
TRFCQIN~4      2      0
TRFCQIN~5      2      0
TRFCQIN~6      2      0
TRFCQOUT~0     1      0
TRFCQOUT~1     2      0
TRFCQOUT~2     2      0
TRFCQOUT~3     2      0
TRFCQOUT~4     2      0
TRFCQOUT~5     2      0
TRFCQOUT~6     2      0
VBDATA~0       1      0
VBHDR~0        1      0
VBMOD~0        1      0

I can understand the VB* tables might not want index locking and this is also hinted at in the older SAP R/3 Performance Tuning Guide for Microsoft SQL Server 7.0 web page.  However, where did the other tables come from?
I took a look on SAP Notes but I was unable to find anything definitive.  The system I was working on was recently copied and used SWDM (SAP Software Deployment Manager) to perform the post-copy steps, so it's possible that the indexes were automatically adjusted (ALTER'd) to try and ensure a consistent approach.
What to do next? Well, some of those tables can be ignored since they are supposed to have ALLOW_PAGE_LOCKS set to "OFF". Others, like REPOLOAD, are re-populated only during a system copy with the R3* tools (e.g. a Unicode conversion), so you could try adjusting the setting during the R3load import.
For the rest, in theory you would minimise the data in those tables (like TRFC*) before you perform a Unicode conversion, so the indexes wouldn't be massive anyway.

At the end of the day, all we are talking about here is a little fragmentation.  So let's move on.

Let's go back to the MAXDOP setting recommendation mentioned in SAP note 1054852.
My system happens to be a BW-type system (it's actually SEM, but this is just BW in disguise), so I found SAP Note 1654613 - SQL Server Parallelism for SAP BW, which suggests that for SAP BW systems you can now manage the MAXDOP settings in table RSADMIN through report SAP_RSADMIN_MAINTAIN, by setting the parameters MSS_MAXDOP_QUERY, MSS_MAXDOP_<cubename> and MSS_MAXDOP_INDEXING.
The note 1654613 also goes on to say that by applying the note corrections (or the related support package stacks), the default MAXDOP for BW queries is set to 2, and for (process chain) index creations it is set to 8.
Aha!
So the Unicode collection note's setting of 4 could actually contradict the setting of 8 in BW systems!
The note 1654613 also states that you may increase the MAXDOP for queries to more than 4, but this depends on the number of worker threads configured in your SQL Server instance.
The worker threads setting in my SQL Server 2008R2 instance was set to "0", which means the number is dynamically calculated.

You can use the sp_configure procedure to check your setting:

sp_configure @configname='max worker threads';

You can query the current number of threads (thanks to http://blogs.msdn.com/b/boduff/archive/2008/05/17/configuring-max-worker-threads-in-sql-2005.aspx):

select count(*) from sys.dm_os_threads;

My idle database was using 50 threads on a VMware VM with 4 processors. So I guess I could increase the MAXDOP for queries permanently in my BW (SEM) system.

You should also note that the setting MSS_MAXDOP_INDEXING = 0 means that all available threads will be used during (process chain) index creation.
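As a sketch of the mechanics (field names as described in the note; verify in your own system), an RSADMIN entry would be maintained like this:

SE38 -> report SAP_RSADMIN_MAINTAIN
  OBJECT = MSS_MAXDOP_INDEXING
  VALUE  = 8
  choose the "insert" (or "update") option and execute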

Summary:
We are advised to set MAXDOP to 4 when moving a SQL Server database or performing a system copy or Unicode conversion.
However, more detailed analysis has shown that for BW systems specifically, we can potentially adjust the MAXDOP setting even higher than 4 during our R3load import, to ensure that we make the best use of the available CPU.
This is underlined by the fact that defaults within the BW system are sometimes higher than the recommended setting of 4.
Therefore, I will be trying out a value of 12 (the default of 8 for index creation plus 50%) in my Unicode conversion:

sp_configure 'max degree of parallelism', 12;
reconfigure with override

Overview of Tasks for SAP NW731 System Copy - ABAP

Below is an overview of the tasks associated with an SAP NW731 system copy for ABAP on Windows with MS SQL Server (see the Java tasks here).  Essentially this is what I document when I go through the process:

  • Current Base Details - Current Kernel version etc.
  • Current Profile Files
  • Current ABAP License
  • Current SSL PSEs (export PSEs)
  • Current Transport Management System Config
  • Current Database Files
  • Source Database Files
  • Source Disk Usage
  • Target Disk Capacity
  • Current SQL Server Version
  • Current Windows Hotfixes
  • Download SWPM
  • Download Kernel 7.20 SWPM Install Media (UC + NUC)
  • Upload Required Media to Target Server
  • Identify Source Database Backup Media
  • Note Last Backups of Target Database
  • Shutdown Target SAP System
  • Snapshot of Server VM (or Full Server Backup)
  • Detach Old Target Database
  • Delete Old Database Files
  • Create Additional Data File Locations (that don't exist in target)
  • Restore Database as Target
  • Rename Logical Files (MS SQL Server) - see the sketch after this list
  • Deploy SWPM
  • Launch SWPM
Follow on in standard copy doc:
  • Apply Tasks in SAP Note 1817705
  • Truncate specific Tables
  • Check Installation
  • Stop Background Jobs
  • Remove RFC Connections
  • Lock and Adapt Printers & Spool Servers
  • Execute RSPO0041 to Remove Spool Requests
  • Execute RSBTCDEL2 to Remove Job Logs
  • Check Background Job Servers
  • Import Profiles (RZ10)
  • Check Operation Mode (RZ03)
  • Check Logon Groups (SMLG)
  • Check RFC Server Groups (RZ12)
  • Check tRFCs (SM58)
  • ReCreate PSE (STRUST)
  • Change Logical System Name (BDLS)
  • Check Custom External Commands (SM69)
  • Re-enable Background Jobs
  • Schedule Database Check (DB13)
  • Configure STMS
  • Database Stats Collection
  • Re-Install License Key
  • Migrate SecStore (SECSTORE)
  • Access SM21
  • Configure & Schedule Database Backup
  • Adjust Default Database Connection for <sid>adm User (MS SQL)
  • Release VM Snapshot (or send tapes offsite)
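For the "Rename Logical Files" step above, a minimal T-SQL sketch (the database and logical file names are placeholders):

ALTER DATABASE [TGT] MODIFY FILE (NAME = 'SRCDATA1', NEWNAME = 'TGTDATA1');
ALTER DATABASE [TGT] MODIFY FILE (NAME = 'SRCLOG1', NEWNAME = 'TGTLOG1');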

SAP HANA "ram or cpu check failed"

When running the SAP HANA 1.0 setup.sh, you may see the error "RAM or CPU check failed".
If you're trying to run HANA in less than 4GB of memory, then you will not be able to; the 4GB minimum is a hardcoded check value.