InfoCubes in SAP BI 7.0



Summary of BI 7.0 performance improvements
Jens Gleichmann
Company: Brose Fahrzeugteile GmbH & Co. Kommanditgesellschaft
Posted on Oct. 12, 2010 09:29 AM in BI Accelerator, Business Intelligence (BI)

URL: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/16742

  • Load InfoCube in SCM 5.0 (BI 7.0): applies to SCM 5.0. The article illustrates the steps to load an InfoCube in SCM 5.0; for more information, visit the Business Intelligence homepage.
  • The new SAP BI 7.0 authorization concept (analysis authorizations) changes a lot in how BI information is accessed, analyzed and displayed. The approach allows data access to be restricted at key figure, characteristic, characteristic value, hierarchy node and InfoCube level and enables more flexible data access management. Analysis authorizations are active by default in SAP BI 7.0 systems.
  • Remodeling is a new feature available as of NW04s BI 7.0 which allows you to change the structure of an InfoCube that is already loaded. Note: this feature does not yet support remodeling of DSOs and InfoObjects.
  • InfoCube creation step 4 using SAP BI 7.0: in this section we will see how to define a Data Transfer Process and load data into an InfoCube using SAP BI 7.0. The Data Transfer Process makes the transfer processes in the data warehousing layer more transparent.

    I just want to give you an overview, not deep details or exact instructions; just some points from which you can start your analysis and tune your system. You should try out all the tables, views and transactions yourself.

    1. Performance issues in summary
    2. Query performance analysis
    3. Cache monitor
    4. ST03n
    5. ST13
    6. ST14
    7. Statistics
    8. ST02
    9. BW Administration Cockpit
    10. Optimizing performance of InfoProviders
    11. ILM (Information Lifecycle Management)
    12. BWA
    13. Query analyzing example
    14. General Hints

    1. The common reasons for performance issues in summary


    Causes for high DB-runtimes of queries

    • no aggregates/BWA
    • DB-statistics are missing
    • Indexes not updated
    • read mode of the query is not optimal
    • small sized PSAPTEMP
    • DB-parameters not optimal (memory and buffer)
    • HW: buffer, I/O, CPU, memory are not sufficient
    • OLAP cache not used?


    Causes for high OLAP runtimes

    • high amount of transmitted cells, because read mode is not optimal
    • user exits in query execution
    • usage of big hierarchies


    Causes for high frontend runtimes

    • high amount of transmitted cells and formatting to the frontend
    • high latencies in the WAN/LAN concerned
    • insufficient client hardware


    2. Query performance analysis

    I think this is a really important point (including the OLAP cache) and should be explained a little bit deeper.

    TA RSRT
    To get exact runtimes for a before/after analysis, use this transaction with or without cache/BWA etc.
    Choose the query, Execute + Debug -> 'Do not use Cache' -> 'Display Statistic Data'.
    Button Properties
    activate the cache mode (this can also be activated for the whole InfoProvider)
    You should use grouping if you use a MultiProvider where the data of only one cube is changed independently of the others; this way you can avoid invalidating the cache.
    The following grouping procedures are available:
    1) no grouping
    2) grouping depending on InfoProvider types
    3) grouping depending on InfoProvider types, InfoCubes separately
    4) every provider separately
    1) All results of the InfoProviders are stored together. If the data of one of the InfoProviders is changed, the whole cache must be recreated. This setting should be used when all InfoProviders used by the MultiProvider have the same load cycle.
    2) All results are stored grouped by the type of the InfoProvider. This option should be used when basic InfoCubes are combined with a real-time InfoCube.
    3) The same as 2), but additionally every InfoCube's results are stored separately. It should be used when you change/fill the cubes independently of each other.
    4) The results of every provider are stored separately (independent of the type). This option should be used when not only InfoCubes but also other provider types are updated separately.

    2.1 RSRT Query Properties

    You can turn off parallel processing for a single query. In the case of queries with very fast response times, the effort required for parallel processing can be greater than the potential time gain. In this case, it may also make sense to turn off parallel processing.

    Just play a little bit with RSRT and the different options to get the optimal settings for your queries!

    There are also different read modes for a query. In most cases the best choice is 'H' (query data is read when you navigate or expand hierarchies; see the SAP Help for more information).

    2.1 RSRT Query properties with grouping

    - Technical Info

    - Performance Info

    -> Usage of aggregates, cache (+delta), compression, status of requests


    2.2 RSRT Performance Info

    3. Cache monitor


    Jump from RSRT into the cache monitor (TA RSRCACHE).
    Cache parameters
    General information about the cache parameters; check whether they (runtime object and shared memory) are all well sized. Also have a look at the SAP Help for this.

    There are two types of OLAP cache: the cross-transactional cache and the local cache (details on help.sap.com).

    One thing you must know: the local cache is used in the following cases:

    • when the cross-transactional cache has been deactivated (see the parameter Cache Inactive)
    • when the cache has been deactivated for the InfoProvider (for all future queries) or for the query
    • when it is determined at runtime that caching cannot take place

    Main memory -> objects in list or hierarchy display -> technical info (usage of the selected cache)


    Also check the buffer consumption under the buffer monitor (Exp/Imp Mem) and the buffer overview (Exp./Imp. SHM).

    Check for which queries it makes sense to save the results in the OLAP cache; recommendations from SAP:

    How often the query is requested

    We recommend that you save queries that are requested very frequently in the cache. Main memory cache is very fast, but limited in size. By displacing cached data, you can cancel out main memory limitations, but this also affects system performance. There are practically no limitations on the memory space available in the database or in the file system for the persistent cache. Accessing compressed data directly in the persistent cache also improves performance.

    The complexity of the query

    Caching improves performance for queries whose evaluation is more complex. We recommend that you keep complex data processed by the OLAP processor in the cache. (Therefore the cache mode Main Memory Without Swapping is less suitable for such queries.)

    How often data is loaded

    The cache does not provide an advantage if query-relevant data is frequently changed and therefore has to be loaded frequently, since the cache has to be regenerated every time. If cached data is kept in main memory, data from queries that are called frequently can be displaced, so that calling the data takes more time.
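
    To turn these recommendations into a concrete candidate list, you can aggregate the query statistics yourself instead of browsing them manually. The following ABAP sketch counts executions and sums the runtime per query from the statistics view RSDDSTAT_OLAP (see point 7 below). Note that the field names OBJNAME (query name) and RUNTIME are assumptions on my side; check the view in SE11 and adjust them to your release before using it.

    REPORT z_cache_candidates.

    * Sketch: rank queries by number of executions and total runtime,
    * read from the OLAP statistics view RSDDSTAT_OLAP.
    * ASSUMPTION: field names OBJNAME (query) and RUNTIME - verify them
    * in SE11; restrict the selection by a date field if the view is big.

    TYPES: BEGIN OF ty_stat,
             objname TYPE c LENGTH 30,
             runtime TYPE f,
           END OF ty_stat.

    DATA: lt_stat TYPE STANDARD TABLE OF ty_stat,
          ls_stat TYPE ty_stat,
          lv_cnt  TYPE i,
          lv_sum  TYPE f.

    SELECT objname runtime
      FROM rsddstat_olap
      INTO TABLE lt_stat.

    SORT lt_stat BY objname.

    LOOP AT lt_stat INTO ls_stat.
      lv_cnt = lv_cnt + 1.
      lv_sum = lv_sum + ls_stat-runtime.
      AT END OF objname.
        " one output line per query: number of executions and total runtime
        WRITE: / ls_stat-objname, lv_cnt, lv_sum.
        CLEAR: lv_cnt, lv_sum.
      ENDAT.
    ENDLOOP.

    Queries that show up with many executions and a noticeable runtime are the first candidates for the OLAP cache; queries whose data is reloaded (and therefore invalidated) every day are not.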

    For detailed information on which of the following modes should be used, check the SAP Help:

    • Cache is Inactive (0)
    • Main Memory Cache Without Swapping (1)
    • Main Memory Cache with Swapping (2)
    • Persistent Cache per Application Server (3)
    • Cross-Application Server Persistent Cache (4)
    • BLOB/Cluster Enhanced (5)

    You can configure these settings in RSRT (see screenshot 3.1).

    3.1 RSRT performance info
    3.2 RSRCACHE - Queries in Main Memory (BLOB/Cluster Enhanced is deactivated)

    Use delta caching if possible. With this option you can avoid invalidation of the cached data when the underlying data changes (data loads / process chains); only the new data is then read from the DB.

    Hint: prefill the OLAP cache via broadcasting (RSA1 -> Administration -> Broadcasting; documentation).

    4. System load Monitor ST03n

    ST03N (expert mode) -> click on 'BI system load' to get data like:

    • query runtimes (separated into BEx and BEx Web (ABAP / Java))
    • Process chain runtimes
    • DTP runtimes
    • Aggregate usage

    5. ST13 Analysis & Service Toolset (depends on your ST-A/PI level)


    There you can find some well-known reports like RSECNOTE, but also new BI tools:

    BPSTOOLS: BW-BPS Performance Toolset
    BIIPTOOLS: BI-IP Performance Toolset
    BW_QUERY_ACCESSES: BW: aggregate/InfoCube accesses of queries
    BW_QUERY_USAGE: BW: query usage statistics
    BW-TOOLS: BW Tools (PC analysis, request analysis, aggregate toolset, IP analysis, DTP request analysis and IO usage)
    TABLE_ANALYSIS: Table Analysis Tools

    These tools all use the RSDD* tables/views and display them in a colorful and sorted way.
    My favourites are BW-TOOLS, BW_QUERY_ACCESSES and BIIPTOOLS.


    6. ST14


    ST14 -> Business Warehouse -> plan analysis -> client 010, choose a date, 'Basis Data (Top Objects)' and 'Basis: Determine Top DB Objects', and schedule it.
    You will get a great analysis of your whole BI system, including:

    • top 30 PSA, E-fact, F-fact, dimension and master data tables, change logs, cubes, ODS/DSO, aggregates and some special info for the BWA
    • for those who use Oracle, also tables with more than 100 partitions
    • the upload performance for the last weeks
    • Compression rate
    • result of SAP_INFOCUBE_DESIGNS (D- and E-tables in relation to the F-tables)
    • ...
    6.1 ST14 Overview


    If you have trouble with the growth of your system, this is a great entry point to start your analysis and find out where the space has gone ;)
    So you now know which requests should be compressed and how to get rid of partitions (maybe repartitioning; RSA1 -> Administration -> Repartitioning), but keep in mind that repartitioning creates shadow tables in the namespace /BIC/4E<InfoCubename> and /BIC/4F<InfoCubename>.

    These tables exist until the next repartitioning, so you can delete them after the repartitioning is completed. Locate and delete empty F-partitions via report SAP_DROP_EMPTY_FPARTITION (note 430486).
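
    A quick way to check whether such shadow tables are still lying around is the ABAP Dictionary itself. This is only a hedged sketch: it lists all active transparent tables registered in DD02L whose names match the /BIC/4E* and /BIC/4F* pattern mentioned above; cross-check the list with the repartitioning monitor before deleting anything.

    REPORT z_find_repart_shadow_tables.

    * Sketch: list repartitioning shadow tables (/BIC/4E* and /BIC/4F*)
    * that are still registered in the ABAP Dictionary (table DD02L).

    DATA: lt_tabname TYPE STANDARD TABLE OF tabname,
          lv_tabname TYPE tabname.

    SELECT tabname FROM dd02l
      INTO TABLE lt_tabname
      WHERE ( tabname LIKE '/BIC/4E%' OR tabname LIKE '/BIC/4F%' )
        AND tabclass = 'TRANSP'
        AND as4local = 'A'.            "active versions only

    IF lt_tabname IS INITIAL.
      WRITE: / 'No repartitioning shadow tables found.'.
    ELSE.
      LOOP AT lt_tabname INTO lv_tabname.
        WRITE: / lv_tabname.
      ENDLOOP.
    ENDIF.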

    7. Statistics

    TA RSDDSTAT: statistics recording (tracing) settings for InfoProviders/queries etc.

    Views: RSDDSTAT_OLAP (OLAP + frontend statistics) and RSDDSTAT_DM (MultiProvider, aggregate split, DB access time, RFC time).
    Use TA SE11 to view their content.
    Use the column AGGREGATE to identify whether a query is using aggregates or the BWA: aggregates appear as 1xxxxxx and BWA indexes as <InfoCube>$X.
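
    If you do not want to browse the view manually in SE16, the following hedged sketch reads the AGGREGATE column of RSDDSTAT_DM and classifies each entry according to the naming patterns above (numeric 1xxxxxx = aggregate, <InfoCube>$X = BWA index). The column INFOPROV is an assumption of mine, so verify the view's fields in SE11 first.

    REPORT z_check_aggregate_usage.

    * Sketch: classify data manager accesses from RSDDSTAT_DM by the
    * AGGREGATE column (aggregates = 1xxxxxx, BWA indexes = <InfoCube>$X).
    * ASSUMPTION: the column INFOPROV exists next to AGGREGATE.

    TYPES: BEGIN OF ty_dm,
             infoprov  TYPE c LENGTH 30,
             aggregate TYPE c LENGTH 30,
           END OF ty_dm.

    DATA: lt_dm TYPE STANDARD TABLE OF ty_dm,
          ls_dm TYPE ty_dm.

    SELECT infoprov aggregate
      FROM rsddstat_dm
      INTO TABLE lt_dm
      UP TO 500 ROWS
      WHERE aggregate <> space.

    LOOP AT lt_dm INTO ls_dm.
      IF ls_dm-aggregate CS '$X'.
        WRITE: / ls_dm-infoprov, ls_dm-aggregate, 'BWA index'.
      ELSEIF ls_dm-aggregate(1) = '1'.
        WRITE: / ls_dm-infoprov, ls_dm-aggregate, 'Aggregate'.
      ELSE.
        WRITE: / ls_dm-infoprov, ls_dm-aggregate, 'Other'.
      ENDIF.
    ENDLOOP.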

    How to delete statistics

    TA RSDDSTAT (manual deletion)
    setting up the trace level of queries and setting up the deletion of statistics
    automatic deletion
    Table RSADMIN, parameter TCT_KEEP_OLAP_DM_DATA_N_DAYS (default: 14 days)
    the date relates to the field STARTTIME in table RSDDSTATINFO
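
    A small hedged sketch for checking whether the retention parameter is set at all (if no entry exists, the default of 14 days applies). I assume the usual RSADMIN layout with the columns OBJECT and VALUE; entries in this table are normally created or changed with report SAP_RSADMIN_MAINTAIN rather than by writing to the table directly.

    REPORT z_check_stat_retention.

    * Sketch: read the statistics retention parameter from table RSADMIN.
    * ASSUMPTION: columns OBJECT and VALUE; maintain entries with report
    * SAP_RSADMIN_MAINTAIN, never by direct table update.

    DATA: lv_value TYPE c LENGTH 60.

    SELECT SINGLE value FROM rsadmin
      INTO lv_value
      WHERE object = 'TCT_KEEP_OLAP_DM_DATA_N_DAYS'.

    IF sy-subrc = 0.
      WRITE: / 'TCT_KEEP_OLAP_DM_DATA_N_DAYS =', lv_value.
    ELSE.
      WRITE: / 'Parameter not set - default of 14 days applies.'.
    ENDIF.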

    8. ST02


    Check every instance for swaps -> double-click on the red-marked lines and then click on 'Current parameters', and you will see which parameter you should increase.
    Please read the SAP Help for each parameter; there may be dependencies!
    (Memory and Buffer).

    There are two possible reasons for swapping:

    • There is no space left in the buffer data area -> buffer is too small
    • There are no directory entries left -> Although there is enough space left in the buffer, no further objects can be loaded because the number of directory entries is limited -> increase the needed parameter for the directory entries!

    Note: before you change the settings, also keep an eye on the pools via the tool sappfpar! (on OS level as <sid>adm: sappfpar check pf=<path-to-profile>)

    9. Using the BW Administration Cockpit

    Setup via SPRO (BI -> Settings for BI Content -> Business Intelligence -> BI Administration Cockpit)


    Prerequisites:

    • min. NW 7.0 Portal Stack 5 + BI Administration package 1.0
    • implement technical content (TA: RSTCC_INST_BIAC)
    • Report RSPOR_SETUP


    Pros:

    • average and max. runtimes of queries
    • PC runtimes
    • trends for queries and BW applications
    • suggestions for obsolete PSA data
    9.1 compressed and uncompressed requests
    9.2 process chain status

    10. Optimizing performance of InfoProviders in summary

    • Compress InfoCubes
    • Partitioning (and repartitioning) of InfoCubes
    - DB level
    - range partitioning (only for database systems which can handle partitions, e.g. Oracle, DB2, MSSQL)
    - clustering
    - application level

    11. ILM (Information Lifecycle Management)

    • near-line storage (vendors for near-line storage are e.g. SAND Technology, EMC², FileTek, PBS ...)
    • archiving (archiving via file server or tape drives)
    • deletion of data


    Currently we don´t use any kind of ILM, but research is going on ;)

    12. BWA Business Warehouse Accelerator (just a small summary):

    • RSDDTREX_MEMORY_ESTIMATE (see screenshot) -> estimates the memory consumption of the BWA for a specific InfoCube. That's only the memory consumption and not the needed storage on the hard disk!
    • RSDDV: display all your indexes which are indexed by the BWA
    • RSRV: analysis of BW objects
    • RSDDBIAMON2: BWA monitor
    • TREX_ADMIN_TOOL (standalone tool)
    • tables RSDDSTATTREX and RSDDSTATTREXSERV for analyzing the runtimes of the BWA
    • table RSDDTREXDIR (administration of the TREX aggregates), check this blog for more information


    1) Report RSDDTREX_INDEX_LOAD_UNLOAD to load or delete BWA indexes from the memory of the BWA servers. This can also be done via RSRV -> Tests in Transaction RSRV -> BI Accelerator -> BI Accelerator Performance Checks -> Load BIA index data into main memory / Delete BIA index data from main memory.

    2) Optimize the rollup process with the BWA delta index via RSRV (Tests in Transaction RSRV -> All Elementary Tests -> BI Accelerator -> BI Accelerator Performance Checks -> Propose Delta-Index for Indexes).
    Note that the delta index grows with every load. The delta index should not be bigger than 10% of the main index. If it is, merge both indexes via report RSDDTREX_DELTAINDEX_MERGE.

    3) Use the BWA/BIA Index Maintenance Wizard for DFI support or the option 'Always keep all BIA index data in main store'. This way the indexes won't be read from disk; they always stay in memory! You can also activate and monitor DFI support via the trexadmin standalone tool. Keep an eye on the memory consumption of the BWA when using this option!

    12.1 result of report RSDDTREX_MEMORY_ESTIMATE
    12.2 option to keep the index in memory via the BWA/BIA Index Maintenance Wizard
    12.3 BWA suggestion for delta indexes (RSRV, see 12.2)

    13. Query analyzing example

    Find out which queries have a long runtime via ST03N:


    13.1 ST03N - very high DB usage for this query

    Checklist

    • how often is the data in this InfoProvider changed?
    • RSRT -> Performance Info -> any aggregates, cache (+delta) mode, compression?
    • which InfoProviders were hit by the query? RSRT -> Technical Information (in our case GRBCS_V11, a virtual cube, and GRBCS_R11, a reporting cube)
    • are the DB statistics for these tables/indexes up to date?
    • is it possible to index the cube via the BWA? (GRBCS_V11 can't be indexed because it is a virtual cube; GRBCS_R11 is already indexed; GRBCS_V11 includes GRBCS_M11, a real-time InfoCube which also can't be indexed, and GRBCS_R11)
    • check where the largest part of the runtime is spent (execute the query in RSRT with the options 'Display Statistic Data' and 'Do not use Cache')
    • check in table RSDDIME whether line item dimensions or high cardinality are used (if you are not sure when you should use these features, have a look at the useful links below)

    In this case I would activate the OLAP cache (which mode depends on how often the underlying data is changed and whether the providers are filled at the same time -> grouping for MultiProviders, see point 2) and talk to my colleagues who are responsible for modeling about whether we can change something in the compression time frames. For more details you can also check table RSDDSTAT_DM.

    The high runtime is also caused by a bug in the DB statistics (resulting in a bad execution plan), which will be fixed in a merge fix (9657085 for PSU 1 and 10007936 for PSU 2) for Oracle 11g (bug 9495669, see note 1477787).

    13.2 You can see a high usage of the data manager (part of the analytic engine) = read access to the InfoProviders; in this case the read time of the DB.

    14. General Hints

    1. Use high cardinality only where it makes sense! It could result in bad query performance. Use table RSDDIME to get an overview of all properties of your dimensions.
    2. Check in table RSRREPDIR (field CACHEMODE) whether the cache and read mode 'H' are activated for all queries (also take care of the delta cache). If you have special cases for some queries, don't change your config. To change the read mode for all queries, call transaction RSRT, type 'RALL' as OK code and press 'Enter'. In the dialog box, choose the new read mode and press 'Enter'. To change the read mode for a specific query, enter the name of the query and select 'Read Mode'. (A small sketch for getting an overview follows after this list.)
    3. Tablespace PSAPTEMP should have a minimum size of two times your biggest F-fact table (e.g. we had performance issues while executing some queries which really took a lot of temp space because of aggregating and sorting, so now our temp space is four times bigger than our biggest F-table).
    4. Table RSTODSPART shows the number of records per request.
    5. BEx Information Broadcaster -> fill the OLAP cache via BEx Query Designer, BEx Analyzer, BEx Web Analyzer, WAD, Portal and BEx Report Designer (scheduling on a daily, weekly or monthly basis).
    6. All tables of an InfoCube can be listed with TA LISTSCHEMA.
    7. Report SAP_INFOCUBE_DESIGNS (prints a list of the cubes in the system and their layout).
    8. Delete PSA data in your process chains.
    9. Delete change logs in your process chains.
    10. Check whether your aggregates are sensible or not (TA RSMON -> Aggregates).
    11. Check SAP note 1139396 and run reports SAP_DROP_TMPTABLES and SAP_UPDATE_DBDIFF to clean up obsolete temporary entries.
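
    For hint 2, here is a hedged sketch that gives a quick overview of queries whose read mode is not 'H' or whose cache is switched off, based on the report directory RSRREPDIR. CACHEMODE is the field named above; READMODE and COMPID (the query's technical name) are assumptions of mine, so check the table in SE11 before running it.

    REPORT z_query_mode_overview.

    * Sketch for general hint 2: list queries from RSRREPDIR whose read
    * mode is not 'H' or whose cache mode is 0 (cache inactive).
    * CACHEMODE is named in the text; READMODE and COMPID are assumptions.

    TYPES: BEGIN OF ty_rep,
             compid    TYPE c LENGTH 30,
             readmode  TYPE c LENGTH 1,
             cachemode TYPE c LENGTH 1,
           END OF ty_rep.

    DATA: lt_rep TYPE STANDARD TABLE OF ty_rep,
          ls_rep TYPE ty_rep.

    SELECT compid readmode cachemode
      FROM rsrrepdir
      INTO TABLE lt_rep
      WHERE readmode <> 'H'
         OR cachemode = '0'.

    LOOP AT lt_rep INTO ls_rep.
      WRITE: / ls_rep-compid,
               'read mode:', ls_rep-readmode,
               'cache mode:', ls_rep-cachemode.
    ENDLOOP.

    Queries listed here are worth a second look in RSRT; special cases can of course keep their individual settings.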

    I hope I could give you some useful hints for your analyses. I appreciate any kind of feedback, improvements and your own experiences. Be careful with compression and partitioning; only use them if you know what you are doing and what is happening to your data!

    Maybe I could even show an old stager some new tables/transactions or some useful hints ;)

    Some useful links and documents:

    • BWA and BWA trexadmin standalone tool blog


    Jens Gleichmann SAP Basis administrator

    Please tell me about your experience with performance tuning. What is your starting point? Do you do any proactive tuning?


    Before the technical upgrade

    1. Make sure that all transports in the DEV system are released and imported into all downstream systems (QA and PRD).

    2. Check for inconsistent InfoObjects and repair them as far as possible.

    3. Clean up inconsistent PSA directory entries.

    4. Check the consistency of PSA partitions.

    5. Check the compounding consistency in MultiProviders.

    Right before the technical Upgrade procedure:

    1. Apply the latest SPAM patch.
    2. Download the most recent SP (Support Package) stack and the most recent BI support package. It is recommended to upgrade to the latest version of all relevant support packages during the upgrade.
    3. Check for the newest versions of the SAP notes for the upgrade.
    4. Ensure that the correct Java runtime environment version is installed on the server.
    5. Ensure DB statistics are up to date prior to the upgrade.
    6. Check for inactive update rules and transfer rules. All update and transfer rules should be active.
    7. Check for inactive InfoCubes and aggregates. All InfoCubes should be activated.
    8. Check for inactive InfoObjects. All InfoObjects should be activated.
    9. Check for inactive ODS objects. All ODS objects should be activated.
    10. Make sure all ODS data requests have been activated.
    11. Data loads and other operational tasks (e.g. change runs) should not be executed while SAPup runs, so reschedule InfoPackages and process chains. SAPup automatically locks background jobs.
    12. Special consideration is needed for modifications to the characteristics 0CURRENCY, 0UNIT, 0DATE, 0DATEFROM, 0DATETO, 0SOURCESYSTEM and 0TIME.
    13. For Unicode systems special reports must be run: execute reports RUTTTYPACT and UMG_POOL_TABLE.
    14. Complete any data mart data extractions and suspend any data mart extractions.
    15. For 3.0 systems only: run SAP_FACTVIEWS_RECREATE from transaction SE38 before running SAPup.
    16. Before executing PREPARE, back up your system.

    Notes for Upgrade

    1. Review SAP notes 964418, 965386 and 934848, and plan to incorporate the installation of the new technical content into the tasks performed following the technical upgrade procedure.

    2. Review note 849857 to prevent potential data loss in the PSA/change log. Review note 856097 if issues are encountered with partitioning.

    3. Review note 339889 to check PSA partition consistency.

    4. Review SAP note 920416 that discusses a potential issue with compounding in MultiProviders.


    5. Review note 1013369 for a new intermediate SAP NetWeaver 7.0 BI ABAP Support Package strategy.

    6. Review note 449891 and also see notes 883843 and 974639 to execute the routine for deleting temporary BI database objects.

    7. Review note 449160 and execute program RSUPGRCHECK to locate any inactive update and transfer rules.

    8. Review note 449160 and execute program RSUPGRCHECK to locate any inactive InfoCubes.

    9. Review note 449160 and execute program RSUPGRCHECK to locate any inactive InfoObjects.

    10. Review notes 449160 and 861890 and execute program RSUPGRCHECK to locate any inactive ODS objects.

    11. Refer to note 996602: if modifications have been made to the characteristics 0CURRENCY, 0UNIT, 0DATE, 0DATEFROM, 0DATETO, 0SOURCESYSTEM or 0TIME, create or locate a change request containing them sourced from the BI development system. This change request will be re-imported not only into the BI dev system but also into any other systems following SAPup.

    12. Review notes 544623 and 813445 to run special reports for any UNICODE SAP system.

    13. See notes 506694 and 658992 for more information: the SAP Service API (S-API), which is used for internal and BI data mart extraction, is upgraded during the upgrade. Therefore, the delta queues must be emptied prior to the upgrade to avoid any possibility of data loss.

    14. For release NetWeaver 7.0 there is a completely new workload statistics collector, which is incompatible with earlier workload statistics data. In order to preserve the data for use after the upgrade, follow the steps in SAP notes 1005238 and 1006116.

    15. For BW 3.0B systems: execute report SAP_FACTVIEWS_RECREATE from SE38 before running SAPup to prevent problems with the /BIC/V<InfoCube>F fact views. For more information, see SAP note 563201.