SAP Business Objects (BO) is a reporting and analytics tool used to visualize data in forms such as charts, graphs, and tabular reports. It makes reporting and analysis simple for business users, who can not only generate reports but also perform tasks such as predictive analytics without the assistance of a data analyst.
Users, however, often complain about the tool's performance. This is because BO doesn't store data of its own; the underlying RDBMS is queried repeatedly, so the performance of a BO report depends directly on the SQL query it generates and how that query executes.
Users can take a number of steps to tune the performance of SAP BO reports, which are the topic of this blog post. Let's take a look.
Performance Issues in BO Reports
A common issue associated with SAP BO is poor report performance, which manifests in the following difficulties:
- Extremely slow operations, which result in BO reports timing out before the query is executed and the data is refreshed.
- Lists of values (LOVs) often take a long time to load.
- Reports are generated very slowly and often load only a partial result set.
Analyzing the Performance Issues in BO Reports
BO reports encounter performance-related issues at the visualization/reporting, semantic, database, or server layer. We can perform a root-cause analysis of the performance issues through the following steps:
- Execute the report's generated query directly in the database.
- If the query takes noticeably less time to execute than the report, tuning is required at the report, universe, or server level.
- If the query takes about as long as, or longer than, the report, tuning is required at the database level.
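The comparison above can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3 module as a stand-in for the actual RDBMS; in practice you would copy the SQL that BO generates (visible in the report's query panel) into your own database client. The table and query here are hypothetical.

```python
import sqlite3
import time

# Stand-in database with some dummy data; in practice, run the SQL
# that BO generates against your actual RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", i * 1.5) for i in range(10_000)],
)

report_sql = "SELECT region, SUM(amount) FROM sales GROUP BY region"

start = time.perf_counter()
rows = conn.execute(report_sql).fetchall()
query_seconds = time.perf_counter() - start

# Compare query_seconds with the refresh time shown in BO:
# much faster here  -> tune at the report/universe/server level
# similar or slower -> tune at the database level
print(f"query ran in {query_seconds:.3f}s, returned {len(rows)} rows")
```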
Performance Tuning in BO Reports
We can improve the performance of BO reports at four different levels: the report, the universe, the server, and the database. Let's discuss the techniques in brief.
- Remove unused variables or minimize the number of report variables. Unused variables increase the calculation time.
- Avoid creating LOVs on objects having a large number of distinct values, especially date objects. Loading a long list of distinct values adds work to every report refresh, and picking from values of objects like dates increases loading time.
- Strategically use prompts in the report so that it does not refresh against the whole data set; instead, it retrieves only the data matching the prompts supplied by the user. This ensures only relevant data is returned and lowers the load on the processing server.
- Try to execute all complex calculations at ETL level or DB level.
- Modify the array fetch size in the universe parameters. If the network allows sending large packets, set the array fetch size to a larger value; this can drastically improve report performance. A larger array fetch size reduces the number of times BO hits the DB; however, ensure there's enough spool space allocated on the DB server.
- Use aggregate tables and functions. Aggregate tables save the time spent on intermediate aggregations that would otherwise happen at report runtime.
- Leverage index awareness: telling the universe about the tables' primary (and foreign) keys lets generated queries filter on indexed key columns and speeds up searches through the tables.
- Use shortcut joins so queries can bypass intermediate tables and retrieve a large amount of data through a shorter join path.
- Do not use derived tables unless there is no other option. Derived tables are created at the universe level and use up BO server resources, and the calculations performed on them directly impact BO performance. Create contexts instead.
- Enable query stripping so that unused objects are stripped from the query and only the objects actually used in the report hit the DB. With stripping enabled, do not reference an object only in an intermediate calculation and leave it out of the report, as this will throw an error.
- Schedule reports instead of manually refreshing them; scheduled refreshes run on the Job Server rather than the processing server, and the Job Server is less occupied in most cases.
- Employ event-based scheduling, and schedule refreshes for times when the load on the server is low.
- Set the connection timeout limit to the maximum.
- Set a time span after which an idle connection is closed.
- Clone the server: adding another instance of a heavily used server helps distribute the load.
- Disable the search option, as it traverses the complete BO repository before returning results.
- Generate an explain plan for the BO-generated queries at the database level.
- Use table partitioning at the database level.
- Replace null values with dummy values such as zero, and avoid using IS NULL operators in queries.
- Leverage indexing at DB level.
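As a rough analogue of the array fetch size tip above, the snippet below uses the Python DB-API's `cursor.arraysize` with sqlite3 as a stand-in database: a larger batch size means fewer fetch round trips per result set. The table name and row counts are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 2.0) for i in range(1000)])

cur = conn.cursor()
# Analogous to BO's array fetch size universe parameter: a larger
# batch means fewer fetch round trips per result set.
cur.arraysize = 250
cur.execute("SELECT id, amount FROM orders")

batches = 0
while True:
    batch = cur.fetchmany()  # fetches cur.arraysize rows per call
    if not batch:
        break
    batches += 1

print(f"fetched 1000 rows in {batches} batches")
```

With `arraysize = 250`, the 1,000-row result set comes back in 4 batches instead of 1,000 single-row fetches.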
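Several of the database-level tips (indexing, substituting dummy values for nulls rather than filtering with IS NULL, and checking the explain plan) can be sketched together. This is a minimal illustration using sqlite3; the table and index names are hypothetical, and your RDBMS will have its own explain-plan syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 10.0), ("APAC", None), ("AMER", 5.0)])

# Indexing at the DB level: the optimizer can now satisfy
# region filters without a full table scan.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")

# COALESCE substitutes a dummy value (0 here) for missing amounts,
# avoiding IS NULL handling in the report query.
total = conn.execute(
    "SELECT SUM(COALESCE(amount, 0)) FROM sales"
).fetchone()[0]

# EXPLAIN QUERY PLAN is SQLite's equivalent of an explain plan;
# it should show the query using the new index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM sales WHERE region = 'EMEA'"
).fetchall()
print(total, plan)
```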
SAP Business Objects (BO) can play a crucial role in the growth of an organization. It empowers project owners and managers with vital data, enabling them to make strategic business decisions. This makes the performance of the reporting and analytics tool all the more decisive. If you are experiencing any lag in the performance of BO reports, do leverage the information shared in this blog post. We hope these techniques go a long way in fine-tuning the performance of SAP Business Objects reports.
This is all from us. Let us know your views in the comments below.
Until next time!