Hana Database Size

SAP HANA is an in-memory, column-oriented, relational database management system developed and marketed by SAP SE. Because all data is held in memory, there is no need to load it from disk, which makes the database very fast. SAP HANA can serve both OLAP (online analytical processing) and OLTP (online transaction processing) workloads on a single database, and the database consists of a set of in-memory processing engines. Its primary function as a database server is to store and retrieve data as requested by applications. Since the introduction of the SAP HANA database in 2008, our experience indicates that SAP HANA Runtime is the more commonly purchased database license. It is also the more restrictive one, since customers can only leverage it for SAP applications, and the underlying SAP and non-SAP data can only be loaded, exported, and managed via SAP technologies.


Azure NetApp Files provides native NFS shares that can be used for the /hana/shared, /hana/data, and /hana/log volumes. Using ANF-based NFS shares for the /hana/data and /hana/log volumes requires the NFS v4.1 protocol; NFS v3 is not supported for these volumes when basing the shares on ANF.

Important

The NFS v3 protocol implemented on Azure NetApp Files is not supported for /hana/data and /hana/log. The use of NFS v4.1 is mandatory for the /hana/data and /hana/log volumes from a functional point of view. For the /hana/shared volume, either the NFS v3 or the NFS v4.1 protocol can be used from a functional point of view.

Important considerations

When considering Azure NetApp Files for SAP NetWeaver and SAP HANA, be aware of the following important considerations:

  • The minimum capacity pool is 4 TiB
  • The minimum volume size is 100 GiB
  • Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes are mounted, must be in the same Azure Virtual Network or in peered virtual networks in the same region
  • It is important to have the virtual machines deployed in close proximity to the Azure NetApp storage for low latency.
  • The selected virtual network must have a subnet, delegated to Azure NetApp Files
  • Make sure the latency from the database server to the ANF volume is measured and below 1 millisecond
  • The throughput of an Azure NetApp volume is a function of the volume quota and Service level, as documented in Service level for Azure NetApp Files. When sizing the HANA Azure NetApp volumes, make sure the resulting throughput meets the HANA system requirements
  • Try to “consolidate” volumes to achieve more performance in a larger volume: for example, use one volume for /sapmnt, /usr/sap/trans, … if possible
  • Azure NetApp Files offers export policies: you can control the allowed clients and the access type (read & write, read only, and so on)
  • The Azure NetApp Files feature isn't zone aware yet, and currently isn't deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions
  • The User ID for sidadm and the Group ID for sapsys on the virtual machines must match the configuration in Azure NetApp Files.

Important

For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity.

Important

If there is a mismatch between the User ID for sidadm and the Group ID for sapsys between the virtual machine and the Azure NetApp configuration, the permissions for files on Azure NetApp volumes mounted to the VM would be displayed as nobody. Make sure to specify the correct User ID for sidadm and the Group ID for sapsys when onboarding a new system to Azure NetApp Files.

Sizing for HANA database on Azure NetApp Files

The throughput of an Azure NetApp volume is a function of the volume size and Service level, as documented in Service level for Azure NetApp Files.

It is important to understand the performance relationship between volume size and throughput, and that there are physical limits for a LIF (Logical Interface) of the SVM (Storage Virtual Machine).

The table below demonstrates that it can make sense to create a large “Standard” volume to store backups, and that it does not make sense to create an “Ultra” volume larger than 12 TB, because the physical bandwidth capacity of a single LIF would be exceeded.

The maximum throughput for a LIF and a single Linux session is between 1.2 and 1.4 GB/s.

| Size  | Throughput Standard | Throughput Premium | Throughput Ultra |
|-------|---------------------|--------------------|------------------|
| 1 TB  | 16 MB/sec           | 64 MB/sec          | 128 MB/sec       |
| 2 TB  | 32 MB/sec           | 128 MB/sec         | 256 MB/sec       |
| 4 TB  | 64 MB/sec           | 256 MB/sec         | 512 MB/sec       |
| 10 TB | 160 MB/sec          | 640 MB/sec         | 1,280 MB/sec     |
| 15 TB | 240 MB/sec          | 960 MB/sec         | 1,400 MB/sec     |
| 20 TB | 320 MB/sec          | 1,280 MB/sec       | 1,400 MB/sec     |
| 40 TB | 640 MB/sec          | 1,400 MB/sec       | 1,400 MB/sec     |
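The size-to-throughput relationship in the table can be sketched as a small helper. This is a minimal illustration: the per-TB factors of 16/64/128 MB/sec and the roughly 1,400 MB/sec single-LIF cap are taken from the figures above and should not be treated as authoritative service guarantees.

```python
# Minimal sketch of the ANF size-to-throughput relationship described above.
# The per-TB factors and the ~1,400 MB/sec LIF cap are the illustrative
# values from the table in this article, not official service guarantees.
MB_PER_SEC_PER_TB = {"Standard": 16, "Premium": 64, "Ultra": 128}
LIF_CAP_MB_PER_SEC = 1400  # approximate physical limit of a single LIF

def volume_throughput(size_tb: float, service_level: str) -> float:
    """Expected throughput (MB/sec) for a volume of a given size and level."""
    raw = size_tb * MB_PER_SEC_PER_TB[service_level]
    # Throughput plateaus once the single-LIF bandwidth is reached.
    return min(raw, LIF_CAP_MB_PER_SEC)
```

For example, `volume_throughput(20, "Ultra")` returns 1400, reflecting the LIF plateau visible in the 15 TB, 20 TB, and 40 TB rows of the table.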

It is important to understand that the data is written to the same SSDs in the storage backend. The performance quota of the capacity pool was created to make the environment manageable, and it implies that the storage KPIs are equal for all HANA database sizes. In almost all cases, this assumption does not reflect reality or customer expectations. A small HANA system does not necessarily require low storage throughput, nor a large system high storage throughput. But generally we can expect higher throughput requirements for larger HANA database instances: as a result of SAP's sizing rules for the underlying hardware, larger HANA instances also provide more CPU resources and higher parallelism in tasks like loading data after an instance restart. Consequently, the volume sizes should be adapted to the customer's expectations and requirements, and not driven purely by capacity requirements.

As you design the infrastructure for SAP in Azure, you should be aware of some minimum storage throughput requirements by SAP for production systems, which translate into minimum throughput characteristics of:

| Volume type and I/O type | Minimum KPI demanded by SAP | Premium service level | Ultra service level |
|--------------------------|-----------------------------|-----------------------|---------------------|
| Log Volume Write         | 250 MB/sec                  | 4 TB                  | 2 TB                |
| Data Volume Write        | 250 MB/sec                  | 4 TB                  | 2 TB                |
| Data Volume Read         | 400 MB/sec                  | 6.3 TB                | 3.2 TB              |

Since the /hana/data volume must meet both the write and the read KPI, it needs to be sized toward the larger capacity to fulfill the minimum read requirement.
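The capacities in the KPI table follow directly from the per-TB throughput factors. A minimal sketch, assuming the illustrative per-TB values of 64 MB/sec (Premium) and 128 MB/sec (Ultra) used earlier in this article:

```python
import math

# Minimal sizing sketch: smallest volume (TB) that meets a throughput KPI at
# a given ANF service level. The per-TB factors are the illustrative values
# used earlier in this article.
MB_PER_SEC_PER_TB = {"Standard": 16, "Premium": 64, "Ultra": 128}

def min_volume_tb(kpi_mb_per_sec: float, service_level: str) -> float:
    """Smallest volume size (TB, rounded up to 0.1 TB) meeting the KPI."""
    raw = kpi_mb_per_sec / MB_PER_SEC_PER_TB[service_level]
    return math.ceil(raw * 10) / 10

def data_volume_tb(service_level: str) -> float:
    """/hana/data must satisfy both the 250 MB/sec write KPI and the
    400 MB/sec read KPI, so it is sized toward the larger (read) capacity."""
    return max(min_volume_tb(250, service_level), min_volume_tb(400, service_level))
```

Running `data_volume_tb("Premium")` reproduces the 6.3 TB figure from the table, and `data_volume_tb("Ultra")` reproduces 3.2 TB.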

For HANA systems that do not require high bandwidth, the ANF volumes can be smaller. And if a HANA system requires more throughput, the volume can be adapted by resizing the capacity online. No KPIs are defined for backup volumes; however, the backup volume throughput is essential for a well-performing environment. Log and data volume performance must be designed to the customer's expectations.

Important

Independent of the capacity you deploy on a single NFS volume, the throughput is expected to plateau in the range of 1.2 to 1.4 GB/sec of bandwidth leveraged by a consumer in a virtual machine. This has to do with the underlying architecture of the ANF offer and related Linux session limits around NFS. The performance and throughput numbers documented in the article Performance benchmark test results for Azure NetApp Files were obtained against one shared NFS volume with multiple client VMs, and as a result with multiple sessions. That scenario differs from the scenario we measure in SAP, where throughput is measured from a single VM against a single NFS volume hosted on ANF.

To meet the SAP minimum throughput requirements for data and log, and according to the guidelines for /hana/shared, the recommended sizes would look like:

| Volume                 | Size, Premium Storage tier                | Size, Ultra Storage tier                  | Supported NFS protocol |
|------------------------|-------------------------------------------|-------------------------------------------|------------------------|
| /hana/log              | 4 TiB                                     | 2 TiB                                     | v4.1                   |
| /hana/data             | 6.3 TiB                                   | 3.2 TiB                                   | v4.1                   |
| /hana/shared scale-up  | Min(1 TB, 1 x RAM)                        | Min(1 TB, 1 x RAM)                        | v3 or v4.1             |
| /hana/shared scale-out | 1 x RAM of worker node per 4 worker nodes | 1 x RAM of worker node per 4 worker nodes | v3 or v4.1             |
| /hana/logbackup        | 3 x RAM                                   | 3 x RAM                                   | v3 or v4.1             |
| /hana/backup           | 2 x RAM                                   | 2 x RAM                                   | v3 or v4.1             |

For all volumes, NFS v4.1 is highly recommended.

The sizes for the backup volumes are estimations. Exact requirements need to be defined based on workload and operational processes. For backups, you could consolidate many volumes for different SAP HANA instances into one (or two) larger volumes, which could have a lower ANF service level.
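The RAM-based rules in the sizing table above can be expressed as a small helper. This is a hedged sketch for a scale-up system: the function name is hypothetical, `ram_tb` is the HANA instance memory in TB, and the Min(1 TB, 1 x RAM) and the 3x/2x RAM multipliers are the recommendations from the table.

```python
# Hedged sketch of the RAM-based volume sizing rules recommended above for a
# scale-up HANA system. ram_tb is the RAM of the HANA instance in TB; the
# function name is a hypothetical helper, not an official API.
def recommended_sizes_scale_up(ram_tb: float) -> dict:
    """Recommended ANF volume sizes (TB) derived from instance RAM."""
    return {
        "/hana/shared": min(1.0, ram_tb),   # Min(1 TB, 1 x RAM)
        "/hana/logbackup": 3 * ram_tb,      # 3 x RAM
        "/hana/backup": 2 * ram_tb,         # 2 x RAM
    }
```

The /hana/data and /hana/log sizes are omitted here because they are driven by the throughput KPIs, not by RAM.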


Note

The Azure NetApp Files sizing recommendations stated in this document target the minimum requirements SAP expresses towards their infrastructure providers. In real customer deployments and workload scenarios, that may not be enough. Use these recommendations as a starting point and adapt them based on the requirements of your specific workload.

Therefore, you could consider deploying similar throughput for the ANF volumes as listed for Ultra disk storage. Also consider the sizes listed for the volumes for the different VM SKUs, as done in the Ultra disk tables.

Tip

You can resize Azure NetApp Files volumes dynamically, without the need to unmount the volumes, stop the virtual machines, or stop SAP HANA. That gives you the flexibility to meet both the expected and unforeseen throughput demands of your application.

Documentation on how to deploy an SAP HANA scale-out configuration with a standby node using NFS v4.1 volumes hosted in ANF is published in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server.

Availability

ANF system updates and upgrades are applied without impacting the customer environment. The defined SLA is 99.99%.

Volumes and IP addresses and capacity pools

With ANF, it is important to understand how the underlying infrastructure is built. A capacity pool is only a structure that makes it simpler to create a billing model for ANF; it has no physical relationship to the underlying infrastructure. If you create a capacity pool, only a shell that can be charged for is created, nothing more. When you create a volume, the first SVM (Storage Virtual Machine) is created on a cluster of several NetApp systems, and a single IP address is created for this SVM to access the volume. If you create several volumes, all of them are distributed within this SVM over the multi-controller NetApp cluster. Even if you get only one IP address, the data is distributed over several controllers. ANF has a logic that automatically distributes customer workloads once the volumes and/or capacity of the configured storage reach an internal predefined level. You might notice such cases because a new IP address gets assigned to access the volumes.

Log volume and log backup volume

The “log volume” (/hana/log) is used to write the online redo log. Thus, there are open files located in this volume, and it makes no sense to snapshot this volume. Online redo log files are archived or backed up to the log backup volume once the online redo log file is full or a redo log backup is executed. To provide reasonable backup performance, the log backup volume requires good throughput. To optimize storage costs, it can make sense to consolidate the log backup volumes of multiple HANA instances, so that multiple HANA instances leverage the same volume and write their backups into different directories. With such a consolidation, you get more throughput, because the consolidated volume has to be made a bit larger.

The same applies to the volume you use to write full HANA database backups to.


Backup

Besides streaming backups and the Azure Backup service backing up SAP HANA databases, as described in the article Backup guide for SAP HANA on Azure Virtual Machines, Azure NetApp Files opens the possibility to perform storage-based snapshot backups.

SAP HANA supports:

  • Storage-based snapshot backups from SAP HANA 1.0 SPS7 on
  • Storage-based snapshot backup support for Multi Database Container (MDC) HANA environments from SAP HANA 2.0 SPS4 on

Creating storage-based snapshot backups is a simple four-step procedure:

  1. Creating a HANA (internal) database snapshot - an activity you or tools need to perform
  2. SAP HANA writes data to the datafiles to create a consistent state on the storage - HANA performs this step as a result of creating a HANA snapshot
  3. Create a snapshot on the /hana/data volume on the storage - a step you or tools need to perform. There is no need to perform a snapshot on the /hana/log volume
  4. Delete the HANA (internal) database snapshot and resume normal operation - a step you or tools need to perform

Warning

Missing the last step or failing to perform the last step has severe impact on SAP HANA's memory demand and can lead to a halt of SAP HANA
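The HANA side of the four steps above can be sketched with the SQL statements HANA uses for snapshot handling. This is a minimal sketch: the statements shown are the system-wide MDC form (SAP HANA 2.0 SPS4 and later), the helper names are hypothetical, and step 3 (the storage snapshot on the /hana/data volume) happens through Azure/ANF APIs, not through SQL.

```python
# Hedged sketch of the HANA-side statements for the four-step storage
# snapshot procedure described above (system-wide MDC form). The Python
# helper names are hypothetical; real tooling such as the ntaphana sample
# script wraps these statements together with the ANF snapshot call.

# Step 1/2: create the HANA-internal database snapshot; HANA then writes a
# consistent state to the data files.
PREPARE_SNAPSHOT = (
    "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'ANF snapshot'"
)

def close_snapshot_sql(backup_id: int, external_id: str) -> str:
    """Step 4: confirm and delete the HANA-internal snapshot so HANA
    resumes normal operation (skipping this step drives up HANA's memory
    demand, as the warning above explains)."""
    return (
        f"BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID {backup_id} "
        f"SUCCESSFUL '{external_id}'"
    )
```

In practice these statements would be executed against the SYSTEMDB, for example via hdbsql, with the ANF snapshot of /hana/data taken between them.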

This snapshot backup procedure can be managed in a variety of ways, using various tools. One example is the Python script “ntaphana_azure.py” available on GitHub at https://github.com/netapp/ntaphana. This is sample code, provided “as is” without any maintenance or support.

Caution

A snapshot in itself is not a protected backup, since it is located on the same physical storage as the volume you just took a snapshot of. It is mandatory to “protect” at least one snapshot per day to a different location. This can be done in the same environment, in a remote Azure region, or on Azure Blob storage.

For users of Commvault backup products, a second option is Commvault IntelliSnap V.11.21 and later, which offers Azure NetApp Files support. The article Commvault IntelliSnap 11.21 provides more information.


Back up the snapshot using Azure blob storage

Backing up to Azure Blob storage is a cost-effective and fast method to save ANF-based HANA database storage snapshot backups. To save the snapshots to Azure Blob storage, the azcopy tool is preferred. Download the latest version of this tool and install it, for example, in the bin directory where the Python script from GitHub is installed.

The most advanced feature is the SYNC option. If you use the SYNC option, azcopy keeps the source and the destination directory synchronized. The use of the parameter --delete-destination is important: without this parameter, azcopy does not delete files at the destination site, and the space utilization on the destination side would grow. Create a block blob container in your Azure storage account, then create the SAS key for the blob container and synchronize the snapshot folder to the Azure Blob container.

For example, if a daily snapshot should be synchronized to the Azure Blob container to protect the data, and only that one snapshot should be kept, a command like the following can be used.
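A sketch of such an azcopy sync invocation follows. The local snapshot path, storage account, container name, and SAS token are placeholders to substitute with your own values; only the azcopy sync verb and the --delete-destination flag are the essential parts described above.

```shell
# Hedged sketch: synchronize the local snapshot folder to an Azure Blob
# container. All names, the path, and the SAS token are placeholders.
azcopy sync "/hana/data/<SID>/mnt00001/.snapshot" \
  "https://<storageaccount>.blob.core.windows.net/<container>?<SAS-token>" \
  --recursive=true \
  --delete-destination=true   # remove blobs that no longer exist at the source
```

Because --delete-destination=true is set, snapshots removed on the source side are also removed from the Blob container, so only the retained snapshot consumes space.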

Next steps


Read the article:

The Myths Debunked (Part 4)


Flexibility is crucial for business. But many enterprises are not sure about the degree of scalability that SAP HANA can offer them. The final blog post in this series will reveal the facts behind the memory size limits of the in-memory database platform.

The Myth: SAP HANA Scalability Is Limited

As an enterprise grows, it is important that it has scalable solutions that can meet rising demands. But there are limits to the amount of data that one server can hold. In the early days, there were numerous concerns about the scalability of SAP HANA due to restrictions on server sizes, and some of these doubts still persist today. Businesses fear being ‘locked in’ to a limited database size, as it could lead to difficulties if their needs outgrow the capacity.

The Truth: Memory Size Limits Are No Longer an Issue

In fact, there is now very little truth to this myth. Depending on the application, SAP HANA offers both vertical and horizontal scalability. The maximum RAM for single systems is constantly
growing, even as you read this. Currently, it allows enterprises to scale up to 12TB on Intel x86 technology.

With virtualization, businesses have a highly flexible and easy-to-manage option, although at present this flexibility comes with some sizing limitations. But even virtual environments are
growing rapidly. The IBM Power platform, recently released for SAP HANA by SAP, adds even more dynamism to this development with virtual instance support up to 4TB, and more to come.
So even present capacities are already sufficient for the vast majority of implementations.

In cases when a single server is not sufficient, however, SAP HANA can also be scaled out. This means multiple servers can be operated in a cluster to tackle much larger volumes of data.

While SAP S/4HANA and ERP on SAP HANA are still restricted to one machine, there are virtually no limits for SAP Business Warehouse. Currently, the largest certified configuration is a cluster of 56 servers, offering a colossal 168TB RAM. And even larger configurations are possible.

Finally, technologies such as SAP IQ, Apache Hadoop, or SAP HANA Vora provide ways to seamlessly integrate additional storage capacity for near-line and archive data. This is the foundation
for the storage required to leverage the Internet of Things and Industry 4.0 and the mass of data that they will bring.


Room for Growth

The result is that SAP HANA can in fact be scaled up or scaled out in line with requirements. Although there are limits to maximum memory size, these are more than sufficient for almost any project. Enterprises can rest assured that they will have the storage to match their needs, no matter how large they become.


Review the third debunked myth, on SAP HANA complexity, which covers the following:

  • Whether SAP HANA migration turns businesses upside down
  • Why SAP HANA implementation involves less risk and effort than you think
  • How companies can get support for the switch