Database and application sizing guidelines

This topic provides general recommendations for sizing the database and the application.

Database storage

The primary drivers for database size are the documents and the document-related features enabled in the database. The following factors determine the storage estimate (see the sketch after this list):

  • Number of users: The number of users drives the size of the use log.
  • Number of cases: Expected number of cases.
  • Number of documents: Expected number of documents in the archive.
  • Number of processes: Expected number of processes.
  • Average size of documents: Average size across all document types.
  • Free text index enabled: Whether free text indexing is enabled. Use a factor of 1 for no free text indexing and 1.5 when free text indexing is enabled.
  • Rendition activated on documents: Whether PDF Crawler generates PDF renditions for all documents. Use a factor of 1 for no renditions and 2 for renditions.
  • Version settings enabled on documents: If version settings are enabled, the storage needs increase as well.
    • Average versions per versioned document: It can be between 0 – 100. Estimate your average number of versions.
    • Percentage of versioned documents: Many documents will never be versioned, for example, archived documents from external sources. Therefore, estimate a percentage of documents that will be versioned.
  • Expected yearly growth: To prepare the database for growth, you can calculate the extents needed for an on-premises database on a yearly basis.
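As a hedged illustration only, the sketch below combines these factors into a rough document storage estimate. The formula and all default values are assumptions for this example, not an official WorkZone sizing model; replace them with your own measurements.

```python
# Rough, illustrative storage estimate built from the factors above.
# The formula is an assumption for illustration, not an official sizing model.

def estimate_document_storage_gb(
    num_documents: int,              # expected number of documents in the archive
    avg_document_size_mb: float,     # average size across all document types
    free_text_factor: float = 1.0,   # 1.0 = no free text index, 1.5 = free text index enabled
    rendition_factor: float = 1.0,   # 1.0 = no renditions, 2.0 = PDF renditions enabled
    versioned_share: float = 0.0,    # fraction of documents that will be versioned (0.0 - 1.0)
    avg_versions: float = 0.0,       # average versions per versioned document (0 - 100)
    yearly_growth: float = 0.10,     # expected yearly growth, e.g. 0.10 = 10 %
    years: int = 3,                  # planning horizon in years
) -> float:
    """Return an approximate document storage need in GB for the planning horizon."""
    base_mb = num_documents * avg_document_size_mb
    # Versioning adds extra copies for the share of documents that are versioned.
    version_mb = base_mb * versioned_share * avg_versions
    total_mb = (base_mb + version_mb) * free_text_factor * rendition_factor
    # Apply compound yearly growth over the planning horizon.
    total_mb *= (1 + yearly_growth) ** years
    return total_mb / 1024


if __name__ == "__main__":
    # Example: 1,000,000 documents of 0.5 MB, free text index and renditions enabled,
    # 20 % of documents versioned with 3 versions on average, 10 % yearly growth, 3 years.
    gb = estimate_document_storage_gb(
        num_documents=1_000_000,
        avg_document_size_mb=0.5,
        free_text_factor=1.5,
        rendition_factor=2.0,
        versioned_share=0.2,
        avg_versions=3,
    )
    print(f"Estimated document storage: {gb:,.0f} GB")
```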

Database CPU

The cost driver for databases is CPU. Calculating the CPU needs for a WorkZone database can be difficult due to the different workloads that you can activate in WorkZone.

The CPU performance can differ based on the platform chosen for running the database.

Important: Workflows with automation can have a different performance pattern on both the database and the application. Because of that, they should be calculated differently.

CPU estimates for Oracle ADB

For Oracle ADB, the cost factor is ECPU. The advantage of the ADB ECPU is that it is scalable, which allows you to make a more dynamic setup and, to a higher degree, pay for what you use. For average workloads, the current rule of thumb is that 2 ECPUs can service up to 200-300 users. Configuring this with auto-scaling also allows you to handle peak workloads.

You can then scale the estimate in steps for the same type of workload: 4 ECPUs equals 400-600 users, and so forth. It is important to monitor the actual load so that you do not overprovision the CPU, but this should give you a baseline idea of the costs.

Note: One Oracle OCI OCPU equals 4 ECPUs. The minimum ECPU that can be used for an Oracle ADB instance is 2 ECPUs.
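As a rough, hedged illustration, the sketch below turns the rule of thumb above (roughly 200-300 users per 2 ECPUs, a minimum of 2 ECPUs, and 1 OCI OCPU = 4 ECPUs) into a simple calculation. The function name and the chosen midpoint are assumptions for this example; always validate the estimate against monitored load.

```python
import math

# Illustrative only: derived from the rule of thumb above
# (roughly 200-300 users per 2 ECPUs, minimum 2 ECPUs, 1 OCI OCPU = 4 ECPUs).

USERS_PER_2_ECPU = 250   # assumed midpoint of the 200-300 range
MIN_ECPUS = 2            # minimum ECPUs for an Oracle ADB instance


def estimate_adb_ecpus(expected_users: int) -> int:
    """Return a baseline ECPU count for an average workload; use auto-scaling for peaks."""
    steps = math.ceil(expected_users / USERS_PER_2_ECPU)
    return max(MIN_ECPUS, steps * 2)


if __name__ == "__main__":
    for users in (150, 500, 1200):
        ecpus = estimate_adb_ecpus(users)
        print(f"{users} users -> ~{ecpus} ECPUs (~{ecpus / 4:.1f} OCPUs)")
```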

CPU estimates for Oracle EE and SE

For Oracle EE and SE, the cost factor is OCPU. The challenge with this model is that you have to estimate the OCPU usage up to the expected maximum usage. Also, the OCPU performance can differ based on the Oracle platform in use. For average workloads on this model, the current rule of thumb is that 1 OCPU equals a maximum of 300-400 users.

Note: One Oracle on-premises OCPU equals 2 vCPUs for x86-based platforms.
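A similar hedged sketch for the OCPU model, based on the rule of thumb above (a maximum of 300-400 users per OCPU, and 1 on-premises OCPU = 2 x86 vCPUs). The midpoint and function name are assumptions for this example.

```python
import math

# Illustrative only: based on the rule of thumb above
# (roughly 300-400 users per OCPU at most; 1 on-premises OCPU = 2 x86 vCPUs).

USERS_PER_OCPU = 350  # assumed midpoint of the 300-400 range


def estimate_oracle_ocpus(peak_users: int) -> int:
    """Return the estimated OCPUs for the expected peak (maximum) number of users."""
    return max(1, math.ceil(peak_users / USERS_PER_OCPU))


if __name__ == "__main__":
    peak = 1000
    ocpus = estimate_oracle_ocpus(peak)
    print(f"{peak} peak users -> ~{ocpus} OCPUs (~{ocpus * 2} x86 vCPUs)")
```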

Application Kubernetes sizing guidelines

The sizing of your WorkZone Kubernetes instance is important for ensuring availability and scalability of your workload.

An installation using the full master Helm chart, including all WorkZone Pods, has the following requirements for storage, CPU, and memory.

Requirements for running all WorkZone Pods

Node type          Number of Pods                      Storage required*   CPU required*   Memory required*
Linux node(s)      8 WorkZone Pods, 72 Control Pods    200 GB              8 vCPUs         32 GiB
Windows node(s)    54 Pods                             400 GB              12 vCPUs        48 GiB

*Numbers are approximate and for guidance only. They will fluctuate as Pods change in size and requirements.

Requirements for scalable containers

Several of the containers are scalable. Depending on the workloads serviced by the Kubernetes instance, the scaling pattern can differ. For example:

  • Workloads with intense automation can require scaling of the Process-related containers.
  • Workloads that generate many documents can require scaling of the PDF service.
  • Workloads with heavy integrations can require scaling of the OData service.
  • Heavy user interaction can require scaling of the Client.

The patterns for scaling containers must be an assessment of the individual workloads, preferably by using a platform with auto-scaling capabilities and monitoring the behavior of the workloads.

For more detailed sizing, you must assess the expected behavior of the instances and make a more granular calculation. See WorkZone containers.

Sizing guidelines for nodes

Depending on the purpose, workloads and availability of the WorkZone instance, there are different sizing guidelines.

Basic workloads (Dev and Test)

For basic workloads for testing or development with no availability expectations, use the Requirements for running all WorkZone Pods table above.

High availability workloads (small production)

Workloads that require high availability but only have a small production load must still be able to perform, for example, rolling upgrades. The sizing therefore needs to account for the extra CPU and memory required for running two Pods during the upgrade flow. For such workloads, the guidance is to double the requirements from the Requirements for running all WorkZone Pods table above.

High scalability (medium to large production and workloads)

Calculating for scalability is difficult, because it is hard to predict which Pods will require scaling for the expected workloads. We suggest a simplified calculation model. A general recommendation is to monitor auto-scaling behavior and add more nodes to your cluster accordingly.

One model is to start with the numbers calculated for high availability above, make an assessment of the expected scaling, and then add sizing from the Requirements for scalable containers for the expected scale. This should provide you with adequate room for scaling.
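As a hedged sketch of this model, the example below starts from the high availability numbers (the doubled base requirements) and adds headroom for the containers that are expected to scale. The per-replica figures, container names, and pool assignments are placeholders only; take the real numbers from WorkZone containers.

```python
# Illustrative sketch of the simplified calculation model described above:
# start from the high availability numbers (doubled base requirements) and add
# headroom for the containers that are expected to scale. All per-replica figures,
# container names, and pool assignments below are placeholders.

BASE_REQUIREMENTS = {
    "linux":   {"cpu_vcpus": 8,  "memory_gib": 32, "storage_gb": 200},
    "windows": {"cpu_vcpus": 12, "memory_gib": 48, "storage_gb": 400},
}

# Hypothetical per-extra-replica requirements and pool placement.
SCALABLE_CONTAINERS = {
    "pdf-service":   {"pool": "windows", "cpu_vcpus": 1.0, "memory_gib": 2.0},
    "odata-service": {"pool": "windows", "cpu_vcpus": 0.5, "memory_gib": 1.0},
}


def size_for_scalability(extra_replicas: dict) -> dict:
    """High availability baseline (double the base) plus expected scaling headroom."""
    sized = {
        pool: {key: value * 2 for key, value in reqs.items()}
        for pool, reqs in BASE_REQUIREMENTS.items()
    }
    for container, count in extra_replicas.items():
        spec = SCALABLE_CONTAINERS[container]
        sized[spec["pool"]]["cpu_vcpus"] += spec["cpu_vcpus"] * count
        sized[spec["pool"]]["memory_gib"] += spec["memory_gib"] * count
    return sized


if __name__ == "__main__":
    # Example: expect 3 extra PDF service replicas and 2 extra OData replicas.
    for pool, reqs in size_for_scalability({"pdf-service": 3, "odata-service": 2}).items():
        print(pool, reqs)
```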

Node strategy

Currently WorkZone requires both Linux and Windows nodes. This puts requirements on the orchestration platform, as well as the node sizing strategy.

Depending on the expected use and workload of the WorkZone instance that you are deploying, the node strategy can vary. In general, fewer nodes lower the guaranteed availability, and more nodes provide higher availability.

  • For dev and test workloads, node sizes and numbers are more flexible, as these workloads are less critical and require less availability.
  • For production workloads, the recommendation is a minimum of 3 nodes of each OS, that is, 3 Linux and 3 Windows nodes.

You can use the sizing guidelines above to decide the combined node sizes.

Using the sizing guidelines, you can calculate the total storage, memory, and CPU needed for both the Linux and Windows nodes. When deciding the number of nodes, you should, as a minimum, size the cluster so that at least one node can fail without affecting your workload.
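As a hedged sketch of such a calculation, the example below spreads a total requirement for one OS across a number of nodes while keeping enough capacity for at least one node to fail. The totals in the example are placeholders; use the results of your own sizing.

```python
import math

# Illustrative sketch: size the individual nodes of one OS so that the remaining
# nodes can still carry the full workload if one node fails.


def per_node_capacity(total_required: dict, node_count: int, tolerated_failures: int = 1) -> dict:
    """Spread a total requirement over node_count nodes, keeping capacity for failures."""
    if node_count <= tolerated_failures:
        raise ValueError("Need more nodes than tolerated failures")
    usable_nodes = node_count - tolerated_failures
    return {key: math.ceil(value / usable_nodes) for key, value in total_required.items()}


if __name__ == "__main__":
    # Placeholder totals for the Windows node pool, taken from your own sizing.
    windows_total = {"cpu_vcpus": 24, "memory_gib": 96, "storage_gb": 800}
    print("Per Windows node (3 nodes, 1 may fail):",
          per_node_capacity(windows_total, node_count=3))
```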

Important: These are examples only. Actual recommendations for your setup can vary: You might need to run multiple WorkZone instances in the same cluster, or have other workloads than WorkZone in the same cluster, or require a different availability.