Cloudmersive Private Cloud AI Server GPU and Hardware Requirements
8/24/2025 - Cloudmersive Support


Cloudmersive Private Cloud AI Server is the common AI base platform behind Cloudmersive AI APIs and is required to run most of them.

Cloudmersive Private Cloud AI Server has specific hardware requirements to run (a verification sketch follows this list):

  • CPU: 4 Cores Minimum
  • RAM: 128 GB Minimum
  • GPU: 1× NVIDIA L4 or A10 with 24 GB GPU RAM Minimum
  • Disk: 500 GB SSD
  • Operating System: Linux - Red Hat Enterprise Linux (RHEL) 10, Debian 11, or Ubuntu Server 24.04
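Before installing, it can be useful to confirm that a host actually meets these minimums. Below is a minimal, hypothetical check script (Linux-only, relying on /proc/meminfo and nvidia-smi); the thresholds are taken from the list above, and the script itself is not part of the Cloudmersive product or installer.

```python
#!/usr/bin/env python3
"""Pre-install check against the minimums listed above (4 CPU cores,
128 GB RAM, 24 GB NVIDIA GPU RAM, 500 GB disk). Illustrative sketch only."""

import os
import shutil
import subprocess

MIN_CORES, MIN_RAM_GB, MIN_GPU_GB, MIN_DISK_GB = 4, 128, 24, 500

def total_ram_gb() -> float:
    # Read MemTotal (reported in kB) from /proc/meminfo; Linux only.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / 1024 ** 2
    return 0.0

def largest_gpu_gb() -> float:
    # Ask nvidia-smi for GPU memory in MiB; returns 0 if no NVIDIA GPU/driver.
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return max((float(mib) / 1024 for mib in out.split()), default=0.0)
    except (FileNotFoundError, subprocess.CalledProcessError):
        return 0.0

checks = {
    "CPU cores >= 4":   (os.cpu_count() or 0) >= MIN_CORES,
    "RAM >= 128 GB":    total_ram_gb() >= MIN_RAM_GB * 0.9,   # allow GiB/GB rounding
    "GPU RAM >= 24 GB": largest_gpu_gb() >= MIN_GPU_GB * 0.9,
    "Disk >= 500 GB":   shutil.disk_usage("/").total / 1e9 >= MIN_DISK_GB,
}

for name, ok in checks.items():
    print(f"{'PASS' if ok else 'FAIL'}  {name}")
```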

For faster performance, customers can consider increasing the GPU to:

  • GPU: 1× NVIDIA L40 with 48 GB GPU RAM

For fastest performance and throughput, customers can increase the GPU to:

  • GPU: 1× NVIDIA A100 with 80 GB GPU RAM

These guidelines also apply to Cloudmersive Managed Instance.

Cloud Deployment Guidelines

Microsoft Azure

  • Standard_NV36ads_A10_v5 — 36 vCPU, 440 GiB RAM, 1× A10 24 GB
  • Standard_NC24ads_A100_v4 — 24 vCPU, 220 GiB RAM, 1× A100 80 GB

Amazon Web Services

  • gr6.4xlarge — 16 vCPU, 128 GiB RAM, 1× L4 (24 GB)

Google Cloud Platform

  • g2-standard-32 — 32 vCPU, 128 GB RAM, 1× L4 (24 GB)
  • a2-ultragpu-1g — 12 vCPU, 170 GB RAM, 1× A100 80 GB

Oracle Cloud Infrastructure

  • VM.GPU.A10.1 — 15 OCPUs (~30 vCPU), 240 GB RAM, 1× A10 24 GB
  • VM.GPU.A100.80G.1 — 1× A100 80 GB
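If you script your provisioning, the instance types above can be kept in a small lookup table. The sketch below simply restates this section's recommendations as data; the dictionary name, provider keys, and tier labels are illustrative and not a Cloudmersive API.

```python
# The instance types recommended above, keyed by provider and GPU tier.
# "minimum" = L4/A10 24 GB class, "fastest" = A100 80 GB class.
# Illustrative only; names mirror this article, not a Cloudmersive API.
RECOMMENDED_SHAPES = {
    "azure": {"minimum": "Standard_NV36ads_A10_v5", "fastest": "Standard_NC24ads_A100_v4"},
    "aws":   {"minimum": "gr6.4xlarge"},            # only an L4 shape is listed above
    "gcp":   {"minimum": "g2-standard-32",          "fastest": "a2-ultragpu-1g"},
    "oci":   {"minimum": "VM.GPU.A10.1",            "fastest": "VM.GPU.A100.80G.1"},
}

# Example: look up the fastest recommended GCP machine type.
print(RECOMMENDED_SHAPES["gcp"]["fastest"])  # a2-ultragpu-1g
```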

On-Premises Server Guidelines

Dell

  • PowerEdge R660 (1U)
  • PowerEdge R760 (2U)
  • PowerEdge R760xa

HPE

  • ProLiant DL320 Gen11 (1U)
  • ProLiant DL380 Gen11 (2U)
  • ProLiant DL380 Gen10 Plus (2U)
  • ProLiant DL580 Gen10 (4U)

Frequently Asked Questions

Are GPUs from other manufacturers (e.g. AMD, Google TPU, etc.) supported?

Not at this time. Cloudmersive Private Cloud currently requires NVIDIA GPUs because of its reliance on CUDA and other NVIDIA-specific architectural features.

Are multiple GPUs supported?

Yes. You can either scale up with multiple GPUs in a single server, or scale out with multiple servers that each have one GPU. We recommend a symmetrical deployment, i.e., all servers should have the same hardware configuration.

Can different size GPUs be used for pre-production and production?

Yes, you can use more cost-efficient GPUs in pre-production (e.g. L4) and higher-powered GPUs in production (e.g. A100).
