
This document introduces the concepts that you need to understand to configure Google Cloud external HTTP(S) Load Balancing.

External HTTP(S) Load Balancing is a proxy-based Layer 7 load balancer that enables you to run and scale your services behind a single external IP address. External HTTP(S) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on a variety of Google Cloud platforms (such as Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, and so on), as well as external backends connected over the internet or via hybrid connectivity. For details, see Use cases.

Modes of operation

You can configure External HTTP(S) Load Balancing in the following modes:

  • Global external HTTP(S) load balancer. This is a global load balancer that is implemented as a managed service on Google Front Ends (GFEs). It uses the open-source Envoy proxy to support advanced traffic management capabilities such as traffic mirroring, weight-based traffic splitting, request/response-based header transformations, and more. This load balancer is currently in Preview.
  • Global external HTTP(S) load balancer (classic). This is the classic external HTTP(S) load balancer that is global in Premium Tier but can be configured to be regional in Standard Tier. This load balancer is implemented on Google Front Ends (GFEs). GFEs are distributed globally and operate together using Google's global network and control plane.
  • Regional external HTTP(S) load balancer. This is a regional load balancer that is implemented as a managed service on the open-source Envoy proxy. It includes advanced traffic management capabilities such as traffic mirroring, weight-based traffic splitting, request/response-based header transformations, and more. This load balancer is currently in Preview.
Load balancer mode Recommended use cases Capabilities
Global external HTTP(S) load balancer (Preview) Use this load balancer for external HTTP(S) workloads with globally dispersed users or backend services in multiple regions.
  • Supports advanced traffic management
  • Global Anycast external IP addresses over Premium Tier
  • Can access backends across multiple regions
  • Supports Cloud CDN and Google Cloud Armor
  • Doesn't support GKE
Global external HTTP(S) load balancer (classic)

This load balancer is global in Premium Tier but can be configured to be effectively regional in Standard Tier.

In the Premium Network Service Tier, this load balancer offers multi-region load balancing, directing traffic to the closest healthy backend that has capacity and terminating HTTP(S) traffic as close as possible to your users.

In the Standard Network Service Tier, load balancing is handled regionally.

  • Supports GKE
  • Fewer traffic routing features.
See the Load balancing features page for the full list of capabilities.
Regional external HTTP(S) load balancer (Preview)

This load balancer contains many of the features of the existing global external HTTP(S) load balancer (classic), along with additional advanced traffic management capabilities.

Use this load balancer if you want to serve content from only one geolocation (for example, to meet compliance regulations) or if the Standard Network Service Tier is desired.

  • Supports advanced traffic management capabilities
  • Regional VIPs using Standard Network Tier
  • Terminates TLS in a single region you configure
  • Serves content from the configured region only
For the complete list, see Load balancing features.

Identifying the mode

Cloud Console

  1. In the Cloud Console, go to the Load balancing page.
    Go to Load balancing
  2. In the Load Balancers tab, the load balancer type, protocol, and region are displayed. If the region is blank, then the load balancer is global. The following table summarizes how to identify the mode of the load balancer.
    Load balancer mode Load balancing type Region Network tier
    Global external HTTP(S) load balancer (Preview) HTTP(S) PREMIUM
    Global external HTTP(S) load balancer (classic) HTTP(S)(Classic) STANDARD or PREMIUM
    Regional external HTTP(S) load balancer (Preview) HTTP(S) Specifies a region STANDARD

gcloud

  1. To determine the mode of a load balancer, run the following command:

    gcloud compute forwarding-rules describe FORWARDING_RULE_NAME

    In the command output, check the load balancing scheme, region, and network tier. The following table summarizes how to identify the mode of the load balancer.

    Load balancer mode Load balancing scheme Forwarding rule Network tier
    Global external HTTP(S) load balancer (Preview) EXTERNAL_MANAGED Global PREMIUM
    Global external HTTP(S) load balancer (classic) EXTERNAL Global STANDARD or PREMIUM
    Regional external HTTP(S) load balancer (Preview) EXTERNAL_MANAGED Specifies a region STANDARD
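As a sketch, you can print just the fields that identify the mode; the forwarding rule name below is hypothetical, and `--global` would be replaced by `--region=REGION` for a regional rule:

```shell
# Hypothetical rule name; prints the scheme and network tier that
# identify the load balancer mode per the table above.
gcloud compute forwarding-rules describe my-https-rule \
    --global \
    --format="value(loadBalancingScheme,networkTier)"
```

For a global external HTTP(S) load balancer this should print EXTERNAL_MANAGED and PREMIUM, while the classic load balancer prints EXTERNAL.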

Architecture

The following resources are required for an HTTP(S) Load Balancing deployment:

  • For regional external HTTP(S) load balancers only, a proxy-only subnet is used to send connections from the load balancer to the backends.

  • An external forwarding rule specifies an external IP address, port, and target HTTP(S) proxy. Clients use the IP address and port to connect to the load balancer.

  • A target HTTP(S) proxy receives a request from the client. The HTTP(S) proxy evaluates the request by using the URL map to make traffic routing decisions. The proxy can also authenticate communications by using SSL certificates.

    • For HTTPS load balancing, the target HTTPS proxy uses SSL certificates to prove its identity to clients. A target HTTPS proxy supports up to the documented number of SSL certificates.
  • The HTTP(S) proxy uses a URL map to make a routing decision based on HTTP attributes (such as the request path, cookies, or headers). Based on the routing decision, the proxy forwards client requests to specific backend services or backend buckets. The URL map can specify additional actions, such as sending redirects to clients.

  • A backend service distributes requests to healthy backends. The global external HTTP(S) load balancers also support backend buckets.

    • One or more backends must be connected to the backend service or backend bucket.
  • A health check periodically monitors the readiness of your backends. This reduces the risk that requests might be sent to backends that can't service the request.

  • Firewall rules for your backends to accept health check probes. Regional external HTTP(S) load balancers require an additional firewall rule to allow traffic from the proxy-only subnet to reach the backends.

Global

This diagram shows the components of a global external HTTP(S) load balancer deployment. This architecture applies to both the global external HTTP(S) load balancer with advanced traffic management capability and the global external HTTP(S) load balancer (classic) in Premium Tier.

Global external HTTP(S) load balancer components

Regional

This diagram shows the components of a regional external HTTP(S) load balancer deployment.

Regional external HTTP(S) load balancer components

Proxy-only subnet

Proxy-only subnets are only required for regional external HTTP(S) load balancers.

The proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. You must create one proxy-only subnet in each region of a VPC network where you use regional external HTTP(S) load balancers. The --purpose flag for this proxy-only subnet is set to REGIONAL_MANAGED_PROXY. All regional external HTTP(S) load balancers in the same region and VPC network share a pool of Envoy proxies from the same proxy-only subnet. Further:

  • Proxy-only subnets are only used for Envoy proxies, not your backends.
  • Backend VMs or endpoints of all regional external HTTP(S) load balancers in a region and VPC network receive connections from the proxy-only subnet.
  • The IP address of the regional external HTTP(S) load balancer is not located in the proxy-only subnet. The load balancer's IP address is defined by its external managed forwarding rule, which is described below.
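A minimal sketch of creating such a proxy-only subnet; the network name, region, and CIDR range below are placeholders:

```shell
# Placeholder network, region, and range; the --purpose flag marks the
# subnet as reserved for the region's managed Envoy proxies.
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-west1 \
    --network=lb-network \
    --range=10.129.0.0/23
```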

Forwarding rules and addresses

Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy, URL map, and one or more backend services.

Each forwarding rule provides a single IP address that can be used in DNS records for your application. No DNS-based load balancing is required. You can either specify the IP address to be used or let Cloud Load Balancing assign one for you.

  • The forwarding rule for an HTTP load balancer can only reference TCP ports 80 and 8080.
  • The forwarding rule for an HTTPS load balancer can only reference TCP port 443.
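For example, a global forwarding rule for an HTTPS load balancer might be created as follows; the rule, proxy, and address names are hypothetical:

```shell
# Hypothetical resource names; the port must be 443 for an HTTPS
# load balancer's forwarding rule.
gcloud compute forwarding-rules create my-https-rule \
    --global \
    --target-https-proxy=my-https-proxy \
    --address=my-static-ip \
    --ports=443
```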

The type of forwarding rule, IP address, and load balancing scheme used by external HTTP(S) load balancers depends on the mode of the load balancer and which Network Service Tier the load balancer is in.

Load balancer mode Network Service Tier Forwarding rule, IP address, and Load balancing scheme Routing from the internet to the load balancer frontend
Global external HTTP(S) load balancer (Preview) Premium Tier

Global external forwarding rule

Global external IP address

Load balancing scheme:
EXTERNAL_MANAGED

Requests routed to the GFE that is closest to the client on the internet.
Global external HTTP(S) load balancer (classic) Premium Tier

Global external forwarding rule

Global external IP address

Load balancing scheme:
EXTERNAL

Requests routed to the GFE that is closest to the client on the internet.
Standard Tier

Regional external forwarding rule

Regional external IP address

Load balancing scheme:
EXTERNAL

Requests routed to a GFE in the load balancer's region.
Regional external HTTP(S) load balancer (Preview) Standard Tier

Regional external forwarding rule

Regional external IP address

Load balancing scheme:
EXTERNAL_MANAGED

Requests routed to the Envoy proxies in the same region as the load balancer.

For the complete list of protocols supported by HTTP(S) Load Balancing forwarding rules in each mode, see Load balancer features.

Target proxies

Target proxies terminate HTTP(S) connections from clients. One or more forwarding rules direct traffic to the target proxy, and the target proxy consults the URL map to determine how to route traffic to backends.

Do not rely on the proxy to preserve the case of request or response header names. For example, a Server: Apache/1.0 response header might appear at the client as server: Apache/1.0.

The following table specifies the type of target proxy required by HTTP(S) Load Balancing in each mode.

Load balancer mode Target proxy types Proxy-added headers Custom headers supported Cloud Trace supported
Global external HTTP(S) load balancer (Preview) Global HTTP,
Global HTTPS
The proxies set HTTP request/response headers as follows:
  • Via: 1.1 google (requests and responses)
  • X-Forwarded-Proto: [http | https] (requests only)
  • X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)
    Contains parameters for Cloud Trace.
  • X-Forwarded-For: [<supplied-value>,]<client-ip>,<load-balancer-ip> (see X-Forwarded-For header) (requests only)
Configured on the backend service or backend bucket

Not supported with Cloud CDN

Global external HTTP(S) load balancer (classic) Global HTTP,
Global HTTPS
The proxies set HTTP request/response headers as follows:
  • Via: 1.1 google (requests and responses)
  • X-Forwarded-Proto: [http | https] (requests only)
  • X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)
    Contains parameters for Cloud Trace.
  • X-Forwarded-For: [<supplied-value>,]<client-ip>,<load-balancer-ip> (see X-Forwarded-For header) (requests only)
Configured on the backend service or backend bucket
Regional external HTTP(S) load balancer (Preview) Regional HTTP,
Regional HTTPS
  • X-Forwarded-Proto: [http | https] (requests only)
  • Via: 1.1 google (requests and responses)
  • X-Forwarded-For: [<supplied-value>,]<client-ip>,<load-balancer-ip> (see X-Forwarded-For header) (requests only)

When the load balancer makes the HTTP request, the load balancer preserves the Host header of the original request.

The load balancer appends two IP addresses separated by a single comma to the X-Forwarded-For header in the following order:

  • The IP address of the client that connects to the load balancer
  • The IP address of the load balancer's forwarding rule

If there is no X-Forwarded-For header on the incoming request, these two IP addresses are the entire header value:

          X-Forwarded-For: <client-ip>,<load-balancer-ip>

If the request includes an X-Forwarded-For header, the load balancer preserves the supplied value before the <client-ip>,<load-balancer-ip>:

          X-Forwarded-For: <supplied-value>,<client-ip>,<load-balancer-ip>

When running HTTP reverse proxy software on the load balancer's backends, the software might append one or both of the following IP addresses to the end of the X-Forwarded-For header:

  • The IP address of the Google Front End (GFE) that connected to the backend. These IP addresses are in the 130.211.0.0/22 and 35.191.0.0/16 ranges.

  • The IP address of the backend system itself.

Thus, an upstream process after your load balancer's backend might receive an X-Forwarded-For header of the form:

          <existing-values>,<client-ip>,<load-balancer-ip>,<GFE-IP>,<backend-IP>
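As a small illustration of the ordering, the client IP as received at the backend (before any reverse proxy appends further hops) is the second-to-last element of the comma-separated header value; the sample addresses below are made up:

```shell
# Sample header value in the form <supplied-value>,<client-ip>,<load-balancer-ip>
xff="203.0.113.7,198.51.100.2,10.1.2.3"

# The client IP that connected to the load balancer is the
# second-to-last comma-separated field.
echo "$xff" | awk -F',' '{print $(NF-1)}'   # prints 198.51.100.2
```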

HTTP/3 and QUIC protocol support

HTTP/3 is a next-generation internet protocol. It is built on top of QUIC, a protocol developed from the original Google QUIC (gQUIC) protocol. HTTP/3 is supported between the external HTTP(S) load balancer, Cloud CDN, and clients.

Specifically:

  • IETF QUIC is a transport layer protocol that provides congestion control similar to TCP and is the security equivalent to SSL/TLS for HTTP/2, with improved performance.
  • HTTP/3 is an application layer protocol built on top of IETF QUIC, and it relies on QUIC to handle multiplexing, congestion control, and retries.
  • HTTP/3 allows faster client connection initiation, eliminates head-of-line blocking in multiplexed streams, and supports connection migration when a client's IP address changes.
  • HTTP/3 affects connections between clients and the load balancer, not connections between the load balancer and its backends.
  • HTTP/3 connections use the BBR congestion control protocol.

Enabling HTTP/3 on your load balancer can improve web page load times, reduce video rebuffering, and improve throughput on higher latency connections.

The following table specifies the HTTP/3 support for HTTP(S) Load Balancing in each mode.

Load balancer mode HTTP/3 support
Global external HTTP(S) load balancer (Preview)
Global external HTTP(S) load balancer (classic)
Regional external HTTP(S) load balancer (Preview)

Configuring HTTP/3

You can explicitly enable HTTP/3 support for your load balancer by setting quicOverride to ENABLE.

Clients that do not support HTTP/3 or gQUIC do not negotiate an HTTP/3 connection. You do not need to explicitly disable HTTP/3 unless you have identified broken client implementations.

HTTP(S) Load Balancing provides three ways to configure HTTP/3, as shown in the following table.

quicOverride value Behavior
NONE

HTTP/3 and Google QUIC are not advertised to clients.

ENABLE

Support for HTTP/3 and Google QUIC is advertised to clients. HTTP/3 is advertised at a higher priority. Clients that support both protocols should prefer HTTP/3 over Google QUIC.

Note: TLS 0-RTT (also known as TLS Early Data) is implicitly supported when Google QUIC is negotiated by the client, but it is not currently supported when HTTP/3 is used.

DISABLE Explicitly disables advertising HTTP/3 and Google QUIC to clients.

To explicitly enable (or disable) HTTP/3, follow these steps.

Console: HTTPS

  1. In the Google Cloud Console, go to the Load balancing page.

    Go to Load balancing

  2. Select the load balancer that you want to edit.

  3. Click Frontend configuration.

  4. Select the frontend IP address and port that you want to edit. To edit HTTP/3 configurations, the IP address and port must be HTTPS (port 443).

Enable HTTP/3

  1. Select the QUIC negotiation drop-down.
  2. To explicitly enable HTTP/3 for this frontend, select Enabled.
  3. If you have multiple frontend rules representing IPv4 and IPv6, make sure to enable HTTP/3 on each rule.

Disable HTTP/3

  1. Select the QUIC negotiation drop-down.
  2. To explicitly disable HTTP/3 for this frontend, select Disabled.
  3. If you have multiple frontend rules representing IPv4 and IPv6, make sure to disable HTTP/3 for each rule.

gcloud: HTTPS

Before you run this command, you must create an SSL certificate resource for each certificate.

    gcloud compute target-https-proxies create HTTPS_PROXY_NAME \
        --global \
        --quic-override=QUIC_SETTING

Replace QUIC_SETTING with one of the following:

  • NONE (default): allows Google to control when QUIC is negotiated

    Currently, when you select NONE, QUIC is disabled. By selecting this option, you are allowing Google to automatically enable QUIC negotiations and HTTP/3 in the future for this load balancer. In the Cloud Console, this option is called Automatic (Default).

  • ENABLE: advertises HTTP/3 and Google QUIC to clients

  • DISABLE: does not advertise HTTP/3 or Google QUIC to clients

API: HTTPS

POST https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpsProxies/TARGET_PROXY_NAME/setQuicOverride

{
  "quicOverride": QUIC_SETTING
}

Replace QUIC_SETTING with one of the following:

  • NONE (default): allows Google to control when QUIC is negotiated

    Currently, when you select NONE, QUIC is disabled. By selecting this option, you are allowing Google to automatically enable QUIC negotiations and HTTP/3 in the future for this load balancer. In the Cloud Console, this option is called Automatic (Default).

  • ENABLE: advertises HTTP/3 and Google QUIC to clients

  • DISABLE: does not advertise HTTP/3 or Google QUIC to clients

How HTTP/3 is negotiated

When HTTP/3 is enabled, the load balancer advertises this support to clients, allowing clients that support HTTP/3 to attempt to establish HTTP/3 connections with the HTTPS load balancer.

  • Properly implemented clients always fall back to HTTPS or HTTP/2 when they cannot establish a QUIC connection.
  • Clients that support HTTP/3 use their cached prior knowledge of HTTP/3 support to save unnecessary round-trips in the future.
  • Because of this fallback, enabling or disabling QUIC in the load balancer does not disrupt the load balancer's ability to connect to clients.

Support is advertised in the Alt-Svc HTTP response header. When HTTP/3 is configured as ENABLE on an external HTTP(S) load balancer's targetHttpsProxy resource, responses from the load balancer include the following alt-svc header value:

alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"

If HTTP/3 has been explicitly set to DISABLE, responses do not include an alt-svc response header.
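One way to check what a frontend advertises is to look for the alt-svc header in a response. The snippet below parses a captured header line; in practice you would pipe in live output such as `curl -sI https://EXAMPLE_DOMAIN` (a placeholder domain):

```shell
# A captured alt-svc line stands in for live curl output here.
header='alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000'

# An h3= entry in the value means IETF HTTP/3 is being advertised.
if echo "$header" | grep -q 'h3='; then
  echo "HTTP/3 advertised"
fi
```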

When you have QUIC enabled on your HTTPS load balancer, some circumstances can cause your client to fall back to HTTPS or HTTP/2 instead of negotiating QUIC. These include the following:

  • When a client supports versions of HTTP/3 that are not compatible with the HTTP/3 versions supported by the HTTPS load balancer.
  • When the load balancer detects that UDP traffic is blocked or rate-limited in a way that would prevent HTTP/3 (QUIC) from working.
  • The client does not support HTTP/3 at all, and thus does not attempt to negotiate an HTTP/3 connection.

When a connection falls back to HTTPS or HTTP/2, we do not count this as a failure of the load balancer.

Before you enable HTTP/3, ensure that the previously described behaviors are acceptable for your workloads.

URL maps

URL maps define matching patterns for URL-based routing of requests to the appropriate backend services. A default service is defined to handle any requests that do not match a specified host rule or path matching rule. In some situations, such as the multi-region load balancing example, you might not define any URL rules and rely only on the default service. For request routing, the URL map allows you to divide your traffic by examining the URL components to send requests to different sets of backends.

URL maps used with global external HTTP(S) load balancers and regional external HTTP(S) load balancers support several advanced traffic management features such as header-based traffic steering, weight-based traffic splitting, and request mirroring. For more information, see the following:

  • Traffic management overview for the global external HTTP(S) load balancer.
  • Traffic management overview for the regional external HTTP(S) load balancer.

The following table specifies the type of URL map required by HTTP(S) Load Balancing in each mode.

Load balancer mode URL map type
Global external HTTP(S) load balancer (Preview) Global
Global external HTTP(S) load balancer (classic) Global (with only a subset of the features supported)
Regional external HTTP(S) load balancer (Preview) Regional

SSL certificates

Transport Layer Security (TLS) is an encryption protocol used in SSL certificates to protect network communications.

Google Cloud uses SSL certificates to provide privacy and security from a client to a load balancer. If you are using HTTPS-based load balancing, you must install one or more SSL certificates on the target HTTPS proxy.

The following table specifies the scope for the SSL certificate required by HTTP(S) Load Balancing in each mode:

Load balancer mode SSL certificate scope
Global external HTTP(S) load balancer (Preview) Global
Global external HTTP(S) load balancer (classic) Global
Regional external HTTP(S) load balancer (Preview) Regional

For more information about SSL certificates, see the following:

  • SSL certificates overview
  • Serving multiple SSL certificates
  • Self-managed certificates
  • Google-managed certificates
  • SSL certificates quotas on the load balancing quotas page
  • Encryption from the load balancer to the backends
  • Encryption in Transit in Google Cloud white paper

Certificate Manager

If you are using the global external HTTP(S) load balancer (classic) on the Premium Network Service Tier, you can use the Preview release of Certificate Manager to provision and manage your SSL certificates across multiple instances of the global external HTTP(S) load balancer (classic) at scale. For more information, see the Certificate Manager overview.

SSL policies

SSL policies give you the ability to control the features of SSL that your HTTPS load balancer negotiates with HTTPS clients.

By default, HTTPS Load Balancing uses a set of SSL features that provides good security and broad compatibility. Some applications require more control over which SSL versions and ciphers are used for their HTTPS or SSL connections. You can define SSL policies that control the features of SSL that your load balancer negotiates and associate an SSL policy with your target HTTPS proxy.

The following table specifies the SSL policy support for load balancers in each mode.

Load balancer mode SSL policies supported
Global external HTTP(S) load balancer (Preview)
Global external HTTP(S) load balancer (classic)
Regional external HTTP(S) load balancer (Preview)

Backend services and buckets

Backend services provide configuration information to the load balancer. Load balancers use the information in a backend service to direct incoming traffic to one or more attached backends. For an example showing how to set up a load balancer with a backend service and a Compute Engine backend, see Setting up an external HTTP(S) load balancer with a Compute Engine backend.

Backend buckets direct incoming traffic to Cloud Storage buckets. For an example showing how to add a bucket to an external HTTP(S) load balancer, see Setting up a load balancer with backend buckets.

The following table specifies the backend features supported by HTTP(S) Load Balancing in each mode.


Load balancer mode
Supported backends on a backend service Supports backend buckets Supports Google Cloud Armor Supports Cloud CDN Supports IAP
Instance groups Zonal NEGs Internet NEGs Serverless NEGs Hybrid NEGs Private Service Connect NEGs
Global external HTTP(S) load balancer (Preview)
Global external HTTP(S) load balancer (classic)
when using Premium Tier

Regional external HTTP(S) load balancer (Preview)

For more information, see the following:

  • Backend services overview
  • What is Cloud Storage?

Backends and VPC networks

The restrictions on where backends can be located depend on the type of load balancer.

  • For the global external HTTP(S) load balancer and the global external HTTP(S) load balancer (classic), all backends must be located in the same project but can be located in different VPC networks. The different VPC networks do not need to be connected using VPC Network Peering because GFE proxy systems communicate directly with backends in their respective VPC networks.
  • For the regional external HTTP(S) load balancer, all backends must be located in the same VPC network and region.

Protocol to the backends

When you configure a backend service for the load balancer, you set the protocol that the backend service uses to communicate with the backends. You can choose HTTP, HTTPS, or HTTP/2. The load balancer uses only the protocol that you specify. The load balancer does not fall back to one of the other protocols if it is unable to negotiate a connection to the backend with the specified protocol.

If you use HTTP/2, you must use TLS. HTTP/2 without encryption is not supported.

For the complete list of protocols supported, see Load balancing features: Protocols from the load balancer to the backends.

WebSocket support

Google Cloud HTTP(S)-based load balancers have native support for the WebSocket protocol when you use HTTP or HTTPS as the protocol to the backend. The load balancer does not need any configuration to proxy WebSocket connections.

The WebSocket protocol provides a full-duplex communication channel between clients and servers. An HTTP(S) request initiates the channel. For detailed information about the protocol, see RFC 6455.

When the load balancer recognizes a WebSocket Upgrade request from an HTTP(S) client followed by a successful Upgrade response from the backend instance, the load balancer proxies bidirectional traffic for the duration of the current connection. If the backend instance does not return a successful Upgrade response, the load balancer closes the connection.

The timeout for a WebSocket connection depends on the configurable backend service timeout of the load balancer, which is 30 seconds by default. This timeout applies to WebSocket connections regardless of whether they are in use.
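If your WebSocket connections are long-lived, the default 30-second backend service timeout is usually too short. A sketch of raising it; the service name and value are placeholders:

```shell
# Hypothetical backend service; --timeout is in seconds (here, 24 hours).
gcloud compute backend-services update my-backend-service \
    --global \
    --timeout=86400
```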

Session affinity for WebSockets works the same as for any other request. For information, see Session affinity.

Using gRPC with your Google Cloud applications

gRPC is an open-source framework for remote procedure calls. It is based on the HTTP/2 standard. Use cases for gRPC include the following:

  • Low-latency, highly scalable, distributed systems
  • Developing mobile clients that communicate with a cloud server
  • Designing new protocols that must be accurate, efficient, and language-independent
  • Layered design to enable extension, authentication, and logging

To use gRPC with your Google Cloud applications, you must proxy requests end-to-end over HTTP/2. To do this:

  1. Configure an HTTPS load balancer.
  2. Enable HTTP/2 as the protocol from the load balancer to the backends.
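Step 2 above can be sketched with gcloud; the backend service name is hypothetical:

```shell
# Switch the load balancer-to-backend protocol to HTTP/2 for gRPC.
gcloud compute backend-services update my-backend-service \
    --global \
    --protocol=HTTP2
```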

The load balancer negotiates HTTP/2 with clients as part of the SSL handshake by using the ALPN TLS extension.

The load balancer may still negotiate HTTPS with some clients or accept insecure HTTP requests on a load balancer that is configured to use HTTP/2 between the load balancer and the backend instances. Those HTTP or HTTPS requests are transformed by the load balancer to proxy the requests over HTTP/2 to the backend instances.

You must enable TLS on your backends. For more information, see Encryption from the load balancer to the backends.

If you want to configure an external HTTP(S) load balancer by using HTTP/2 with Google Kubernetes Engine Ingress or by using gRPC and HTTP/2 with Ingress, see HTTP/2 for load balancing with Ingress.

For information about troubleshooting problems with HTTP/2, see Troubleshooting problems with HTTP/2 to the backends.

For information about HTTP/2 limitations, see HTTP/2 limitations.

Health checks

Each backend service specifies a health check for backend instances.

For the health check probes, you must create an ingress allow firewall rule that allows traffic to reach your backend instances. The firewall rule must allow the following source ranges:

  • 130.211.0.0/22
  • 35.191.0.0/16
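A sketch of such a rule; the network name, target tag, and port are placeholders:

```shell
# Allow Google health check probes to reach tagged backend VMs.
gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=lb-backend \
    --rules=tcp:80
```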

Although it is not required, it is a best practice to use a health check whose protocol matches the protocol of the backend service. For example, an HTTP/2 health check most accurately tests HTTP/2 connectivity to backends. For the list of supported health check protocols, see Load balancing features.

The following table specifies the scope of health check supported by HTTP(S) Load Balancing in each mode.

Load balancer mode Health check type
Global external HTTP(S) load balancer (Preview) Global
Global external HTTP(S) load balancer (classic) Global
Regional external HTTP(S) load balancer (Preview) Regional

For more information about health checks, see the following:

  • Health checks overview
  • Creating health checks

Firewall rules

HTTP(S) Load Balancing requires the following firewall rules:

  • For the global external HTTP(S) load balancers, an ingress allow rule to permit traffic from Google Front Ends (GFEs) to reach your backends.
    For the regional external HTTP(S) load balancer, an ingress allow rule to permit traffic from the proxy-only subnet to reach your backends.
  • An ingress allow rule to allow traffic from the health check probes ranges. For more information about health check probes and why it's necessary to allow traffic from them, see Probe IP ranges and firewall rules.

Firewall rules are implemented at the VM instance level, not on GFE proxies. You cannot use Google Cloud firewall rules to prevent traffic from reaching the load balancer. For the global external HTTP(S) load balancer and the global external HTTP(S) load balancer (classic), you can use Google Cloud Armor to achieve this.

The ports for these firewall rules must be configured as follows:

  • Allow traffic to the destination port for each backend service's health check.

  • For instance group backends: Determine the ports to be configured by the mapping between the backend service's named port and the port numbers associated with that named port on each instance group. The port numbers can vary among instance groups assigned to the same backend service.

  • For GCE_VM_IP_PORT NEG backends: Allow traffic to the port numbers of the endpoints.
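The named-port mapping for instance group backends can be modeled as follows (a sketch; the group names and port numbers are made up for illustration — real mappings come from each instance group's configuration):

```python
# Hypothetical named-port mappings for two instance groups behind the
# same backend service. The service's named port is "http".
named_ports = {
    "ig-us-east1": {"http": 8080, "metrics": 9090},
    "ig-europe-west1": {"http": 80},
}

def firewall_ports(service_named_port: str) -> set:
    """Collect every port number the firewall must allow for this service."""
    return {ports[service_named_port]
            for ports in named_ports.values()
            if service_named_port in ports}

# The same named port can resolve to different numbers per instance group,
# so the firewall must allow all of them.
print(firewall_ports("http"))
```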

The following table summarizes the required source IP address ranges for the firewall rules:

Load balancer mode Health check source ranges Request source ranges
Global external HTTP(S) load balancer (Preview)
  • 130.211.0.0/22
  • 35.191.0.0/16
The source of GFE traffic depends on the backend type:
  • Instance groups, GCE_VM_IP_PORT NEGs, and NON_GCP_PRIVATE_IP_PORT NEGs:
    • 130.211.0.0/22
    • 35.191.0.0/16
  • INTERNET_FQDN_PORT and INTERNET_IP_PORT NEGs:
    • 34.96.0.0/20
    • 34.127.192.0/18
  • SERVERLESS NEGs and backend buckets: Google's production network handles packet routing.
Global external HTTP(S) load balancer (classic)
  • 130.211.0.0/22
  • 35.191.0.0/16
The source of GFE traffic depends on the backend type:
  • Instance groups, zonal NEGs (GCE_VM_IP_PORT), and hybrid connectivity NEGs (NON_GCP_PRIVATE_IP_PORT):
    • 130.211.0.0/22
    • 35.191.0.0/16
  • Internet NEGs (INTERNET_FQDN_PORT and INTERNET_IP_PORT):
    • 34.96.0.0/20
    • 34.127.192.0/18
  • SERVERLESS NEGs and backend buckets: Google's production network handles packet routing.
Regional external HTTP(S) load balancer (Preview)
  • 130.211.0.0/22
  • 35.191.0.0/16
The proxy-only subnet that you configure.

How connections work in HTTP(S) Load Balancing

Global external HTTP(S) load balancer connections

The global external HTTP(S) load balancers are implemented by many proxies called Google Front Ends (GFEs). There isn't just a single proxy. In Premium Tier, the same global external IP address is advertised from various points of presence, and client requests are directed to the client's nearest GFE.

Depending on where your clients are, multiple GFEs can initiate HTTP(S) connections to your backends. Packets sent from GFEs have source IP addresses from the same ranges used by health check probers: 35.191.0.0/16 and 130.211.0.0/22.

Depending on the backend service configuration, the protocol used by each GFE to connect to your backends can be HTTP, HTTPS, or HTTP/2. For HTTP or HTTPS connections, the HTTP version used is HTTP 1.1.

HTTP keepalive is enabled by default, as specified in the HTTP 1.1 specification. HTTP keepalives attempt to efficiently use the same TCP session; however, there's no guarantee. The GFE uses a keepalive timeout of 600 seconds, and you cannot configure this. You can, however, configure the request/response timeout by setting the backend service timeout. Though closely related, an HTTP keepalive and a TCP idle timeout are not the same thing. For more information, see timeouts and retries.

The numbers of HTTP connections and TCP sessions vary depending on the number of GFEs connecting, the number of clients connecting to the GFEs, the protocol to the backends, and where backends are deployed.

For more information, see How HTTP(S) Load Balancing works in the solutions guide: Application Capacity Optimizations with Global Load Balancing.

Regional external HTTP(S) load balancer connections

The regional external HTTP(S) load balancer is a managed service implemented on the Envoy proxy. The regional external HTTP(S) load balancer uses a shared subnet called a proxy-only subnet to provision a set of IP addresses that Google uses to run Envoy proxies on your behalf. The --purpose flag for this proxy-only subnet is set to REGIONAL_MANAGED_PROXY. All regional external HTTP(S) load balancers in a particular network and region share this subnet.

Clients use the load balancer's IP address and port to connect to the load balancer. Client requests are directed to the proxy-only subnet in the same region as the client. The load balancer terminates client requests and then opens new connections from the proxy-only subnet to your backends. Therefore, packets sent from the load balancer have source IP addresses from the proxy-only subnet.

Depending on the backend service configuration, the protocol used by Envoy proxies to connect to your backends can be HTTP, HTTPS, or HTTP/2. If HTTP or HTTPS, the HTTP version is HTTP 1.1. HTTP keepalive is enabled by default, as specified in the HTTP 1.1 specification. The Envoy proxy uses a keepalive timeout of 600 seconds, and you cannot configure this. You can, however, configure the request/response timeout by setting the backend service timeout. For more information, see timeouts and retries.

Client communications with the load balancer

  • Clients can communicate with the load balancer by using the HTTP 1.1 or HTTP/2 protocol.
  • When HTTPS is used, modern clients default to HTTP/2. This is controlled on the client, not on the HTTPS load balancer.
  • You cannot disable HTTP/2 by making a configuration change on the load balancer. However, you can configure some clients to use HTTP 1.1 instead of HTTP/2. For example, with curl, use the --http1.1 parameter.
  • HTTP(S) Load Balancing supports the HTTP/1.1 100 Continue response.

For the complete list of protocols supported by HTTP(S) Load Balancing forwarding rules in each mode, see Load balancer features.

Source IP addresses for client packets

The source IP address for packets, as seen by the backends, is not the Google Cloud external IP address of the load balancer. In other words, there are two TCP connections.

For the global external HTTP(S) load balancers:

  • Connection 1, from original client to the load balancer (GFE):

    • Source IP address: the original client (or external IP address if the client is behind NAT or a forward proxy).
    • Destination IP address: your load balancer's IP address.
  • Connection 2, from the load balancer (GFE) to the backend VM or endpoint:

    • Source IP address: an IP address in one of the ranges specified in Firewall rules.

    • Destination IP address: the internal IP address of the backend VM or container in the VPC network.

For the regional external HTTP(S) load balancers:

  • Connection 1, from original client to the load balancer (proxy-only subnet):

    • Source IP address: the original client (or external IP address if the client is behind NAT or a forward proxy).
    • Destination IP address: your load balancer's IP address.
  • Connection 2, from the load balancer (proxy-only subnet) to the backend VM or endpoint:

    • Source IP address: an IP address in the proxy-only subnet that is shared among all the Envoy-based load balancers deployed in the same region and network as the load balancer.

    • Destination IP address: the internal IP address of the backend VM or container in the VPC network.

Return path

For the global external HTTP(S) load balancers, Google Cloud uses special routes not defined in your VPC network for health checks. For more information, see Load balancer return paths.

For regional external HTTP(S) load balancers, Google Cloud uses open-source Envoy proxies to terminate client requests to the load balancer. The load balancer terminates the TCP session and opens a new TCP session from the region's proxy-only subnet to your backend. Routes defined within your VPC network facilitate communication from Envoy proxies to your backends and from your backends to the Envoy proxies.

Open ports

This section applies only to the global external HTTP(S) load balancers, which are implemented using GFEs.

GFEs have several open ports to support other Google services that run on the same architecture. To see a list of some of the ports likely to be open on GFEs, see Forwarding rule: Port specifications. There might be other open ports for other Google services running on GFEs.

Running a port scan on the IP address of a GFE-based load balancer is not useful from an auditing perspective for the following reasons:

  • A port scan (for example, with nmap) generally expects no response packet or a TCP RST packet when performing TCP SYN probing. GFEs will send SYN-ACK packets in response to SYN probes for a variety of ports if your load balancer uses a Premium Tier IP address. However, GFEs only send packets to your backends in response to packets sent to your load balancer's IP address and the destination port configured on its forwarding rule. Packets sent to different load balancer IP addresses, or to your load balancer's IP address on a port not configured in your forwarding rule, do not result in packets being sent to your load balancer's backends. GFEs implement security features such as Google Cloud Armor. Even without a Google Cloud Armor configuration, Google infrastructure and GFEs provide defense-in-depth for DDoS attacks and SYN floods.

  • Packets sent to the IP address of your load balancer could be answered by any GFE in Google's fleet; however, scanning a load balancer IP address and destination port combination only interrogates a single GFE per TCP connection. The IP address of your load balancer is not assigned to a single device or system. Thus, scanning the IP address of a GFE-based load balancer does not scan all the GFEs in Google's fleet.

With that in mind, the following are some more effective ways to audit the security of your backend instances:

  • A security auditor should inspect the forwarding rules configuration for the load balancer's configuration. The forwarding rules define the destination port for which your load balancer accepts packets and forwards them to the backends. For GFE-based load balancers, each external forwarding rule can only reference a single destination TCP port. For a load balancer using TCP port 443, UDP port 443 is used when the connection is upgraded to QUIC (HTTP/3).

  • A security auditor should inspect the firewall rule configuration applicable to backend VMs. The firewall rules that you set block traffic from the GFEs to the backend VMs, but do not block incoming traffic to the GFEs. For best practices, see the firewall rules section.

TLS termination

The following table summarizes how TLS termination is handled by external HTTP(S) load balancers in each mode.

Load balancer mode TLS termination
Global external HTTP(S) load balancer (Preview) TLS is terminated on a GFE, which can be anywhere in the world.
Global external HTTP(S) load balancer (classic) TLS is terminated on a GFE, which could be anywhere in the world.
Regional external HTTP(S) load balancer (Preview) TLS is terminated on Envoy proxies located in a proxy-only subnet in a region chosen by the user. Use this load balancer mode if you need geographic control over the region where TLS is terminated.

Timeouts and retries

  • A configurable HTTP backend service timeout, which represents the amount of time the load balancer waits for your backend to return a complete HTTP response. The default value for the backend service timeout is 30 seconds. The full range of timeout values allowed is 1-2,147,483,647 seconds.

    For example, if the value of the backend service timeout is the default value of 30 seconds, the backends have 30 seconds to respond to requests. The load balancer retries the HTTP GET request once if the backend closes the connection or times out before sending response headers to the load balancer. If the backend sends response headers or if the request sent to the backend is not an HTTP GET request, the load balancer does not retry. If the backend does not respond at all, the load balancer returns an HTTP 5xx response to the client. For these load balancers, change the timeout value if you want to allow more or less time for the backends to respond to requests.

    Consider increasing this timeout under any of these circumstances:

    • You expect a backend to take longer to return HTTP responses.
    • The connection is upgraded to a WebSocket.

    The backend service timeout you set is a best-effort goal. It does not guarantee that underlying TCP connections will stay open for the duration of that timeout.

    For more information, see Backend service settings.

    The backend service timeout is not an HTTP idle (keepalive) timeout. It is possible that input and output (IO) from the backend is blocked due to a slow client (a browser with a slow connection, for example). This wait time isn't counted against the backend service timeout.

  • An HTTP keepalive timeout, whose value is fixed at 10 minutes (600 seconds). This value is not configurable by modifying your backend service. You must configure the web server software used by your backends so that its keepalive timeout is longer than 600 seconds to prevent connections from being closed prematurely by the backend. This timeout does not apply to WebSockets. This table illustrates the changes necessary to modify keepalive timeouts for common web server software:
    Web server software Parameter Default setting Recommended setting
    Apache KeepAliveTimeout KeepAliveTimeout 5 KeepAliveTimeout 620
    nginx keepalive_timeout keepalive_timeout 75s; keepalive_timeout 620s;
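A quick way to sanity-check backend web server settings against the fixed 600-second load balancer keepalive is to parse the configured values (a sketch; the directive strings mirror the table above):

```python
import re

LB_KEEPALIVE_S = 600  # fixed on the load balancer, not configurable

# Example directives, matching the recommended settings in the table.
configs = {
    "apache": "KeepAliveTimeout 620",
    "nginx": "keepalive_timeout 620s;",
}

def keepalive_seconds(directive: str) -> int:
    """Extract the numeric timeout value from a config directive."""
    match = re.search(r"(\d+)", directive)
    return int(match.group(1))

for server, directive in configs.items():
    ok = keepalive_seconds(directive) > LB_KEEPALIVE_S
    print(f"{server}: {'ok' if ok else 'too short, risks premature closes'}")
```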

The WebSocket protocol is supported with GKE Ingress.
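The single-retry rule described earlier in this section can be expressed as a predicate (a sketch of the documented behavior, not the actual GFE implementation):

```python
def should_retry(method: str, response_headers_received: bool) -> bool:
    """The load balancer retries once, and only for GET requests whose
    backend closed the connection or timed out before sending headers."""
    return method == "GET" and not response_headers_received

print(should_retry("GET", False))   # backend died before headers: retried once
print(should_retry("POST", False))  # non-GET: never retried
print(should_retry("GET", True))    # headers already sent: never retried
```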

Illegal request and response handling

The load balancer blocks both client requests and backend responses from reaching the backend or the client, respectively, for a number of reasons. Some reasons are strictly for HTTP/1.1 compliance and others are to avoid unexpected data being passed to or from the backends. None of the checks can be disabled.

The load balancer blocks the following for HTTP/1.1 compliance:

  • It cannot parse the first line of the request.
  • A header is missing the : delimiter.
  • Headers or the first line contain invalid characters.
  • The content length is not a valid number, or there are multiple content length headers.
  • There are multiple transfer encoding keys, or there are unrecognized transfer encoding values.
  • There's a non-chunked body and no content length specified.
  • Body chunks are unparseable. This is the only case where some data reaches the backend. The load balancer closes the connections to the client and backend when it receives an unparseable chunk.
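A few of these compliance checks can be illustrated with a simple validator (a sketch covering only the request-line, header-delimiter, and content-length rules; the load balancer's real checks are more extensive):

```python
def violations(request_text: str) -> list:
    """Return a list of HTTP/1.1 compliance problems found in a raw request."""
    problems = []
    lines = request_text.split("\r\n")
    first = lines[0].split(" ")
    if len(first) != 3 or not first[2].startswith("HTTP/"):
        problems.append("unparseable request line")
    content_lengths = []
    for header in lines[1:]:
        if not header:
            break  # blank line ends the header section
        if ":" not in header:
            problems.append(f"missing ':' delimiter: {header!r}")
            continue
        name, _, value = header.partition(":")
        if name.strip().lower() == "content-length":
            content_lengths.append(value.strip())
    if len(content_lengths) > 1:
        problems.append("multiple Content-Length headers")
    elif content_lengths and not content_lengths[0].isdigit():
        problems.append("invalid Content-Length")
    return problems

# A header without the ':' delimiter is flagged.
print(violations("GET / HTTP/1.1\r\nHost example.com\r\n\r\n"))
```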

The load balancer blocks the request if any of the following are true:

  • The total size of request headers and the request URL exceeds the limit for the maximum request header size for external HTTP(S) Load Balancing.
  • The request method does not allow a body, but the request has one.
  • The request contains an Upgrade header, and the Upgrade header is not used to enable WebSocket connections.
  • The HTTP version is unknown.

The load balancer blocks the backend's response if any of the following are true:

  • The total size of response headers exceeds the limit for maximum response header size for external HTTP(S) Load Balancing.
  • The HTTP version is unknown.

Traffic distribution

When you add a backend instance group or NEG to a backend service, you specify a balancing mode, which defines a method for measuring backend load and a target capacity. External HTTP(S) Load Balancing supports two balancing modes:

  • RATE, for instance groups or NEGs, is the target maximum number of requests (queries) per second (RPS, QPS). The target maximum RPS/QPS can be exceeded if all backends are at or above capacity.

  • UTILIZATION is the backend utilization of VMs in an instance group.
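Under the RATE mode, a backend's fullness can be thought of as its current request rate divided by its target maximum (a sketch; the backend names and numbers are illustrative, not an actual API):

```python
def fullness(current_rps: float, max_rate: float) -> float:
    """Fraction of target capacity in use under the RATE balancing mode."""
    return current_rps / max_rate

# Two hypothetical NEGs with the same RATE target of 100 RPS.
backends = {"neg-a": fullness(80, 100), "neg-b": fullness(30, 100)}

# The least-full backend is the more attractive target for new requests.
least_full = min(backends, key=backends.get)
print(least_full)
```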

How traffic is distributed among backends depends on the mode of the load balancer.

Global external HTTP(S) load balancer

Before a Google Front End (GFE) sends requests to backend instances, the GFE estimates which backend instances have capacity to receive requests. This capacity estimation is made proactively, not at the same time as requests are arriving. The GFEs receive periodic information about the available capacity and distribute incoming requests accordingly.

What capacity means depends in part on the balancing mode. For the RATE mode, it is relatively simple: a GFE determines exactly how many requests it can assign per second. UTILIZATION-based load balancing is more complex: the load balancer checks the instances' current utilization and then estimates a query load that each instance can handle. This estimate changes over time as instance utilization and traffic patterns change.

Both factors (the capacity estimation and the proactive assignment) influence the distribution among instances. Thus, Cloud Load Balancing behaves differently from a simple round-robin load balancer that spreads requests exactly 50:50 between two instances. Instead, Google Cloud load balancing attempts to optimize the backend instance selection for each request.

For the global external HTTP(S) load balancer (classic), the balancing mode is used to select the most favorable backend (instance group or NEG). Traffic is then distributed in a round-robin fashion among instances or endpoints within the backend.

For the global external HTTP(S) load balancer, load balancing is two-tiered. The balancing mode determines the weighting or fraction of traffic that should be sent to each backend (instance group or NEG). Then, the load balancing policy (LocalityLbPolicy) determines how traffic is distributed to instances or endpoints within the group. For more information, see the Load balancing locality policy (backend service API documentation).

Regional external HTTP(S) load balancer

For regional external HTTP(S) load balancers, traffic distribution is based on the load balancing mode and the load balancing locality policy.

The balancing mode determines the weight and fraction of traffic that should be sent to each group (instance group or NEG). The load balancing locality policy (LocalityLbPolicy) determines how backends within the group are load balanced.

When a backend service receives traffic, it first directs traffic to a backend (instance group or NEG) according to the backend's balancing mode. After a backend is selected, traffic is then distributed among instances or endpoints in that backend group according to the load balancing locality policy.

For more information, see the following:

  • Balancing modes
  • Load balancing locality policy (regional backend service API documentation)

How requests are distributed

Whether traffic is distributed regionally or globally depends on which load balancer mode and network service tier is in use.

For Premium Tier:

  • Google advertises your load balancer's IP address from all points of presence, worldwide. Each load balancer IP address is global anycast.
  • If you configure a backend service with backends in multiple regions, Google Front Ends (GFEs) attempt to direct requests to healthy backend instance groups or NEGs in the region closest to the user. Details of the process are documented on this page.

For Standard Tier:

  • Google advertises your load balancer's IP address from points of presence associated with the forwarding rule's region. The load balancer uses a regional external IP address.

  • You can configure backends in the same region as the forwarding rule. The process documented here still applies, but the load balancer only directs requests to healthy backends in that one region.

Request distribution process:

The balancing mode and selection of target capacity define backend fullness from the perspective of each zonal GCE_VM_IP_PORT NEG, zonal instance group, or zone of a regional instance group. Distribution within a zone is done with consistent hashing for the global external HTTP(S) load balancer (classic) and is configurable using the load balancing locality policy for the global external HTTP(S) load balancer and the regional external HTTP(S) load balancer.
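Consistent hashing within a zone can be sketched with rendezvous (highest-random-weight) hashing, one simple form of the idea (an illustration only, not the GFE's actual algorithm):

```python
import hashlib

def pick_endpoint(session_key: str, endpoints: list) -> str:
    """Rendezvous hashing: each (key, endpoint) pair gets a score and the
    highest score wins, so most keys keep their endpoint when the endpoint
    set changes slightly."""
    def score(endpoint: str) -> int:
        digest = hashlib.sha256(f"{session_key}|{endpoint}".encode()).hexdigest()
        return int(digest, 16)
    return max(endpoints, key=score)

eps = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]
choice = pick_endpoint("client-42", eps)
# The same key maps to the same endpoint on every call.
print(choice == pick_endpoint("client-42", eps))
```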

GFE-based global external HTTP(S) load balancers use the following process to distribute incoming requests:

  1. The forwarding rule's external IP address is advertised by edge routers at the borders of Google's network. Each advertisement lists a next hop to a Layer 3/4 load balancing system (Maglev) as close to the user as possible.
  2. Maglev systems inspect the source IP address of the incoming packet. They direct the incoming request to the Maglev systems that Google's geo-IP systems determine are as close to the user as possible.
  3. The Maglev systems route traffic to a first-layer Google Front End (GFE). The first-layer GFE terminates TLS if required and then routes traffic to second-layer GFEs according to this process:
    1. The URL map selects a backend service.
    2. If a backend service uses instance group or GCE_VM_IP_PORT NEG backends, the first-layer GFEs prefer second-layer GFEs that are located in or near the region that contains the instance group or NEG.
    3. For backend buckets and backend services with hybrid NEGs, serverless NEGs, and internet NEGs, the first-layer GFEs choose second-layer GFEs in a subset of regions such that the round trip time between the two GFEs is minimized.

      Second-layer GFE preference is not a guarantee, and it can dynamically change based on Google's network conditions and maintenance.

      Second-layer GFEs are aware of health check status and actual backend capacity usage.

  4. The second-layer GFE directs requests to backends in zones within its region.
  5. For Premium Tier, sometimes second-layer GFEs send requests to backends in zones of different regions. This behavior is called spillover.
  6. Spillover is governed by two principles:

    • Spillover is possible when all backends known to a second-layer GFE are at capacity or are unhealthy.
    • The second-layer GFE has information for healthy, available backends in zones of a different region.

    The second-layer GFEs are typically configured to serve a subset of backend locations.

    Spillover behavior does not exhaust all possible Google Cloud zones. If you need to direct traffic away from backends in a particular zone or in an entire region, you must set the capacity scaler to zero. Configuring backends to fail health checks does not guarantee that the second-layer GFE spills over to backends in zones of a different region.

  7. When distributing requests to backends, GFEs operate at a zonal level.

    With a low number of requests per second, second-layer GFEs sometimes prefer one zone in a region over the other zones. This preference is normal and expected. The distribution among zones in the region doesn't become even until the load balancer receives more requests per second.
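The spillover rules above can be sketched as a selection function (illustrative only; real GFE behavior also depends on network conditions and the subset of locations each GFE serves):

```python
def select_backends(local, remote):
    """local/remote: lists of (name, healthy, at_capacity) tuples.
    Spill over to another region only when every local backend is
    unhealthy or at capacity and a healthy remote backend exists."""
    usable = lambda b: b[1] and not b[2]
    local_ok = [b[0] for b in local if usable(b)]
    if local_ok:
        return local_ok
    return [b[0] for b in remote if usable(b)]

# One local zone at capacity, one unhealthy: traffic spills over.
local = [("us-east1-b", True, True), ("us-east1-c", False, False)]
remote = [("us-west1-a", True, False)]
print(select_backends(local, remote))
```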

Session affinity

Session affinity provides a best-effort attempt to send requests from a particular client to the same backend for as long as the backend is healthy and has capacity, according to the configured balancing mode.

When you use session affinity, we recommend the RATE balancing mode rather than UTILIZATION. Session affinity works best if you set the balancing mode to requests per second (RPS).

HTTP(S) Load Balancing offers the following types of session affinity:

  • NONE. Session affinity is not set for the load balancer.
  • Client IP affinity
  • Generated cookie affinity
  • Header field affinity
  • HTTP cookie affinity

The following table summarizes the supported session affinity options for each mode of HTTP(S) Load Balancing:

Load balancer mode Session affinity options
None Client IP Generated cookie Header field HTTP cookie
Global external HTTP(S) load balancer (Preview)
Global external HTTP(S) load balancer (classic)
Regional external HTTP(S) load balancer (Preview)
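Generated cookie affinity can be modeled roughly as follows: on a client's first request the load balancer picks a backend and issues a cookie, and later requests carrying that cookie return to the same backend while it stays healthy (an illustrative model only; for simplicity the cookie value here is just the backend name, which is not how real affinity cookies are encoded):

```python
import random
from typing import Optional, Tuple

def route(cookie: Optional[str], healthy: set) -> Tuple[str, str]:
    """Return (backend, cookie). Reuse the cookie's backend if healthy,
    otherwise pick a fresh healthy backend and issue a new cookie."""
    if cookie is not None and cookie in healthy:
        return cookie, cookie
    choice = random.choice(sorted(healthy))
    return choice, choice

healthy = {"vm-1", "vm-2", "vm-3"}
backend, cookie = route(None, healthy)   # first request: cookie issued
backend2, _ = route(cookie, healthy)     # follow-up sticks to the same VM
print(backend == backend2)
```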

HTTP/2 support

HTTP/2 max concurrent streams

The HTTP/2 SETTINGS_MAX_CONCURRENT_STREAMS setting describes the maximum number of streams that an endpoint accepts, initiated by the peer. The value advertised by an HTTP/2 client to a Google Cloud load balancer is effectively meaningless, because the load balancer doesn't initiate streams to the client.

In cases where the load balancer uses HTTP/2 to communicate with a server that is running on a VM, the load balancer respects the SETTINGS_MAX_CONCURRENT_STREAMS value advertised by the server. If a value of zero is advertised, the load balancer can't forward requests to the server, and this might result in errors.
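The effect of SETTINGS_MAX_CONCURRENT_STREAMS can be modeled as a budget the sender must respect (a sketch; real HTTP/2 stacks track this per connection and re-read the value from SETTINGS frames):

```python
class StreamBudget:
    """Tracks how many concurrent streams the peer allows us to open."""
    def __init__(self, max_concurrent_streams: int):
        self.limit = max_concurrent_streams
        self.open = 0

    def try_open(self) -> bool:
        if self.open >= self.limit:
            return False  # with an advertised limit of 0, nothing can be sent
        self.open += 1
        return True

    def close(self) -> None:
        self.open -= 1

budget = StreamBudget(max_concurrent_streams=0)
print(budget.try_open())  # a server advertising 0 receives no requests
```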

HTTP/2 limitations

  • HTTP/2 between the load balancer and the instance can require significantly more TCP connections to the instance than HTTP(S). Connection pooling, an optimization that reduces the number of these connections with HTTP(S), is not currently available with HTTP/2.
  • HTTP/2 between the load balancer and the backend does not support running the WebSocket protocol over a single stream of an HTTP/2 connection (RFC 8441).
  • HTTP/2 between the load balancer and the backend does not support server push.
  • gRPC error rate and request volume aren't visible in the Google Cloud API or the Cloud Console. If the gRPC endpoint returns an error, the load balancer logs and monitoring data report the 'OK 200' HTTP response code.

Limitations

  • HTTPS load balancers do not support client certificate-based authentication, also known as mutual TLS authentication.
  • HTTPS load balancers do not send a close_notify closure alert when terminating SSL connections. That is, the load balancer closes the TCP connection instead of performing an SSL shutdown.
  • HTTPS load balancers support only lowercase characters in domains in a common name (CN) attribute or a subject alternative name (SAN) attribute of the certificate. Certificates with uppercase characters in domains are returned only when set as the primary certificate in the target proxy.
  • HTTPS load balancers do not use the Server Name Indication (SNI) extension when connecting to the backend, except for load balancers with internet NEG backends. For more details, see Encryption from the load balancer to the backends.
  • When Google Cloud Armor is used with the global external HTTP(S) load balancer with the advanced traffic management capability, certain interactions between Google Cloud Armor rules and EXTERNAL_MANAGED backend services might result in HTTP 408 timeouts, especially when requests include POST bodies.

What's next

  • To learn how to deploy a global external HTTP(S) load balancer, see Setting up an external HTTP(S) load balancer with a Compute Engine backend.
  • To learn how to deploy a regional external HTTP(S) load balancer, see Setting up a regional external HTTP(S) load balancer with a Compute Engine backend.
  • To learn how to automate your external HTTP(S) Load Balancing setup with Terraform, see Terraform module examples for external HTTP(S) load balancers.
  • To find the locations of Google PoPs, see GFE locations.
  • To learn about capacity management, see the Capacity Management with Load Balancing tutorial and Application Capacity Optimizations with Global Load Balancing.
  • To learn about serving websites, see Serving websites.
  • To learn how to use Certificate Manager to provision and manage SSL certificates, see the Certificate Manager overview.


Source: https://cloud.google.com/load-balancing/docs/https
