MuleSoft CloudHub — Some Key Notes
Disclaimer
This article points out some of the internal workings of MuleSoft CloudHub, based on some recent experiments. There is official documentation on CloudHub networking, but I believe it needs more explanation. For example:
Why is a single Mule application deployed to a single dedicated worker?
So, the purpose of this article is to explore such questions.
Some Concepts
CloudHub is nothing but a cloud of worker nodes. Now you may ask: what the heck is a Worker?
The name itself suggests that it's something that works, or computes, for us. In other words, it's a machine with the Mule runtime (Mule ESB) installed that can run a Mule application.
Now, a worker has some properties, like any machine. Note that it's nothing but an AWS EC2 instance, so all the properties of an EC2 instance apply to a worker.
The basic properties are CPU, RAM, and Disk Storage.
You can make a comparison with an EC2 instance type's properties.
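To make that comparison concrete, here is a minimal sketch of a worker as a plain data structure. The vCore sizes are the ones CloudHub documents at the time of writing; the Worker class itself is purely illustrative, not an actual CloudHub API:

from dataclasses import dataclass

# Documented CloudHub worker sizes, in vCores.
CLOUDHUB_WORKER_SIZES = (0.1, 0.2, 1, 2, 4, 8, 16)

@dataclass
class Worker:
    """Illustrative model only: a worker is an EC2 instance running one app."""
    app_name: str   # exactly one Mule application per worker
    vcores: float   # one of CLOUDHUB_WORKER_SIZES
    # CPU, RAM, and disk storage come from the underlying EC2 instance type.

print(Worker(app_name="ag-mock-1", vcores=0.1))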
CloudHub Deployment
So, what happens when you deploy a Mule application to CloudHub? The following things happen.
Define a unique name for the application
Let's say I have the below applications deployed in my CloudHub account.
Now, I try to deploy an application named test.
The app name did not pass the "Application name is available" check. So, where the heck is this application named test?
This is where the CNAME fundamentals come in.
A Canonical Name record is a type of resource record in the Domain Name System that maps one domain name to another.
That doesn't make much sense to me, let alone to a newbie. Let's try to simplify it.
So, a CNAME is something that points to an A record, and an A record is something that points to an actual IP address. This understanding is enough for this article.
Let's check the below diagram.
Now it makes some sense. CloudHub provides a load balancing service.
The A record of the LB has the format region.cloudhub.io. In my case it’s us-e2.cloudhub.io.
Every Mule application deployed in CloudHub is assigned a CNAME record.
So, if I have two applications, ag-mock-1 & ag-mock-2, deployed in CloudHub, they will have two corresponding CNAMEs registered, as shown in the below diagram.
We can also verify that ag-mock-1.us-e2.cloudhub.io and ag-mock-2.us-e2.cloudhub.io are CNAME entries.
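We can check this from any machine with nothing but Python's standard library. A minimal sketch, assuming the two apps above are still deployed; depending on your resolver, the canonical name may be the LB's A record or a further alias:

import socket

# gethostbyname_ex returns (canonical_name, alias_list, ip_list). If the
# queried name shows up as an alias, it is a CNAME pointing at the canonical
# name; per the diagram above, that should be us-e2.cloudhub.io (the LB).
canonical, aliases, ips = socket.gethostbyname_ex("ag-mock-1.us-e2.cloudhub.io")
print("canonical:", canonical)
print("aliases:  ", aliases)
print("ips:      ", ips)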
Cool!
You all know that the worker pool doesn't belong to your organization. The workers are just EC2 instances running somewhere in the AWS cloud; only the MuleSoft cloud team has knowledge of them.
If anyone across the globe has deployed an application named test, it has also got a CNAME registered. CNAMEs must be unique under a particular A record's domain, and that's why you get the validation failure while trying to create the test application.
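That uniqueness rule hints at a rough, unofficial way to guess whether a name is taken in your region: see whether its DNS entry already resolves. This is only a heuristic (DNS caching can mislead, and it is not how CloudHub actually validates names):

import socket

def app_name_taken(app_name: str, region: str = "us-e2") -> bool:
    # If <app>.<region>.cloudhub.io resolves, some org already owns that name.
    try:
        socket.gethostbyname(f"{app_name}.{region}.cloudhub.io")
        return True
    except socket.gaierror:
        return False

print(app_name_taken("test"))       # likely True, per the failed check above
print(app_name_taken("ag-mock-1"))  # True while our demo app is deployed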
Spawning a Worker
CloudHub runs only one Mule application per worker. This means that every time you deploy a Mule application, a new EC2 instance with the Mule runtime is created (presumably) and the application is deployed to that runtime.
Every EC2 instance has its own public IP address, right? So our workers will have one too. The DNS name of a worker has the format:
mule-worker-<<APP_NAME>>.region.cloudhub.io
Our ag-mock-1 application will have the below DNS record.
mule-worker-ag-mock-1.us-e2.cloudhub.io
We can dig into this also. Let's check.
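Here is a minimal Python version of that check, assuming the ag-mock-1 worker is still running:

import socket

# The worker DNS name should resolve straight to the EC2 instance's public
# IP address, bypassing the shared LB.
print(socket.gethostbyname("mule-worker-ag-mock-1.us-e2.cloudhub.io"))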
See! We have finally found the underlying EC2 instance of the worker. Below is the final diagram.
Port Mapping
When you deploy Mule applications to a standalone Mule ESB, there is no limit on the number of REST APIs that can be deployed. Just develop each application on a different HTTP port and they will work perfectly fine.
But the scenario is a bit different in the case of CloudHub. A single Mule application is deployed to a single worker, and the application must use the HTTP ports below: 8081 for HTTP and 8082 for HTTPS (conventionally referenced via the ${http.port} and ${https.port} placeholders in the application's configuration).
Below is the mapping between the LB and the actual worker (EC2) instance:
LB port 80 (HTTP) → worker port 8081
LB port 443 (HTTPS) → worker port 8082
You can access the application through both URLs: the LB URL (definitely) and the worker URL.
Theoretically, you can configure a different port in the HTTP Listener configuration of your Mule application, but then the application will be accessible neither from the LB nor directly from the worker (EC2 instance).
I deployed the ag-mock-8085 application on port 8085. The deployment succeeded, but when I tried to invoke it on port 8085, the request timed out after 300 seconds.
So, it seems only ports 8081 (HTTP) and 8082 (HTTPS) are open for TCP connections to the workers (EC2 instances).
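We can reproduce that observation with a quick TCP probe from any machine. A minimal sketch, assuming the ag-mock-8085 worker is still running; note that a blocked port typically times out instead of refusing the connection:

import socket

worker_host = "mule-worker-ag-mock-8085.us-e2.cloudhub.io"

# Probe the two documented ports plus the custom one from the experiment.
# Expectation: 8081 and 8082 accept TCP connections, 8085 times out.
for port in (8081, 8082, 8085):
    try:
        with socket.create_connection((worker_host, port), timeout=5):
            print(f"port {port}: open")
    except OSError as exc:
        print(f"port {port}: not reachable ({exc})")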
Conclusion
From the above discussion, it's clear that a single worker is meant to run only one Mule application. Say an organization has contracted CloudHub for 16 vCores; if they opt for assigning 0.1 vCore per application, they can have 160 applications deployed, and eventually 160 workers (EC2 instances, probably t2.nano) created in AWS.
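A quick sanity check on that math (round() guards against floating-point noise in the division):

total_vcores = 16

# Workers created for a few per-application vCore allocations.
for per_app in (0.1, 0.2, 1):
    workers = round(total_vcores / per_app)
    print(f"{per_app} vCore/app -> {workers} apps -> {workers} EC2 instances")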
Runtime Fabric (RTF) has the same concept: a single application is deployed to a single worker, which is a Kubernetes pod (instead of an EC2 instance) running the Mule runtime. The plus point of RTF is that we have access to the workers (pods), and theoretically we can scale much further than on CloudHub.
That's it for today. Happy learning.